Nixon's Wet Dream
A centralized database of every American. Fully searchable. No oversight.
This week, we’re talking:
After Nixon weaponized federal agencies to spy on his enemies, Congress passed the Privacy Act to make sure it never happened again. The Trump administration has quietly ordered every agency to propose which parts to scrap. 🏛️ 🔓 🇺🇸
Nineteen states have privacy laws this violates. California, Connecticut, Texas. So where are the lawsuits? 🏛️ 📜 👀
The Pentagon argued that Anthropic’s safety guardrails themselves are the national security risk. We’re getting into tricky territory. Hearing March 24. 📄 ⚖️ 💣
OpenAI just killed its “side quests” — Sora, the browser, the hardware device — because Anthropic is eating its lunch on enterprise. 🎯 🔪 🤖
Meta is considering cutting 15,000 people to fund $135 billion in AI spending. The stock went up. 💀 📈 🤖
Three years into the boom, Wall Street can’t decide whether AI will be too disruptive or not disruptive enough. 💸 📉 🫧
AI research conferences are being infected by the very hallucinations those conferences exist to study. 🧪 📄 👻
The Senate voted 51-48 to begin debate on a bill that hands voter roll data to DHS. 🗳️ 📜 🚨
The UK published the first post-consultation AI copyright framework. Only 0.5% of respondents wanted a blanket exception for training. 🇬🇧 📜 🤖
Silicon Valley’s new favorite buzzword is “taste.” The New Yorker calls it taste-washing. 🍷 🧢 🤌
Publicis just dropped The Trade Desk after a failed audit. The stock fell 12%. 📢 🔍 💔
My Take:
A database join is one of those things that feels like magic the first time you see it. Two incomplete tables. One shared key. Hit execute — and suddenly the world snaps into focus. Columns fill in and relationships that were invisible a moment ago come together. I’ve built companies around that moment. It never gets old.
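That moment is easy to sketch. Here's a minimal, hypothetical example (invented tables, invented key) of two incomplete tables snapping together on one shared column:

```python
import sqlite3

# Two made-up tables, each incomplete on its own, sharing one key.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE medical (ssn TEXT PRIMARY KEY, diagnosis TEXT);
    CREATE TABLE tax     (ssn TEXT PRIMARY KEY, income INTEGER);
    INSERT INTO medical VALUES ('123-45-6789', 'hypertension');
    INSERT INTO tax     VALUES ('123-45-6789', 72000);
""")

# One shared key, one JOIN -- and two separate contexts
# collapse into a single row.
row = con.execute("""
    SELECT m.ssn, m.diagnosis, t.income
    FROM medical m
    JOIN tax t ON t.ssn = m.ssn
""").fetchone()
print(row)  # ('123-45-6789', 'hypertension', 72000)
```

Two queries against two databases tell you two partial stories; the join tells you one complete one. That's the whole magic trick, and the whole danger.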
But right now, that same mechanism is being used to dismantle every legal protection standing between the federal government and all of our personal lives.
Remember DOGE? It feels like we’ve lived several lifetimes since January 2025, but a bunch of smart (if barely pubescent) engineers were handed the keys to the federal government and they found something that made no sense to them. Medicaid couldn’t talk to Social Security. The IRS didn’t share data with ICE. The TSA and DHS were running parallel systems that never touched.
It’s one of the most persistent patterns of my professional life. Two disjoint databases. A match key with the power to connect them.
It can feel like malpractice not to join them.
And therein lies the trap.
The people who came before the DOGE bros had the good sense to ask why the join wasn’t run. Big Balls and his crew just assumed nobody smart enough had gotten there first.
They were spectacularly wrong. Those systems are separated on purpose.
The Privacy Act of 1974 wasn’t written by people who didn’t understand data. It was written by people who had just watched Nixon use federal agencies to spy on anti-war protesters, civil rights leaders, and political enemies. Congress looked at that and said: nope, never again. And so they made a decision: data collected for one purpose stays in that context. No all-seeing Eye of Sauron. No master file. Just a bunch of boring, separate agencies doing their boring, separate jobs.
It’s boring on purpose. Boring by design.
Bad actors can do less damage with boring.
Which is exactly why the Trump administration went after it.
They ordered every agency to propose which privacy rules to scrap. The reports on what got the axe are sitting at OMB. Nobody outside the administration has seen them. This week, the Freedom of the Press Foundation filed a FOIA lawsuit to change that.
We don’t presently have the reports. But we know enough to be gravely concerned.
The CIA is accessing domestic law enforcement databases. Medicaid rolls are being shared with deportation officials. You filed your taxes, like law-abiding immigrants on a hopeful path to citizenship do. That data now talks to deportation systems. You checked in at the airport. Your face became a query.
Every single one of these is a join, and every one of them violates the Privacy Act. Your data, leaving the context in which you provided it.
If you think this only applies to undocumented immigrants, you’re making the same mistake those engineers made. You’re assuming the system stops where you want it to.
Once the database exists, it doesn’t care who it was built for.
You told your doctor something in confidence and now it’s sitting next to your tax returns in a system you didn’t know existed. Your voter registration, your donation history, and your tax returns all living in the same queryable system, waiting for an administration that decides your politics are worth a closer look. (We wrote a whole law about this in 1974 because it already happened once.) The OPM hack in 2015 exposed 22 million federal employees from one agency’s data. Imagine it’s not one agency. It’s everything. All of you, everywhere all at once.
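The mechanics of that indifference are worth seeing. In this hypothetical sketch (invented schema, invented names), a master file assembled for one mission answers a completely different question without modification:

```python
import sqlite3

# Hypothetical illustration: once disparate records live in one
# queryable system, ANY attribute becomes a filter. The table,
# columns, and rows here are all invented.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE master (
        ssn TEXT, name TEXT, donated_to TEXT, diagnosis TEXT
    );
    INSERT INTO master VALUES
        ('111', 'A. Citizen', 'Opposition PAC', 'anxiety'),
        ('222', 'B. Citizen', 'Incumbent PAC',  'none');
""")

# A system built "for immigration enforcement" answers political
# questions just as readily -- the schema doesn't know the mission.
rows = con.execute(
    "SELECT name, diagnosis FROM master "
    "WHERE donated_to = 'Opposition PAC'"
).fetchall()
print(rows)  # [('A. Citizen', 'anxiety')]
```

Nothing in the query cares why the table was built. That's the point: the safeguard has to live in whether the table exists at all, not in who promises to use it nicely.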
They started with immigrants because that’s where the current administration thought the political cost was lowest. They miscalculated. Minnesotans’ brave response to ICE raids in their home state reflects the conscience of a nation recoiling from cruelty, chaos, and the pointless deaths of two non-immigrant Americans.
Red states. Blue states. Doesn’t matter. They all have privacy laws that say the same thing: you don’t get to reuse my data without telling me. Nineteen state AGs sued over DOGE’s access to Treasury data, but that was about who got in the door. Nobody’s yet challenged the data mergers themselves on state privacy grounds.
California, Connecticut, Texas. Every one of these states has a statute that this violates. So where’s that case?
I know, I know. I’m the guy who peddles privacy software warning you to be scared about privacy. But I’ve spent twenty-five years building data systems, and I can tell you: even if you’re fine with all of this in principle, even if you trust this administration or any administration with a master database of every American, can we trust them to keep it accurate? Can we trust them to keep it safe? I’ve worked in the bowels of these systems for decades, and I can tell you the answer: no way in hell.
Right now, your data may already be talking behind your back. You don’t know. I don’t know. And if something breaks, we won’t find out until it breaks on us.
That’s the part they don’t teach you about joins. Running them is easy. Living with what happens next is the part nobody plans for.
My Stack:
The Pentagon Says Anthropic’s Safety Guardrails Are the National Security Risk VIA TechCrunch 📄 ⚖️ 💣
The DOD filed its 40-page rebuttal today and the core argument is genuinely new legal territory. The Pentagon is arguing that Anthropic’s safety guardrails themselves constitute the risk — specifically, that the military can’t trust a vendor who might “attempt to disable its technology or preemptively alter the behavior of its model” if the company feels its “corporate red lines are being crossed” during a warfighting operation. The DOJ’s legal move is clever but contested: it argues Anthropic’s refusal to remove guardrails is “conduct, not protected speech,” so the First Amendment doesn’t apply. FIRE filed a brief calling this wrong, arguing that a company’s design choices about what its AI will and won’t do are expressive. Nearly 150 retired federal and state judges, Microsoft, and retired military chiefs have all filed in support of Anthropic. Hearing: March 24. Stay tuned.
OpenAI Kills the “Side Quests” — Pivots Hard to Coding and Enterprise VIA Wall Street Journal 🎯 🔪 🤖
OpenAI is finalizing a major strategy shakeup. Applications chief Fidji Simo told staff in an all-hands last week: “We cannot miss this moment because we are distracted by side quests.” Translation: Sora, Atlas browser, the hardware device, ChatGPT eCommerce — all the shiny 2025 launches are getting deprioritized. The new focus is coding tools and enterprise productivity, with Sam Altman and chief research officer Mark Chen actively deciding what to cut. The trigger is Anthropic. Simo said explicitly that OpenAI must “nail productivity, particularly productivity on the business front” as competition heats up. Current and former employees told WSJ the “do everything” approach made it hard to even articulate OpenAI’s strategy.
Meta Weighing 20% Workforce Cut — 15,000 Jobs — to Fund $135B in AI Spending VIA CNBC 💀 📈 🤖
Meta is reportedly considering cutting up to 20% of its workforce to offset $135 billion in planned AI capital expenditure for 2026. It has already quietly laid off 1,500 people from Reality Labs, redirecting resources from the metaverse to AI R&D. Zuckerberg says 2026 is a major year for building “personal super intelligence.” Meta calls the reporting speculative, but its stock climbed 3% on the news. The pattern — fire humans, invest in AI, stock goes up — is becoming the playbook. Atlassian did the same thing last week. The market is rewarding companies for replacing people with models.
Is the AI Bubble About to Burst? We’re in trouble either way. VIA Bloomberg 💸 📉 🫧
Three years into the boom, Wall Street can’t decide whether AI will be too disruptive or not disruptive enough. Bloomberg’s deep dive argues the capital pouring into AI infrastructure has become a “vast liability” — spending at unprecedented rates on compute, talent, and models, while revenue isn’t scaling proportionally. The piece landed the same day Micron reported blowout earnings (revenue nearly tripled YoY on AI memory demand), yet Micron shares fell on a sell-the-news reaction. The tension is structural: the build-out is massive, applications are proliferating, but the gap between capital deployed and returns captured keeps widening.
The Hallucinating Peer Reviewers VIA GPTZero 🧪 📄 👻
GPTZero scanned 4,841 accepted NeurIPS papers and found 100+ confirmed hallucinated citations across 51 papers — fake authors, nonexistent journals, URLs that lead nowhere, titles that blend real papers into plausible-sounding fictions. Those papers had already beaten a 24.5% acceptance rate. At ICLR 2026: 300 papers under review, 50+ with at least one hallucination, average ratings of 8/10 — meaning many would have been published with fake sources intact. The reviewers — 3 to 5 domain experts per paper — missed nearly all of them. The world’s premier AI research conferences are now being systematically infected by the very AI behavior those conferences exist to study.
Senate Votes to Debate the SAVE America Act VIA Washington Post 🗳️ 📜 🚨
The Senate voted 51-48 Monday to begin debate on the SAVE America Act, Trump’s “number one priority” — a bill requiring documentary proof of citizenship to register and photo ID to vote. The Brennan Center estimates 21 million Americans lack the documents. An amendment would effectively kill mail voting. The bill also hands voter roll data to DHS. It doesn’t have 60 votes to pass, and Senate GOP is split on whether to force a talking filibuster. Sen. Mike Lee publicly suggested ousting Republican colleagues who won’t go along. Schumer called it “a naked attempt to rig our elections.”
UK Government Publishes Its AI Copyright Report — A “Licensing-First” Approach VIA GOV.UK 🇬🇧 📜 🤖
The UK government published its AI copyright report and economic impact assessment — the first national government to do so post-consultation. The approach stops short of a broad text-and-data mining exception. The consultation drew 11,500+ responses — only 3% supported the government’s preferred opt-out approach, and just 0.5% wanted a blanket exception. The House of Lords separately published its own report calling for transparency requirements and a new licensing framework.
Silicon Valley Has Adopted a New Buzzword: “Taste” VIA The New Yorker 🍷 🧢 🤌
A.I. companies need to associate themselves with taste precisely because their tools are not very palatable, much less cool, to anyone outside of Silicon Valley. Many people view A.I. tools as a threat — to their livelihoods, to their futures, to their senses of self. We might call what’s going on now “taste-washing,” an attempt to give anti-humanist technologies a veneer of liberal humanism. The eighteenth-century French philosophers who established a definition of taste considered it an ineffable quality. Voltaire once wrote that “in order to have taste, it is not enough to see and to know what is beautiful in a given work. One must feel beauty and be moved by it.”
Publicis Drops The Trade Desk After Failed Audit VIA Ad Age 📢 🔍 💔
Publicis, the world’s largest ad holding company, said it will no longer recommend The Trade Desk to clients after a third-party audit found the DSP improperly applied fees, auto-enrolled clients into paid tools without authorization, and couldn’t verify that media and data costs were billed at cost. Publicis represents over 10% of TTD’s gross billings. The stock dropped 12% intraday. The broader signal is clear: the buy side is demanding transparency from the programmatic supply chain.