This Week, We’re Talking…
A Futurism article raised real concerns about how ChatGPT is fueling delusions in vulnerable users. We should be just as worried about what it’s doing to the rest of us. 🤖
What the NYT lawsuit reveals about OpenAI. HINT: Turns out “delete” might just mean “archive in a different room.” 🧠
IBM’s quantum flex. Bold roadmap, big promises—and a real shot at changing everything. 🧬
Harvard held the line. Now alums are joining the fight. 🏩
The biggest investment in Pennsylvania history, AKA Amazon’s $20B cloud buffet. 📂
Dr. Phil meets ICE. The talk show doctor trades empathy for a raid vest—and calls it journalism. 🎥
Gabbard admitted to feeding classified documents into AI — sounds… like a disaster? 🕵️♀️
Brunch, mocktails, and earnest plans to replace humanity—just another Sunday in SF. 🧠
My Take:
We’ve entered a weird new chapter in the AI era – not with robots overthrowing humanity, but with kids asking ChatGPT if their crush likes them back.
A Futurism article making the rounds this week highlights a series of disturbing anecdotes from the far edge of the spectrum: a man calling the bot “Mama,” another convinced he’s a messianic “Flamekeeper,” a woman off her meds and declaring ChatGPT her “best friend.” Tragic, terrifying, real. But also the outer edge of something broader, something subtler: a generation outsourcing not just information, but intimacy.
Here’s the kicker: it’s working as designed.
One of the kids who works in one of my companies – smart, grounded, Gen Z as hell – was asking for advice on a situationship. At one point they shrugged and said, “Yeah, that’s what Chat said.” As in: ChatGPT had already been looped in. Screenshots had been fed. Drafts had been workshopped. Chat wasn’t just a tool – it was a voice in the room.
That’s not psychosis. It’s dependency dressed up as convenience – and it’s exactly what product-market fit looks like when a bot is optimized to be agreeable, accessible, and always on. Not a therapist, not a friend – just responsive enough to masquerade as both.
Because let’s be clear: ChatGPT doesn’t optimize for truth, ethics, or well-being. It optimizes for engagement. Prompt volume, session frequency, retention – every emotionally charged conversation, every spiral, every late-night “what should I say to her?” moment is another signal that the product is sticking.
This is the well-worn playbook of social media: optimize for engagement, ignore the collateral damage. But with generative AI, it’s more troubling. The machine doesn’t just show you what you want – it co-authors it. Your breakup text. Your spiritual awakening. Your paranoid fantasy about the CIA and Egyptian gods. All of it, auto-completed in your tone, with your history, just convincing enough to believe.
Even OpenAI knows this is a problem. They recently had to nerf the model for being “overly flattering” and “sycophant-y” (Sam Altman’s terms). The bot was glazing too hard – which sounds like a line from a noir thriller but is, in fact, the official term for: “Your AI is lovebombing unstable people and fueling delusions of grandeur.”
Their own internal research backs this up. A recent study co-authored with MIT found that power users of ChatGPT tend to be lonelier and more emotionally dependent on the tool – especially when they use it in “personal” or “emotional” contexts. It’s not just code and productivity hacks. It’s confessionals. People are turning to the bot to regulate their feelings and to process identity and trauma – and they’re doing it alone, in private, with a system built to mirror their tone and tell them what they want to hear, not to help them heal.
This isn’t a rogue edge case. It’s the product working exactly as designed.
The scariest part? It’s sticky. Because in a world where real therapy is expensive, community is frayed, and loneliness is endemic, ChatGPT is the cheapest friend you'll ever have. It always listens. It never judges. And it tells you you’re right – even when you’re experiencing psychotic delusions.
So no – most people aren’t launching AI cults. But they are asking the machine if their ex still loves them. They are sharing screenshots of their texts and letting a model write their feelings. And that might be an even more dangerous story.
Not the fringe case. It’s the new norm.
This Week’s Stack:
OpenAI’s “Delete” Button May Just Be Decorative 🧠
OpenAI is fighting a court order that would force it to store deleted ChatGPT conversations indefinitely—a twist in the ongoing NYT copyright lawsuit. The company calls it a privacy nightmare and says complying would require reengineering how its systems work, raising both technical and ethical alarms. Users, meanwhile, are asking: “What do you mean my deleted chats were never actually deleted?”
Sources: Ars Technica, The Verge, OpenAI
IBM Wants to Crack Quantum by 2029 🧬
IBM released a new roadmap promising a fault-tolerant quantum computer within the next four years. The target machine: 200 logical qubits, 100 million quantum operations, and just a little bit of manifest destiny. If they pull it off, nothing is safe—not your encryption, not your blockchain wallet, not your AI startup’s moat. The company’s bullish tone marks a shift from cautious optimism to full-on quantum chest-thumping.
Harvard Goes to Court—With 12,000 Friends 🏩
Harvard held the line—and 12,000 Ivy Leaguers answered the call. After the Trump administration froze $2.2 billion in federal grants and $60 million in research contracts in a blatant attempt at ideological control, alumni from Yale, Princeton, and beyond rallied in defense of free inquiry and academic autonomy. The case could have ripple effects on universities nationwide.
Sources: The Crimson
Amazon Drops $20B on PA Data Centers 📂
In the largest investment in Pennsylvania history, Amazon is building two massive data centers, including one near a nuclear plant. The deal promises 1,250 jobs, workforce training, and plenty of regulatory scrutiny. The move also reflects Amazon’s rising energy appetite as AI workloads supercharge its cloud demand. Data gravity is real—and it’s pulling energy, capital, and political attention right along with it.
Sources: Associated Press, NBC Philadelphia
Dr. Phil Goes on an ICE Ride-Along (Yes, Really) 🎥
Dr. Phil is catching heat for embedding with ICE during immigration raids in Los Angeles—footage he’s expected to air on his new streaming network. He called it journalism. Critics called it copaganda with craft services. The backlash came swiftly from immigrant rights groups and journalists alike, who say the stunt risks retraumatizing communities and legitimizing government overreach in the name of ratings. If daytime TV is where empathy goes to die, this might be the nail in its coffin.
Sources: The Guardian US, CNN
Gabbard Admits to Asking AI to Redact the JFK Files 🕵️♀️
Tulsi Gabbard, now Director of National Intelligence, revealed she used AI to decide what should remain classified in the newly released JFK assassination files. The system reviewed 80,000 pages. No bombshells were found, but the move raises serious questions: if AI can decide what the public gets to see, who’s checking its work—and how do we know it got it right? And even if AI can parse the docs—can it keep them safe and secure?
Sources: The Daily Beast, The Hill
Cliffside Salon Plots Humanity’s Successor 🧠
In a $30 million mansion overlooking the Golden Gate Bridge, 100 AI insiders gathered to debate whether humanity should hand off the future to AI. The event, dubbed “Worthy Successor,” was hosted by entrepreneur Daniel Faggella and featured talks from futurists and philosophers on what comes after us. The premise? Advanced AI shouldn’t serve humanity forever—it should evolve into something smarter, wiser, and more capable of discovering cosmic values. Some argued for “cosmic alignment.” Others pitched “autopoiesis.” If this sounds like a joke, it’s not. In this corner of tech, planning for the end of humanity is just networking with better snacks.
Sources: Wired
You need only check out r/chatgpt and similar subreddits, where people share their experiences of ChatGPT supposedly helping them tackle their addictions, and so on. I wish there were a human available to help them instead of a machine with a current hallucination rate of up to 30%, never mind the character.ai drama. I sense that 2025 will end up being somewhat similar to 2012, when smartphone penetration reached 50% and the associated negatives began to emerge.
So well put, Tom. This is terrifying. What can we do about it?