Cool Demos Don’t Pay the Bills. Iteration Does.
Magic is easy, maintenance is where you earn it.
This week, we’re talking:
Vibe Coding : First Date :: Enterprise Software : Marriage
How to handle non-determinism for enterprise
The dangers of product and sales drift
OpenAI being ordered to hand over 20 million ChatGPT logs and the trouble with “anonymization”
Why immigration crackdowns are playing out as badly as ::checks notes:: we all knew they would
Pour one out for Claude, the Cal Academy of Sciences’ beloved albino alligator (1995-2025)
My Take:
Here’s the thing about AI right now: everyone’s shipping press releases, not software.
Yes, a third of software leaders say they’re using AI to generate apps, and 69% claim it’s boosted productivity. But once you get past the LinkedIn glow, most of that work is happening in silos—tiny pilots scattered across the org chart, none of them talking to each other. Fragmented workflows. Optional governance. And a quiet assumption that agentic AI can be built the same way we built software ten years ago.
But agentic AI isn’t a naming convention—it’s an architectural shift. Without rethinking how systems handle data, security, and iteration, the agents fail on first contact with production.
Enter OutSystems. They’re one of the few platforms attempting a unified, enterprise-grade low-code + AI + “agent workbench” stack, letting teams build apps and agents, manage their lifecycle, enforce DevSecOps, and plug into real data without duct-taping six tools together. It’s a credible starting point for anyone trying to turn pilots into production.
I sat down with Woodson Martin, CEO of OutSystems, to talk about what works, where the cracks still are, and how to build AI-augmented software that doesn’t collapse under its own ambition.
Tom Chavez: We first crossed paths in the Salesforce/Krux era—Cannes Lions, ExactTarget, all that. You were one of the leaders helping Salesforce step into the B2C data world. Then a few years later, you were down at the border doing hands-on work with Mobile Pathways, and my sister and I were lucky enough to support that. You’ve always toggled between hardcore enterprise tech and real-world civic engagement. Not many people manage that duality.
Woodson Martin: I appreciate that. In both business and philanthropy, I believe in getting close to the real problem—seeing it firsthand, talking to people affected by it, understanding what’s actually happening. That’s what drew me to OutSystems. Customers bring us problems they can’t solve with off-the-shelf software. Those problems force you to learn fast and stay humble. I love that.
Tom Chavez: Right now, everyone is experimenting with agents—and a decent chunk of it feels like AI theater. Lots of pilots, few production systems. From your vantage point, where do teams actually get stuck making that jump from “this is cool” to “this is real”?
Vibe coding is a first date. Fun, exciting, full of energy. But enterprise software? That’s marriage. Architecture, clarity, structure, all the unsexy stuff that keeps the relationship alive.
Woodson Martin: Iteration is the make-or-break factor. It takes a lot of trial, error, and recalibration to figure out where agents actually add value, where you still need humans, and where the handoff between them belongs. Most companies aren’t set up to iterate at the necessary speed because their stack is spread across half a dozen disconnected tools—data here, agents there, UX over somewhere else.
When you can manage the lifecycle—data, agents, prompts, and the application layer—together, you discover the valuable use cases much faster. That’s where we see OutSystems making a difference. The platform gives teams a way to experiment coherently instead of duct-taping experiments across tools that don’t talk to each other.
Tom Chavez: Let’s talk about the non-determinism problem, because that’s where enterprise leaders start sweating. Consumers are routinely flummoxed when an LLM gives a different answer to the exact same question they asked the day before. In a regulated industry? That’s chaos. How do you design for that level of unpredictability without neutering the agent?
Woodson Martin: By being explicit about where you want creativity and where you want control.
In heavily regulated sectors, you can’t have an AI improvising the same way it does in a consumer app. So we use what we call agentic workflows. You define the moments where the agent should reason, explore, and learn AND the moments where it must choose from deterministic, pre-approved actions.
Take mortgage origination. There’s a ton of document checking, policy validation, data enrichment—work that’s routine and perfect for agents. But no customer we’ve worked with wants an AI managing the applicant relationship. So they automate the back office and keep humans front and center on decisions that require judgment, empathy, or nuance.
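The pattern Woodson describes can be sketched in a few lines: the model is free to suggest, but in regulated steps only a whitelist of deterministic, pre-approved actions can ever execute. This is a minimal illustration, not OutSystems’ implementation; the action names, thresholds, and escalation rule here are all invented.

```python
from typing import Callable, Dict

# Pre-approved, deterministic actions for a regulated workflow step.
# (Names and the 10x income rule are invented for illustration.)
APPROVED_ACTIONS: Dict[str, Callable[[dict], dict]] = {
    "verify_income_docs": lambda app: {**app, "income_verified": True},
    "check_policy_limits": lambda app: {
        **app, "within_limits": app["loan"] <= 10 * app["income"]
    },
    "escalate_to_human": lambda app: {**app, "needs_human": True},
}

def constrained_step(model_choice: str, application: dict) -> dict:
    """The model may *suggest* any action, but only whitelisted actions
    run; anything off-list deterministically escalates to a human."""
    action = APPROVED_ACTIONS.get(
        model_choice, APPROVED_ACTIONS["escalate_to_human"]
    )
    return action(application)

app = {"loan": 400_000, "income": 50_000}
app = constrained_step("check_policy_limits", app)  # approved: runs as-is
app = constrained_step("improvise_new_rule", app)   # not approved: escalates
print(app)
```

The point is that non-determinism is quarantined: the model’s creativity lives in choosing among options, never in defining what an option does.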
Tom Chavez: The human piece is so underrated. Everyone’s hyped about autonomy, but the real value right now is augmentation. I use AI to prepare for customer conversations but not to replace them. That’s the good stuff.
Which brings me to vibe coding. It’s intoxicating. You describe what you want, the AI spits out a prototype, and you feel like a wizard. But for enterprises, wizardry is rarely the thing that holds up in production. How do you see vibe coding playing out?
Woodson Martin: Vibe coding is an amazing accelerant for ideation. You describe an app, an agent, a workflow—and suddenly you’re looking at something real. That’s transformational for the front end of the development lifecycle.
But beneath the surface? You often get a pile of spaghetti that reflects 17 inconsistent conversations. The prototypes are great; the production path gets messy.
A friend of mine built a neighborhood security app in a weekend using AI tools. He was proud of it—and then immediately realized he’d created a maintenance nightmare. As soon as neighbors wanted changes, the whole thing wobbled.
That’s why vibe coding actually increases the need for hardened platforms. At OutSystems, our AI assistant Mentor lets you design through conversation, but the executable code is generated by a deterministic engine we’ve refined over 24 years. Same input = same output. That’s what enterprises need: magic on top, reliability underneath.
Tom Chavez: Yeah—vibe coding is a first date. Fun, exciting, full of energy. But enterprise software? That’s marriage. Architecture, clarity, structure, all the unsexy stuff that keeps the relationship alive.
You mentioned iteration earlier. At OutSystems, you don’t just celebrate iteration—you’ve made it a product feature. When people talk about AI-augmented development, they love the demos, but the truth is: value lives in the KPIs. So how do you measure whether iteration is actually delivering something meaningful?
Woodson Martin: You measure productivity. But you have to define the target before you start.
If you don’t know what success looks like, you’re just iterating randomly. Once goals are clear, we measure productivity at two layers: the developers building the systems, and the workers using them.
Take Oceaneering. Their technicians fly out to oil rigs to perform safety inspections. Every minute matters—literally. So we ask:
How much more can a technician complete per visit?
How much data capture can the camera and app handle automatically?
How fast can the dev team push improvements based on field feedback?
Because OutSystems tracks the lifecycle end-to-end, we can compare productivity across releases. It gives customers a continuous optimization loop instead of one-off wins.
Tom Chavez: You’ve led product, sales, and now you’re running a company building AI applications at scale. Looking back, what’s the classic go-to-market mistake you’d erase if you had a do-over?
Woodson Martin: When product and sales drift apart.
Sales is living quarter-to-quarter. Product is running a marathon. If leadership isn’t vigilant, the messaging in the field diverges from the actual product roadmap. It’s understandable, but it’s dangerous.
The antidote is alignment—heavy investment in enablement, tight communication loops, teams spending time in each other’s world. Everyone needs to be telling the same story and aiming at the same outcomes.
Tom Chavez: Preach. Any daylight between product and sales, and expectations get mis-set on all sides.
Let’s land this with a lightning round. Ready?
Woodson Martin: Let’s do it.
Tom Chavez: Advice you wish someone had given you early in the AI journey?
Woodson Martin: Don’t believe the hype. Especially not on LinkedIn. Spend more time with customers and less with vendors promising the moon.
Tom Chavez: Favorite team-building ritual?
Woodson Martin: A good old-fashioned offsite. Fire pit, shared meals, real conversation. And for planning? I still swear by Salesforce’s V2MOM framework—vision, values, methods, obstacles, measures. It aligns teams like nothing else.
Tom Chavez: Where do you go when you need to think?
Woodson Martin: Fallen Leaf Lake, just south of Tahoe. Our cabin backs onto the Desolation Wilderness. No cars, no tools, just silence. It resets my brain.
Tom Chavez: And if your younger entrepreneurial self were here, what hard truth would you hand him?
Woodson Martin: Invest in your mentors. Keep those relationships alive. They matter more—and sooner—than you think.
Tom Chavez: Perfect place to end. Woodson, this was a blast. Thanks for joining me.
Woodson Martin: Thank you, Tom.
My Stack:
OpenAI loses fight to keep ChatGPT logs secret in copyright case
OpenAI was just ordered to hand over 20 million “anonymized” ChatGPT logs to the New York Times, and the judge basically said: if your privacy tech is as good as you claim, there shouldn’t be a problem. Silicon Valley loves to swear anonymization is bulletproof—right up until a court asks them to use it. Anyone else having flashbacks to AOL in 2005, when “anonymous” search logs let reporters identify individual users in a weekend? Buckle up, I guess.



