The new WP AI Client in WordPress 7.0 is provider-agnostic. This means you get to choose what AI providers you want to use, while plugins just add functionality via abilities.
The upcoming release of WordPress 7.0 ships with the new WP AI Client. You will be able to call any AI model (Anthropic, Google, OpenAI, others) through a single API, in PHP directly and from JavaScript using the Abilities API.
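For a sense of what provider-agnostic means in practice, here's a minimal sketch of the pattern in TypeScript. The class and method names are invented for illustration; they are not the actual WP AI Client API. The point is the shape: one call site, swappable providers behind it.

```typescript
// Illustrative sketch of a provider-agnostic AI client.
// None of these names are the real WP AI Client API.

interface Provider {
  name: string;
  generateText(prompt: string): string;
}

// A stub provider; a real one would call Anthropic, Google, OpenAI, etc.
class EchoProvider implements Provider {
  constructor(public name: string) {}
  generateText(prompt: string): string {
    return `[${this.name}] ${prompt}`;
  }
}

class AiClient {
  private providers = new Map<string, Provider>();

  register(provider: Provider): void {
    this.providers.set(provider.name, provider);
  }

  // Callers ask for a capability, not a vendor; the client resolves
  // whichever registered provider the user has configured.
  generateText(prompt: string, preferred?: string): string {
    let provider = preferred ? this.providers.get(preferred) : undefined;
    if (!provider) provider = this.providers.values().next().value;
    if (!provider) throw new Error("No AI provider configured");
    return provider.generateText(prompt);
  }
}

const client = new AiClient();
client.register(new EchoProvider("anthropic"));
client.register(new EchoProvider("openai"));
console.log(client.generateText("Summarize this post", "openai"));
// prints "[openai] Summarize this post"
```

Plugins code against the client once; which vendor actually answers is the user's choice.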
The fastest-growing users of our products are agents. And agents don’t need interfaces.
Agents do not need buttons, visual hierarchy, hover states, or spinners. They need APIs, structured data, and predictable endpoints (and to know about them).
What matters underneath is the primitive: block schemas, data models, structured content. The formats they produce (markdown, HTML, JSON) are addressable, diffable, and writable by both humans and machines. Most interface chrome is just a convenience layer on top.
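As a rough illustration of that primitive, here's a toy block tree and serializer, loosely modeled on WordPress's block format. Real parsed blocks carry more fields and the namespace handling is simplified here; this only shows why structured content is addressable and diffable.

```typescript
// A simplified take on the block model: a page is a tree of typed
// blocks, writable by both humans and machines.

interface Block {
  blockName: string;               // e.g. "paragraph"
  attrs: Record<string, unknown>;  // structured attributes
  content?: string;                // inner markup, if any
  innerBlocks: Block[];
}

// Serialize a block tree into an HTML-comment-delimited format,
// in the spirit of how WordPress stores blocks.
function serialize(block: Block): string {
  const attrs = Object.keys(block.attrs).length
    ? " " + JSON.stringify(block.attrs)
    : "";
  const inner = [block.content ?? "", ...block.innerBlocks.map(serialize)]
    .filter(Boolean)
    .join("\n");
  return `<!-- wp:${block.blockName}${attrs} -->\n${inner}\n<!-- /wp:${block.blockName} -->`;
}

const page: Block = {
  blockName: "group",
  attrs: {},
  innerBlocks: [
    { blockName: "paragraph", attrs: {}, content: "<p>Hello</p>", innerBlocks: [] },
  ],
};
console.log(serialize(page));
```

Because the output is plain text with a predictable grammar, an agent can generate it, a human can read it, and version control can diff it.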
You can already see this shift in code editing.
A year ago, writing software meant living inside a code editor, manually creating and editing files. Today, tools like Codex, Claude Code, and Telex have moved much of that primary workflow into a chat interface.
The code editor still exists, but its role has changed: you’re often reviewing, fine-tuning, and steering while the editor becomes secondary.
The same shift is happening in website building: an agent does not need a block inserter or drag-and-drop chrome. It needs a clear schema for what a page can be and a stable way to write to it (cue the block model for WordPress).
The visual editor becomes the place where a human reviews what the agent built and fine-tunes from there. Which means we’re now designing for two audiences.
The primary audience is increasingly the agent: give it clean APIs, predictable structures, and fast execution paths.
The secondary audience is the human: they need controls to edit, review, refine, and redirect, but most of all they need confidence. Did the agent do something that supports their goal? Did it meet their standards? How do we communicate what the agent did and why? How do we help humans and agents stay aligned? And when they’re not aligned, how do we make it easy to redirect?
We’ve always treated the human interface as the product: the shape of buttons, the depth of shadows, the flow from connecting accounts to purchasing to completing the job. The API was often an afterthought for integrations or technical requirements.
Human interfaces are not going away, but they are becoming less central. I’ve spent my career building interfaces, but now the most important work I do is what happens beneath them.
API is the new UI.
I woke up to five pull requests this morning. Passing tests, clean commits, generated PR descriptions.
I didn’t write a single line of code (but I did review).

This is what product management looks like now. I describe the problem. We shape it into a PRD. Then the agent translates that into structured issues, sequences the work, and starts building.
My job is less “writing specs” and more thinking clearly, making good calls, and steering the system while it moves.
I’ve been building my own take on the Ralph Wiggum Loop: a system of Claude Code skills that plans, structures, and executes features mostly autonomously.
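The core of that kind of loop is almost embarrassingly small. This sketch is my paraphrase of the idea, not the actual implementation; `runAgent` stands in for a real Claude Code invocation.

```typescript
// The loop in miniature: feed the agent one task at a time until the
// list is empty, requeueing failures for a bounded number of retries.

type Result = "done" | "retry";

function ralphLoop(
  tasks: string[],
  runAgent: (task: string) => Result,
  maxAttempts = 3
): string[] {
  const completed: string[] = [];
  const queue = tasks.map((t) => ({ task: t, attempts: 0 }));
  while (queue.length > 0) {
    const item = queue.shift()!;
    item.attempts++;
    if (runAgent(item.task) === "done") {
      completed.push(item.task);
    } else if (item.attempts < maxAttempts) {
      queue.push(item); // put it back and move on; try again later
    }
  }
  return completed;
}
```

The interesting work lives in what `runAgent` does with each task: planning, structuring, and executing against the codebase. The loop itself is just scaffolding.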
I’ve found myself choosing audio more often, like in last week’s experiment, interviews.now. It’s nice, especially when I’m walking, driving, or just stepping away from a screen.
I wanted to explore adding audio to my blog in a way that stays simple and doesn’t add any friction to how I publish—at all.
So yea, you can listen to my posts, read in my voice.
I built an AI voice agent that interviews people about WordPress. Three minutes, their honest take, with structured insights delivered on the other side.
The way I see it, conversations are variables—context, intent, memory, tone. Set them right and agents handle structured research while you focus on judgment calls.
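To make the metaphor concrete, here's one way those variables could be pinned down as a config object. All field names here are invented for illustration, not taken from any real interview tool.

```typescript
// Illustrative sketch: an interview conversation as a set of variables
// the agent is parameterized by.

interface ConversationConfig {
  context: string;        // what the interview is about
  intent: string;         // what insight we want out of it
  memory: string[];       // facts carried across turns
  tone: "casual" | "formal";
  maxMinutes: number;
}

// Render the variables into a system prompt for the voice agent.
function toSystemPrompt(c: ConversationConfig): string {
  return [
    `Topic: ${c.context}`,
    `Goal: ${c.intent}`,
    `Tone: ${c.tone}`,
    `Time limit: ${c.maxMinutes} minutes`,
    ...c.memory.map((m) => `Remember: ${m}`),
  ].join("\n");
}

const interview: ConversationConfig = {
  context: "How you use WordPress day to day",
  intent: "Surface pain points in the editing experience",
  memory: ["Interviewee is a plugin developer"],
  tone: "casual",
  maxMinutes: 3,
};
console.log(toSystemPrompt(interview));
```

Set the variables once and every interview runs the same structured way, which is exactly what makes this kind of research repeatable.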
Humans are irreplaceable for empathy, judgment, shared experience. But for conversations that are structured and repeatable? Agents are in.
This is part of my WordPress Explorations series, where I’m exploring new, far-out ideas about WordPress.
I’ve been thinking about WordPress differently lately. Taking a step back from the accumulated complexity and simply imagining how it could exist.
WordPress has grown, in both capability and in the many different shapes it can take. That evolution enables millions of people to publish online, but it also adds layers of complexity that have built up over time.
So in this new series of posts, I’m exploring what else could be. What if we could rebuild this ship with the knowledge of everything we’ve learned along the way? What would we do differently today?
I want to question every assumption. Challenge requirements carved in stone. Strip concepts down to their core and ask: what’s actually needed? Because that’s often where innovation happens—when you stop accepting constraints as given.
None of these ideas are proposals.
This is exploration for exploration’s sake. Some might work, most won’t. That’s the point.
What if Matt had joined Google instead of starting WordPress?
The internet might have felt a little less like ours.
In his recent post, Matt mentioned “How the internet might have turned out differently if I had taken that job, as my mom wanted me to (because they offered free food).”
Funny line. But also wild to think about. It’s incredible how much of the web traces back to one person deciding to build something open.
I think about that sometimes. The tiny choices that ripple out for decades. Who’s choosing open now? And what will that mean twenty years from here?
I built agents.foo to share the Claude Code subagents I actually reach for every day.
After weeks of building subagents in Claude Code, I’ve settled into a handful that earned their place in my workflow. Not the flashy demos you see everywhere, but mostly boring, super-useful agents that make me a little faster.
My favorite is the Linear product manager. Perfect for those moments when you discover a bug but don’t want to lose momentum on what you’re already working on. It creates thoughtful, well-structured issues right away.
The agent knows Linear’s data model inside and out. It explores my codebase to find relevant components, references exact file paths, and includes technical context that actually helps. When I prompt “make an issue that the login button is broken on mobile,” right in Claude Code, it creates a structured issue with proper steps to reproduce, expected behavior, and direct links to the affected components.
It’s like having a technical triage person sitting next to you. One that writes great issues.
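For illustration, the shape of such an issue might look something like this. The fields and the rendering are my invention, not Linear's actual data model.

```typescript
// Hypothetical shape of the structured issue the subagent produces.

interface StructuredIssue {
  title: string;
  stepsToReproduce: string[];
  expectedBehavior: string;
  affectedFiles: string[]; // exact paths found by exploring the codebase
}

// Render the structured fields into a readable issue body.
function renderIssue(i: StructuredIssue): string {
  return [
    `## ${i.title}`,
    `### Steps to reproduce`,
    ...i.stepsToReproduce.map((s, n) => `${n + 1}. ${s}`),
    `### Expected behavior`,
    i.expectedBehavior,
    `### Affected components`,
    ...i.affectedFiles.map((f) => `- \`${f}\``),
  ].join("\n");
}

console.log(
  renderIssue({
    title: "Login button broken on mobile",
    stepsToReproduce: ["Open the site on a phone", "Tap the login button"],
    expectedBehavior: "The login form opens",
    affectedFiles: ["src/components/LoginButton.tsx"],
  })
);
```

The value isn't the template; it's that the agent fills every field from real codebase context instead of leaving a one-line bug report.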

What’s surprising to me is that we’ve already distilled agentic programming down to simple markdown files. No complex frameworks or orchestration layers. Just clear instructions about what you want the agent to know and how you want it to help.
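For flavor, a subagent file is roughly a markdown document with a small frontmatter header and plain-language instructions. This one is a made-up example, loosely following the documented format, not one of my actual agents:

```markdown
---
name: linear-pm
description: Turns bug reports into well-structured Linear issues.
---

You are a product manager who writes precise bug reports.
When asked to file an issue:

1. Explore the codebase to find the affected components.
2. Reference exact file paths in the issue body.
3. Include steps to reproduce and expected behavior.
4. Keep the title short and action-oriented.
```

That's the whole thing: a name, a description so Claude Code knows when to delegate, and instructions in plain English.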
This feels like how AI should work. Instead of wrestling with general-purpose models every time, we create specialized helpers that understand our specific tools, projects, and patterns.
The agents on agents.foo represent what coding with AI actually looks like. Not revolutionary breakthroughs, but reliable helpers that handle the repetitive parts of building software.
I’m sharing them because we’re each still figuring out how AI fits into real workflows. These work for me. Maybe they’ll spark ideas for your own daily drivers. Have you created any subagents lately that you’ve found interesting?
You know what I mean by vibe coding? That approach where you throw prompts at an AI, get code back, and ship it without caring about what’s actually under the hood. It’s the “move fast and ship” mentality taken to an extreme.
That’s not me. I build with Claude Code every day, but I care about what’s being built.
The difference is partnership versus just getting code generated. AI is great for removing friction in development, but only when you guide it properly. I don’t need to understand every technical implementation detail, but I absolutely need to understand how to prompt these systems well and how to tell good output from garbage.
This is coding at a different level of abstraction. Way less debugging, more strategic thinking.
Vibe coding relies on blind trust. You ask for a feature, get some code, and assume it works because it runs. The approach I follow (the same one the best engineers have always used) involves asking the right questions, steering toward better solutions, and reviewing code.
Here’s what I mean: I have Claude Code and Copilot work as complementary tools. I scope ideas and technical details with Claude, then either log them as issues for later or tackle them right away. And when it’s time for a pull request, I have the other AI pair programmer handle the review.
They function like tireless pair programmers and I’m the technical lead kicking off the effort and making the calls.

The key insight is guidance versus automation. Vibe coders stop at “it works,” but I’m concerned with whether the solution is correct and sustainable. Will this scale? Is the architecture sound? Does it handle edge cases?
The details matter because understanding structure and correctness keeps you from shipping nonsense. Maybe that changes as AI-augmented engineering gets better, but today, having judgment about the code is what separates functional software from potential disasters.
Even these new AI “vibe-coding” platforms like Lovable, GitHub Spark, and v0 offer a choice: you can vibe with the no-code interface, or you can peek under the hood.
We’re curious beings, so be curious. Watch how your prompts and choices translate into code, learning along the way.
That’s the approach I advocate. Use these tools, absolutely, but stay engaged and curious about how they work. Ask questions. Understand patterns. Let each project teach you something new and you’ve already won.
The best developers are better at prompting, better product leaders, better strategic thinkers, and better at tracking what changed in their codebase. The good news: those skills transfer whether you’re working with AI or humans.
So yea, I don’t vibe code.
Nobody tells you that AI-augmented coding makes implementation skills abundant and strategic thinking priceless.
Last weekend, I experimented with a simple chatbot for my blog. I got it working in thirty minutes. Even just a few months ago, this was a nights-and-weekends endeavor, at best.
Well, mostly working. RAG pipeline, embeddings, and the whole thing connected and responding to questions about my posts—but with the occasional hallucination I hadn’t fixed yet.
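The retrieval half of that pipeline is conceptually tiny. Here's a sketch, with fake embedding vectors standing in for a real embedding model and an in-memory array standing in for a vector database.

```typescript
// Minimal sketch of the retrieval step in a blog RAG pipeline:
// embed each post, embed the question, return the closest posts.

type Embedded = { title: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank posts by similarity to the question vector, best first.
function retrieve(question: number[], posts: Embedded[], k: number): string[] {
  return posts
    .map((p) => ({ title: p.title, score: cosineSimilarity(question, p.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((p) => p.title);
}

const posts: Embedded[] = [
  { title: "API is the new UI", vector: [1, 0, 0] },
  { title: "I don't vibe code", vector: [0, 1, 0] },
];
console.log(retrieve([0.9, 0.1, 0], posts, 1)); // closest post's title
```

The retrieved posts then get stuffed into the model's context to ground its answers; the hallucinations I mentioned are what happen when retrieval misses and the model improvises anyway.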
Something felt different. Where was the part where I spent three hours debugging why the vector database wouldn’t talk to the frontend? Where was the documentation rabbit hole? The Stack Overflow shame spiral?
None of that happened.
It got me thinking about where we are today as engineers compared to a few months ago. A shift at the core of our practice is happening in real time.
I used to think good developers were fast typists who memorized syntax. Now they’re people who know what to build and recognize when it’s built right. The muscle memory that took years to develop? Less relevant every month.
I have a friend who writes everything from scratch and is proud to understand every line. Meanwhile, others ship releases in the time it takes him to set up authentication. In a way, we’re both right. My friend understands his codebase perfectly. I understand my users’ problems perfectly. Different skills for a different game.
I find I’m doing much more creative work now as well. When you’re not burned out from wrestling with dependencies and import statements, you have brain space for the interesting questions. What should this actually do? How should it feel to use? What problem are we solving? Does it matter?
Even yesterday, a newer developer asked if AI would make them obsolete. Wrong question. The right question is: what kind of developer do you want to be? The kind who can implement anything, or the kind who knows what’s worth implementing?
Both matter. But only one is getting scarcer.
I used to kill ideas before they had a real chance.
Every spark of curiosity met the same mental gate: Is this worth my time? What’s the opportunity cost? Can I even do this? The auditor in my head would run the numbers and most ideas would die right there.
Then something shifted. Not because AI models got smarter, but because the marginal cost of curiosity dropped close to zero.
Building a prototype used to mean weeks of coding. Now it’s an afternoon conversation with Claude or v0. I explore weirder ideas. And I take ridiculous swings because ideas cost almost nothing.
The math changed. When exploration is cheap, you stop rationing curiosity. When you can afford to explore bad ideas, you stumble onto good ones you’d never have planned. Instead of hoarding best guesses, splurge on exploration.
For myself, this freedom to explore without commitment is quite liberating. What about you?
Most design work today involves an expensive translation layer. A designer creates a mockup, then a developer interprets it. Details get lost and feedback cycles stretch weeks.
The real problem isn’t lost details—it’s false validation. Static mockups look convincing but hide critical flaws. They don’t show how animations feel, how forms behave with real data, or how layouts break on different screens. Teams make decisions based on approximations, then discover real problems after development is finished.
This often leads to cycles of expensive revisions. What looked perfect in Figma may require significant rework when built. Performance constraints force design compromises. Edge cases expose interaction problems invisible in static mockups.
AI tools are changing this equation, now more than ever.