This is part of my WordPress Explorations series, where I’m exploring new, far-out ideas about WordPress.
I’ve been thinking about WordPress differently lately. Taking a step back from the accumulated complexity and simply imagining how it could exist.
WordPress has grown, both in capability and in the many different shapes it can take. That evolution enables millions of people to publish online, but it also adds layers of complexity that have built up over time.
So in this new series of posts, I’m exploring what else could be. What if we could rebuild this ship with the knowledge of everything we’ve learned along the way? What would we do differently today?
I want to question every assumption. Challenge requirements carved in stone. Strip concepts down to their core and ask: what’s actually needed? Because that’s often where innovation happens—when you stop accepting constraints as given.
None of these ideas are proposals.
This is exploration for exploration’s sake. Some might work, most won’t. That’s the point.
What if Matt had joined Google instead of starting WordPress?
The internet might have felt a little less like ours.
In his recent post, Matt wrote, “How the internet might have turned out differently if I had taken that job, as my mom wanted me to (because they offered free food).”
Funny line. But also wild to think about. It’s incredible how much of the web traces back to one person deciding to build something open.
I think about that sometimes. The tiny choices that ripple out for decades. Who’s choosing open now? And what will that mean twenty years from here?
I built agents.foo to share the Claude Code subagents I actually reach for every day.
After weeks of building subagents in Claude Code, I’ve settled into a handful that earned their place in my workflow. Not the flashy demos you see everywhere, but mostly boring—but super useful—agents that make me a little faster.
My favorite is the Linear product manager. Perfect for those moments when you discover a bug but don’t want to lose momentum on what you’re already working on. It creates thoughtful, well-structured issues on the spot.
The agent knows Linear’s data model inside and out. It explores my codebase to find relevant components, references exact file paths, and includes technical context that actually helps. When I prompt “make an issue that the login button is broken on mobile” right in Claude Code, it creates a structured issue with proper steps to reproduce, expected behavior, and direct links to the affected components.
It’s like having a technical triage person sitting next to you. One that writes great issues.

What’s surprising to me is that we’ve already distilled agentic programming down to simple markdown files. No really complex frameworks or orchestration layers. Just clear instructions about what you want the agent to know and how you want it to help.
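To make that concrete: Claude Code subagents are plain markdown files with a short frontmatter block. Here’s a minimal sketch of what a Linear-style agent file could look like—the name, description, and instructions below are illustrative, not my actual agent.

```markdown
---
name: linear-pm
description: Turns quick bug reports into well-structured Linear issues
---

You are a product manager who writes Linear issues.

When given a bug report:
- Explore the codebase to find the affected components and exact file paths.
- Write clear steps to reproduce, expected behavior, and actual behavior.
- Keep titles short and specific, and link to the relevant files.
```

That’s the whole thing. No framework, no orchestration—just instructions the agent reads before it acts.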
This feels like how AI should work. Instead of wrestling with general-purpose models every time, we create specialized helpers that understand our specific tools, projects, and patterns.
The agents on agents.foo represent what coding with AI actually looks like. Not revolutionary breakthroughs, but reliable helpers that handle the repetitive parts of building software.
I’m sharing them because we’re each still figuring out how AI fits into real workflows. These work for me. Maybe they’ll spark ideas for your own daily drivers. Have you created any subagents lately that you’ve found interesting?
You know what I mean by vibe coding? That approach where you throw prompts at an AI, get code back, and ship it without caring about what’s actually under the hood. It’s the “move fast and ship” mentality taken to an extreme.
That’s not me. I build with Claude Code every day, but I care about what’s being built.
The difference is partnership versus just getting code generated. AI is great for removing friction in development, but only when you guide it properly. I don’t need to understand every technical implementation detail, but I absolutely need to understand how to prompt these systems well and how to tell good output from garbage.
This is coding at a different level of abstraction. Way less debugging, more strategic thinking.
Vibe coding relies on blind trust. You ask for a feature, get some code, and assume it works because it runs. The approach I follow (the same one the best engineers have always used) involves asking the right questions, steering toward better solutions, and reviewing code.
Here’s what I mean: I have Claude Code and Copilot work as complementary tools. I scope ideas and technical details with Claude, then either log them as issues for later or tackle them right away. And when it’s time for a pull request, I have the other AI pair programmer handle the review.
They function like tireless pair programmers and I’m the technical lead kicking off the effort and making the calls.

The key insight is guidance versus automation. Vibe coders stop at “it works,” but I’m concerned with whether the solution is correct and sustainable. Will this scale? Is the architecture sound? Does it handle edge cases?
The details matter because understanding structure and correctness keeps you from shipping nonsense. Maybe that changes as AI-augmented engineering gets better, but today, having judgment about the code is what separates functional software from potential disasters.
Even these new AI “vibe-coding” platforms like Lovable, GitHub Spark, and v0 offer a choice: you can vibe with the no-code interface, or you can peek under the hood.
We’re curious beings, so be curious. Watch how your prompts and choices translate into code, learning along the way.
That’s the approach I advocate. Use these tools, absolutely, but stay engaged and curious about how they work. Ask questions. Understand patterns. Let each project teach you something new and you’ve already won.
The best developers are better at prompting, better product leaders, better strategic thinkers, and better at tracking what changed in their codebase. The good news: those skills transfer whether you’re working with AI or humans.
So yeah, I don’t vibe code.
Nobody tells you that AI-augmented coding makes implementation skills abundant and strategic thinking priceless.
Last weekend, I experimented with a simple chatbot for my blog. I got it working in thirty minutes. Even just a few months ago, this was a nights and weekends endeavor, at best.
Well, mostly working. RAG pipeline, embeddings, and the whole thing connected and responding to questions about my posts—but with the occasional hallucination I hadn’t fixed yet.
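The retrieval core of a pipeline like that can be sketched in a few lines. This is a toy version under stated assumptions—a bag-of-words stand-in for a real embedding model, and hypothetical post titles—not the actual stack I used.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for a real embedding model: bag-of-words counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, posts: list[str], k: int = 2) -> list[str]:
    # Rank posts by similarity to the question; keep the top k as context
    # to hand the model alongside the user's question.
    q = embed(question)
    ranked = sorted(posts, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

# Hypothetical post titles, for illustration only.
posts = [
    "How I built a block theme for WordPress",
    "Cloning my voice with PlayAI for support",
    "Notes from PressConf 2025 in the desert",
]
context = retrieve("how do block themes work in WordPress", posts, k=1)
```

Swap the toy `embed` for real embeddings and a vector database, feed the retrieved posts into the model’s prompt, and you have the skeleton of the chatbot.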
Something felt different. Where was the part where I spent three hours debugging why the vector database wouldn’t talk to the frontend? Where was the documentation rabbit hole? The Stack Overflow shame spiral?
None of that happened.
It got me thinking about where we are today as engineers compared to a few months ago. A shift at the core of our practice is happening in real time.
I used to think good developers were fast typists who memorized syntax. Now they’re people who know what to build and recognize when it’s built right. The muscle memory that took years to develop? Less relevant every month.
I have a friend who writes everything from scratch and is proud to understand every line. Meanwhile, others ship releases in the time it takes him to set up authentication. In a way, we’re both right. My friend understands his codebase perfectly. I understand my users’ problems perfectly. Different skills for a different game.
I find I’m doing much more creative work now as well. When you’re not burned out from wrestling with dependencies and import statements, you have brain space for the interesting questions. What should this actually do? How should it feel to use? What problem are we solving? Does it matter?
Even yesterday, a newer developer asked if AI would make them obsolete. Wrong question. The right question is: what kind of developer do you want to be? The kind who can implement anything, or the kind who knows what’s worth implementing?
Both matter. But only one is getting scarcer.
I used to kill ideas before they had a real chance.
Every spark of curiosity met the same mental gate: Is this worth my time? What’s the opportunity cost? Can I even do this? The auditor in my head would run the numbers and most ideas would die right there.
Then something shifted. Not because AI models got smarter, but because the marginal cost of curiosity dropped close to zero.
Building a prototype used to mean weeks of coding. Now it’s an afternoon conversation with Claude or v0. I explore weirder ideas. And I take ridiculous swings because ideas cost almost nothing.
The math changed. When exploration is cheap, you stop rationing curiosity. When you can afford to explore bad ideas, you stumble onto good ones you’d never have planned. Instead of hoarding best guesses, splurge on exploration.
For myself, this freedom to explore without commitment is quite liberating. What about you?
Most design work today involves an expensive translation layer. A designer creates a mockup, then a developer interprets it. Details get lost and feedback cycles stretch weeks.
The real problem isn’t lost details—it’s false validation. Static mockups look convincing but hide critical flaws. They don’t show how animations feel, how forms behave with real data, or how layouts break on different screens. Teams make decisions based on approximations, then discover real problems after development is finished.
This often leads to cycles of expensive revisions. What looked perfect in Figma may require significant rework when built. Performance constraints force design compromises. Edge cases expose interaction problems invisible in static mockups.
AI tools are changing this equation, now more than ever.
Last week, I went to PressConf 2025 with around 140 other WordPress folks—a far cry from the thousands you’d see at typical conferences, but that’s kind of what made it special.
PressConf brought together leaders in our space, in an environment designed for authenticity.
A leaked internal memo from Shopify’s CEO Tobi is making waves. His message is blunt: AI literacy isn’t optional—it’s a fundamental expectation for every employee.
Change happens fast—yesterday’s futuristic concepts are today’s baseline. AI proficiency is quickly becoming the new coding literacy—if it hasn’t already. Understanding how to effectively prompt, contextualize, prototype, and evaluate AI outputs isn’t just beneficial; it’s required.
It’s intriguing that teams are encouraged to see autonomous AI agents as teammates, rather than defaulting to hiring more people. Scale a team’s impact without proportionally increasing their size—seems like a win to me.
Obviously there’s a natural tension here—between AI’s efficiency and the potential loss of human insight—that’s worth a deeper think. How would your team leverage autonomous AI teammates alongside human talent? At Automattic, we’re experimenting on this front; a subtle shift, for now, but the concepts are there—and worth doubling down on.
More than ever, being good at your job means being good at learning. The most important skill in the age of AI isn’t coding, prompting, or any technical proficiency; it’s adaptability.
AI won’t replace our creativity, empathy, or taste—but it can amplify them. The real question isn’t whether we’ll adopt AI, but whether we’ll adapt quickly enough.
Will you resist this change, or embrace it?
The best product leaders are comfortable being wrong. They’ve figured out that waiting for certainty costs more than making mistakes. Every day you spend seeking perfect information is a day you fall behind.
Strong opinions accelerate learning. Each one tests a hypothesis about users or the market. Some fail, and each wrong assumption teaches you something about your product or the people using it.
Most breakthrough products started as ideas that seemed misguided. Early versions were rough, value propositions unproven. But the teams had focus and conviction when data was scarce and feedback was mixed. Waiting for all the answers is just fear dressed up as caution.
Exceptional product leaders are more decisive than accurate. They push into territory where certainty doesn’t exist. Each wrong turn narrows the path, which is exactly what you want.
Commit fully to your best hypothesis. Tentative implementations produce ambiguous results. When you build something with conviction, reality responds with clarity. That’s how you develop product intuition—confident action, honest assessment, repeat.
Being right feels nice. Being confidently wrong feels like progress.
So risk the stumble. It’s the only way forward.
After a 25-hour journey to Manila, Philippines for WordCamp Asia 2025, I quickly found myself immersed among passionate WordPress enthusiasts. As Asia’s flagship WordCamp, the conference drew over 1,700 attendees from 70+ countries.
Very cool.
Matías set the tone with a keynote reflecting WordPress’s next chapter—balancing innovation with its founding principles and exploring AI’s role in its evolution. And I particularly enjoyed Jamie’s Speed Build Challenge bringing these ideas to life, as Jessica assembled a creative website homepage with blocks, while Nick showed us how to build a bespoke hero block with Cursor. There were quite a few good talks—watch them.
I cloned my voice with PlayAI to create an AI voice support agent for my WordPress theme, Kanso. What started as a simple way to add audio to my blog posts became something much more interesting and engaging.
Making it work was surprisingly straightforward—I needed just 23 seconds of my voice to create a convincing clone. Wild.
With the voice clone ready, I compiled questions from the launch post, support emails, and Slack DMs into a structured JSON file—each with its corresponding answer. Along with these Q&A pairs, I added guardrails to reduce potential mishaps, like instructing the agent not to reference the WordPress Customizer since Kanso is a block theme.
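The structured file can stay dead simple. Here’s a sketch of the shape I mean—the field names are illustrative, not a format PlayAI requires.

```json
{
  "qa_pairs": [
    {
      "question": "Does Kanso use the Customizer?",
      "answer": "No. Kanso is a block theme, so all styling happens in the Site Editor."
    }
  ],
  "guardrails": [
    "Never reference the WordPress Customizer; Kanso is a block theme.",
    "If a question is off-topic, point the user to the support email."
  ]
}
```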
I then built a WordPress block using Cursor and the PlayAI Web SDK, giving me more control over how people engage with the agent than the standard embed could offer.
The agent works well enough, but it tends to drift off topic sometimes. Eventually I’d like to explore adding actions, like providing visual aids alongside conversations.
Fine-tuning the agent was surreal—hearing an uncanny version of my voice explaining my ideas back to me, with admittedly more patience than I have. Strange and intriguing at the same time.
Give it a try—ask it something about block themes, especially if you’re using Kanso. Maybe this is what support will look like: your voice, your knowledge, but available any time.
I’m always looking for ways to make WordPress a better experience. It’s powerful, flexible, and capable of much—but the real test of any improvement is how well it helps real people accomplish real goals.
When my friend Chad asked for help with his website, it gave me a chance to see WordPress through his eyes. He runs More Music Foundation, a local nonprofit that makes music education available to underprivileged kids. His website isn’t just a digital presence—it’s how he connects with donors, shares success stories, and builds community support.
Like many, he had run the full gamut of website solutions. He started with a proprietary website builder that got the job done but left him wanting more. Then he hired someone who took the “everything-you-could-ever-want” page builder route—powerful, but bloated and difficult to manage.
It was in rough shape.
So we moved his site to WordPress.com, gave it a fresh design, and embraced the block editor to give him the tools he needed to best tell his organization’s story.