I’ve found myself choosing audio more often, like in last week’s experiment, interviews.now. It’s nice, especially when I’m walking, driving, or just stepping away from a screen.
I wanted to explore adding audio to my blog in a way that stays simple and doesn’t add any friction to how I publish—at all.
I built an AI voice agent that interviews people about WordPress. Three minutes, their honest take, and structured insights delivered on the other side.
The way I see it, conversations are variables—context, intent, memory, tone. Set them right and agents handle structured research while you focus on judgment calls.
Humans are irreplaceable for empathy, judgment, shared experience. But for conversations that are structured and repeatable? Agents are in.
I built agents.foo to share the Claude Code subagents I actually reach for every day.
After weeks of building subagents in Claude Code, I’ve settled into a handful that earned their place in my workflow. Not the flashy demos you see everywhere, but boring, super-useful agents that make me a little faster.
My favorite is the Linear product manager. Perfect for those moments when you discover a bug but don’t want to lose momentum on what you’re already working on. It creates well-structured issues on the spot.
The agent knows Linear’s data model inside and out. It explores my codebase to find relevant components, references exact file paths, and includes technical context that actually helps. When I prompt “make an issue that the login button is broken on mobile,” right in Claude Code, it creates a structured issue with proper steps to reproduce, expected behavior, and direct links to the affected components.
It’s like having a technical triage person sitting next to you. One that writes great issues.
What’s surprising to me is that we’ve already distilled agentic programming down to simple markdown files. No really complex frameworks or orchestration layers. Just clear instructions about what you want the agent to know and how you want it to help.
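To make that concrete, a Claude Code subagent is just a markdown file (by convention in `.claude/agents/`) with a bit of frontmatter up top. Here’s a hedged sketch of what a Linear-style agent might look like; the name and instructions are illustrative, not my actual agent:

```markdown
---
name: linear-pm
description: Turns quick bug reports into well-structured Linear issues
---

You are a product manager who writes Linear issues.

When given a bug report:
1. Explore the codebase to find the components involved.
2. Reference exact file paths in the issue body.
3. Include steps to reproduce, expected behavior, and any
   technical context that would help whoever picks it up.
```

That’s the whole thing: plain instructions, no framework.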
This feels like how AI should work. Instead of wrestling with general-purpose models every time, we create specialized helpers that understand our specific tools, projects, and patterns.
The agents on agents.foo represent what coding with AI actually looks like. Not revolutionary breakthroughs, but reliable helpers that handle the repetitive parts of building software.
I’m sharing them because we’re each still figuring out how AI fits into real workflows. These work for me. Maybe they’ll spark ideas for your own daily drivers. Have you created any subagents lately that you’ve found interesting?
You know what I mean by vibe coding? That approach where you throw prompts at an AI, get code back, and ship it without caring about what’s actually under the hood. It’s the “move fast and ship” mentality taken to an extreme.
That’s not me. I build with Claude Code every day, but I care about what’s being built.
The difference is partnership versus just getting code generated. AI is great for removing friction in development, but only when you guide it properly. I don’t need to understand every technical implementation detail, but I absolutely need to understand how to prompt these systems well and how to tell good output from garbage.
This is coding at a different level of abstraction. Way less debugging, more strategic thinking.
Vibe coding relies on blind trust. You ask for a feature, get some code, and assume it works because it runs. The approach I follow (the same one the best engineers have always used) involves asking the right questions, steering toward better solutions, and reviewing code.
Here’s what I mean: I have Claude Code and Copilot work as complementary tools. I scope ideas and technical details with Claude, then either log them as issues for later or tackle them right away. And when it’s time for a pull request, I have the other AI pair programmer handle the review.
They function like tireless pair programmers and I’m the technical lead kicking off the effort and making the calls.
The key insight is guidance versus automation. Vibe coders stop at “it works,” but I’m concerned with whether the solution is correct and sustainable. Will this scale? Is the architecture sound? Does it handle edge cases?
The details matter because understanding structure and correctness keeps you from shipping nonsense. Maybe that changes as AI-augmented engineering gets better, but today, having judgment about the code is what separates functional software from potential disasters.
Even these new AI “vibe-coding” platforms like Lovable, GitHub Spark, and v0 offer a choice: you can vibe with the no-code interface, or you can peek under the hood.
We’re curious beings, so be curious. Watch how your prompts and choices translate into code, learning along the way.
That’s the approach I advocate. Use these tools, absolutely, but stay engaged and curious about how they work. Ask questions. Understand patterns. Let each project teach you something new and you’ve already won.
The best developers are better at prompting, better product leaders, better strategic thinkers, and better at tracking what changed in their codebase. The good news: those skills transfer whether you’re working with AI or humans.
Nobody tells you that AI-augmented coding makes implementation skills abundant and strategic thinking priceless.
Last weekend, I experimented with a simple chatbot for my blog. I got it working in thirty minutes. Even just a few months ago, this was a nights and weekends endeavor, at best.
Well, mostly working. RAG pipeline, embeddings, and the whole thing connected and responding to questions about my posts—but with the occasional hallucination I hadn’t fixed yet.
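The core of that pipeline is simpler than it sounds. Here’s a minimal sketch in plain Python, with a toy bag-of-words `embed()` standing in for a real embedding model (in the actual build, that call goes to an embedding API, and the “database” is a vector store rather than a list):

```python
import math
from collections import Counter

def embed(text):
    # Toy stand-in for a real embedding model: a bag-of-words vector.
    # A real RAG pipeline would call an embedding API here.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, posts, k=2):
    # Rank posts by similarity to the query and return the top k,
    # which then get passed to the model as context.
    q = embed(query)
    ranked = sorted(posts, key=lambda p: cosine(q, embed(p)), reverse=True)
    return ranked[:k]

posts = [
    "How I built a voice agent for my WordPress theme",
    "Scaffolding a dynamic Gutenberg block with create-block",
    "Why taste matters more than technical skill",
]
print(retrieve("building a Gutenberg block", posts, k=1)[0])
```

Retrieval plus a prompt template is most of a basic blog chatbot; the hallucinations live in everything this sketch leaves out.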
Something felt different. Where was the part where I spent three hours debugging why the vector database wouldn’t talk to the frontend? Where was the documentation rabbit hole? The Stack Overflow shame spiral?
None of that happened.
It got me thinking about where we are today as engineers compared to a few months ago. A shift at the core of our practice is happening in real time.
I used to think good developers were fast typists who memorized syntax. Now they’re people who know what to build and recognize when it’s built right. The muscle memory that took years to develop? Less relevant every month.
I have a friend who writes everything from scratch and is proud to understand every line. Meanwhile, others ship releases in the time it takes him to set up authentication. In a way, we’re both right. My friend understands his codebase perfectly. I understand my users’ problems perfectly. Different skills for a different game.
I find I’m doing much more creative work now as well. When you’re not burned out from wrestling with dependencies and import statements, you have brain space for the interesting questions. What should this actually do? How should it feel to use? What problem are we solving? Does it matter?
Even yesterday, a newer developer asked if AI would make them obsolete. Wrong question. The right question is: what kind of developer do you want to be? The kind who can implement anything, or the kind who knows what’s worth implementing?
I used to kill ideas before they had a real chance.
Every spark of curiosity met the same mental gate: Is this worth my time? What’s the opportunity cost? Can I even do this? The auditor in my head would run the numbers and most ideas would die right there.
Then something shifted. Not because AI models got smarter, but because the marginal cost of curiosity dropped close to zero.
Building a prototype used to mean weeks of coding. Now it’s an afternoon conversation with Claude or v0. I explore weirder ideas. And I take ridiculous swings because ideas cost almost nothing.
The math changed. When exploration is cheap, you stop rationing curiosity. When you can afford to explore bad ideas, you stumble onto good ones you’d never have planned. Instead of hoarding best guesses, splurge on exploration.
For myself, this freedom to explore without commitment is quite liberating. What about you?
Most design work today involves an expensive translation layer. A designer creates a mockup, then a developer interprets it. Details get lost and feedback cycles stretch weeks.
The real problem isn’t lost details—it’s false validation. Static mockups look convincing but hide critical flaws. They don’t show how animations feel, how forms behave with real data, or how layouts break on different screens. Teams make decisions based on approximations, then discover real problems after development is finished.
This often leads to cycles of expensive revisions. What looked perfect in Figma may require significant rework when built. Performance constraints force design compromises. Edge cases expose interaction problems invisible in static mockups.
AI tools are changing this equation, now more than ever.
Last week, I went to PressConf 2025 with around 140 other WordPress folks—a far cry from the thousands you’d see at typical conferences, but that’s kind of what made it special.
PressConf brought together leaders in our space, in an environment designed for authenticity.
A leaked internal memo from Shopify’s CEO Tobi is making waves. His message is blunt: AI literacy isn’t optional—it’s a fundamental expectation for every employee.
Change happens fast—yesterday’s futuristic concepts are today’s baseline. AI proficiency is quickly becoming the new coding literacy—if it hasn’t already. Understanding how to effectively prompt, contextualize, prototype, and evaluate AI outputs isn’t just beneficial; it’s required.
It’s intriguing that teams are encouraged to see autonomous AI agents as teammates, rather than defaulting to hiring more people. Scale a team’s impact without proportionally increasing their size—seems like a win to me.
Obviously there’s a natural tension here, between AI’s efficiency and the potential loss of human insight, that’s worth a deeper think. How would your team leverage autonomous AI teammates alongside human talent? At Automattic, we’re experimenting on this front; a subtle shift, for now, but the concepts are there and worth doubling down on.
More than ever, being good at your job means being good at learning. The most important skill in the age of AI isn’t coding, prompting, or any technical proficiency; it’s adaptability.
AI won’t replace our creativity, empathy, or taste—but it can amplify them. The real question isn’t whether we’ll adopt AI, but whether we’ll adapt quickly enough.
I cloned my voice with PlayAI to create an AI voice support agent for my WordPress theme, Kanso. What started as a simple way to add audio to my blog posts became something much more interesting and engaging.
Making it work was surprisingly straightforward—I needed just 23 seconds of my voice to create a convincing clone. Wild.
With the voice clone ready, I compiled questions from the launch post, support emails, and Slack DMs into a structured JSON file—each with its corresponding answer. Along with these Q&A pairs, I added guardrails to reduce potential mishaps, like instructing the agent not to reference the WordPress Customizer since Kanso is a block theme.
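The file itself is nothing fancy. Here’s a hedged sketch of the shape I’m describing; the field names are illustrative, not a schema PlayAI requires:

```json
{
  "qa_pairs": [
    {
      "question": "How do I change the color palette in Kanso?",
      "answer": "Open the Site Editor, go to Styles, and pick a new palette."
    },
    {
      "question": "Does Kanso work with the classic editor?",
      "answer": "Kanso is a block theme, so it is designed for the block editor and Site Editor."
    }
  ],
  "guardrails": [
    "Never reference the WordPress Customizer; Kanso is a block theme.",
    "If you are unsure of an answer, suggest contacting support instead of guessing."
  ]
}
```

Keeping the answers and the guardrails in one structured file makes it easy to grow the knowledge base as new support questions come in.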
I then built a WordPress block using Cursor and the PlayAI Web SDK, giving me more control over how people engage with the agent than the standard embed could offer.
The agent works well enough, but it sometimes drifts off topic. Eventually I’d like to explore adding actions, like providing visual aids alongside conversations.
Fine-tuning the agent was surreal: hearing an uncanny version of my voice explain my ideas back to me, with admittedly more patience than I have. Strange and intriguing at the same time.
Give it a try—ask it something about block themes, especially if you’re using Kanso. Maybe this is what support will look like: your voice, your knowledge, but available any time.
An interesting read from Brad Frost on how AI is influencing the evolution of design systems, highlighting the balance between automation and human creativity in crafting cohesive and scalable digital experiences.
This stands out to me most:
We still need critical thinking. Ethical thinking. Systematic thinking. We still need to foster relationships. To build bridges. To coordinate. To orchestrate. These are human things. These are the skills that designers and developers need to cultivate and grow in order to continue to be viable in our AI age.
This interview with David Lee, the CCO of Squarespace, poses an interesting question: Will AI make human creativity a luxury?
Creativity is not just about technical skill or craft; it’s about identifying what resonates with people and captures the human experience. That is, taste.
In today’s world, where AI is increasingly capable of creating art, music, and literature, taste matters more than ever. I agree with David here—taste is the currency of this inevitable future, which perhaps is already here.
Good taste helps creatives produce resonant work that reflects personal style and originality. It involves a level of vulnerability, exposure to diverse influences, trusting your instincts, and knowing when to follow or diverge from trends.
I made this video to showcase building a dynamic Gutenberg block with the help of ChatGPT.
I run through scaffolding a block with the @wordpress/create-block package, structuring a dynamic block, building it, and using ChatGPT to write the accompanying PHP function.
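For a dynamic block, the key move is pointing the block’s metadata at a PHP render file so output is generated server-side on each request. A minimal `block.json` sketch, with illustrative names (the `file:./render.php` shorthand is the modern WordPress convention for wiring up server-side rendering):

```json
{
  "apiVersion": 3,
  "name": "my-plugin/latest-post",
  "title": "Latest Post",
  "category": "widgets",
  "editorScript": "file:./index.js",
  "render": "file:./render.php"
}
```

The referenced `render.php` would contain the PHP function the video covers, returning the block’s markup from live data rather than saved content.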