AI

Exploring the evolving world of artificial intelligence, and the ways it’s shaping our future and daily lives.

  • A Red Pen

    Craft is caring enough to get the details right.

    Matthew Butterick argues this for typography in Practical Typography. Reader attention is finite. Every typographic choice you make either earns it or loses it.

    Readers judge your typography before they read a word. That’s how they decide whether it’s worth their time.

    Butterick made the case for typography. Orwell made it for prose in Politics and the English Language: no stale metaphors, short words over long ones, cut anything you can cut, active voice, no jargon, and break any rule rather than write something ugly.

They’re different disciplines, but with the same instinct: sweat the details and respect the reader. The rules themselves aren’t hard to follow; applying them consistently, to everything you publish, is.

    So I built an agent skill that runs in Claude Code and Cowork.

    Agent skills are folders of instructions and resources that agents can discover to do things well (like editing).
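To make that concrete, here’s a minimal sketch of what such a skill folder’s SKILL.md might contain. The frontmatter fields follow the published Agent Skills format; the name and rule text are illustrative, not the actual contents of my skill.

```markdown
---
name: red-pen
description: Review a draft against my typography and prose rules and flag what to tighten.
---

# Red Pen

When given a draft, check it against the craft rules in rules/prose.md.
Report each issue with the line it appears on and a suggested fix.
Prefer concrete rewrites over general advice.
```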

Screenshot: the Claude.ai chat interface greeting “Good evening, Rich” with the Anthropic logo. The message composer contains “/red-pen I have a post I’m drafting. Mind checking it?” with the Opus 4.6 model selected.

    It’s a red pen.

    It checks my drafts against the craft rules I set and shows me where I need to tighten up.

    I care about this because craft compounds. Sloppy writing erodes trust one post at a time. The red pen has made my writing better. Maybe it’ll help yours too.

    Check out my Red Pen skill →

  • Studio CLI: Local WordPress in Seconds

    The WordPress Studio team just shipped the Studio CLI as a standalone npm package.

    If you’ve used the Studio desktop app, you know the pitch: fast local WordPress development powered by WordPress Playground. Now all of that works from your terminal. No desktop app, no Docker, no Apache configs.

    Install it with npm install -g wp-studio and run studio site create to get a local WordPress site up in actual seconds.

    You, or more likely your agents, can even configure sites with custom domains, HTTPS, and specific WordPress or PHP versions:

    studio site set --domain mysite.wp.local --https --wp 6.8

    You can also authenticate with WordPress.com to create a temporary preview site and (soon) push/pull sync:

    studio auth login
    studio preview create

    The Studio app is nice. I run it every day I’m building with WordPress. But putting it in the terminal means our agents can use it too. Just tell your agent about the package, or give it my post here, and you’ll be up and running in no time.

    It’s free to install and use. Early access is live now on Mac, Windows, and Linux (a first for Studio).

    Get WordPress Studio CLI →

  • Meet the WordPress AI Providers

The new WP AI Client in WordPress 7.0 is provider-agnostic: you choose which AI providers you want to use, while plugins simply add functionality via abilities.

  • How to Use the WordPress AI Client

The upcoming release of WordPress 7.0 ships with the new WP AI Client. You’ll be able to call any AI model (Anthropic, Google, OpenAI, others) through a single API, in PHP directly and from JavaScript using the Abilities API.

  • API is the UI

    The fastest-growing users of our products are agents. And agents don’t need interfaces.

    Agents do not need buttons, visual hierarchy, hover states, or spinners. They need APIs, structured data, and predictable endpoints (and to know about them).

    What matters underneath is the primitive: block schemas, data models, structured content. The formats they produce (markdown, HTML, JSON) are addressable, diffable, and writable by both humans and machines. Most interface chrome is just a convenience layer on top.

    You can already see this shift in code editing.

    A year ago, writing software meant living inside a code editor, manually creating and editing files. Today, tools like Codex, Claude Code, and Telex have moved much of that primary workflow into a chat interface.

    The code editor still exists, but its role has changed: you’re often reviewing, fine-tuning, and steering while the editor becomes secondary.

    The same shift is happening in website building: an agent does not need a block inserter or drag-and-drop chrome. It needs a clear schema for what a page can be and a stable way to write to it (cue the block model for WordPress).
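The idea is easy to make concrete. Here is a hedged sketch of the kind of structured page an agent could write: the block names mirror WordPress core blocks and the shape echoes the parsed block model (blockName, attrs, innerBlocks), but this is illustrative JSON, not the exact serialized format.

```python
import json

# Illustrative page representation: a list of typed blocks an agent can
# generate and an editor can render for human review. The agent edits
# data, not pixels.
page = [
    {"blockName": "core/heading", "attrs": {"level": 1},
     "innerBlocks": [], "content": "Spring Sale"},
    {"blockName": "core/paragraph", "attrs": {},
     "innerBlocks": [], "content": "Everything 20% off this week."},
]

# Addressable, diffable, writable by both humans and machines.
serialized = json.dumps(page, indent=2)
print(serialized)
```

The point isn’t the exact schema; it’s that a stable, documented structure like this is the interface the agent actually uses.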

    The visual editor becomes the place where a human reviews what the agent built and fine-tunes from there. Which means we’re now designing for two audiences.

    The primary audience is increasingly the agent: give it clean APIs, predictable structures, and fast execution paths.

    The secondary audience is the human: they need controls to edit, review, refine, and redirect, but most of all they need confidence. Did the agent do something that supports their goal? Did it meet their standards? How do we communicate what the agent did and why? How do we help humans and agents stay aligned? And when they’re not aligned, how do we make it easy to redirect?

We’ve always treated the human interface as the product: the shape of buttons, the depth of shadows, the flow from connecting accounts to purchasing to completing the job. The API was often an afterthought, built for integrations or technical requirements.

    Human interfaces are not going away, but they are becoming less central. I’ve spent my career building interfaces, but now the most important work I do is what happens beneath them.

    API is the new UI.

  • Giving my blog a voice

I’ve found myself choosing audio more often, like last week’s experiment, interviews.now. It’s nice, especially when I’m walking, driving, or just stepping away from a screen.

    I wanted to explore adding audio to my blog in a way that stays simple and doesn’t add any friction to how I publish—at all.

    So yea, you can listen to my posts, read in my voice.

  • I built an agent that interviews WordPress users

I built an AI voice agent that interviews people about WordPress. Three minutes, their honest take, with structured insights delivered on the other side.

    The way I see it, conversations are variables—context, intent, memory, tone. Set them right and agents handle structured research while you focus on judgment calls.
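One way to picture “conversations as variables” is to treat each interview as a small configuration the agent executes. A hypothetical Python sketch (the field names are mine, not the actual agent’s):

```python
from dataclasses import dataclass, field

@dataclass
class Interview:
    """The 'variables' of a structured conversation."""
    context: str                 # what the agent knows going in
    intent: str                  # what the conversation should produce
    tone: str = "curious"        # how the agent should sound
    memory: list[str] = field(default_factory=list)  # exchanges so far

    def ask(self, question: str, answer: str) -> None:
        # Each exchange feeds back into memory, so later questions
        # can build on earlier answers.
        self.memory.append(f"Q: {question} A: {answer}")

session = Interview(
    context="WordPress user research",
    intent="Surface honest takes in three minutes",
)
session.ask("What frustrates you most about WordPress?", "Plugin updates.")
```

Set the variables right and the structured part runs itself; the judgment calls stay with you.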

    Humans are irreplaceable for empathy, judgment, shared experience. But for conversations that are structured and repeatable? Agents are in.

  • The Claude Code Subagents I Use Daily

    I built agents.foo to share the Claude Code subagents I actually reach for every day.

After weeks of building subagents in Claude Code, I’ve settled into a handful that earned their place in my workflow. Not the flashy demos you see everywhere, but boring, super useful agents that make me a little faster.

My favorite is the Linear product manager. Perfect for those moments when you discover a bug but don’t want to lose momentum on what you’re already working on. It creates thoughtfully executed issues right away.

The agent knows Linear’s data model inside and out. It explores my codebase to find relevant components, references exact file paths, and includes technical context that actually helps. When I prompt “make an issue that the login button is broken on mobile” right in Claude Code, it creates a structured issue with proper steps to reproduce, expected behavior, and links directly to the affected components.

    It’s like having a technical triage person sitting next to you. One that writes great issues.

    What’s surprising to me is that we’ve already distilled agentic programming down to simple markdown files. No really complex frameworks or orchestration layers. Just clear instructions about what you want the agent to know and how you want it to help.

    This feels like how AI should work. Instead of wrestling with general-purpose models every time, we create specialized helpers that understand our specific tools, projects, and patterns.

    The agents on agents.foo represent what coding with AI actually looks like. Not revolutionary breakthroughs, but reliable helpers that handle the repetitive parts of building software.

    I’m sharing them because we’re each still figuring out how AI fits into real workflows. These work for me. Maybe they’ll spark ideas for your own daily drivers. Have you created any subagents lately that you’ve found interesting?

  • I Don’t Vibe Code

    You know what I mean by vibe coding? That approach where you throw prompts at an AI, get code back, and ship it without caring about what’s actually under the hood. It’s the “move fast and ship” mentality taken to an extreme.

    That’s not me. I build with Claude Code every day, but I care about what’s being built.

    The difference is partnership versus just getting code generated. AI is great for removing friction in development, but only when you guide it properly. I don’t need to understand every technical implementation detail, but I absolutely need to understand how to prompt these systems well and how to tell good output from garbage.

    This is coding at a different level of abstraction. Way less debugging, more strategic thinking.

    Vibe coding relies on blind trust. You ask for a feature, get some code, and assume it works because it runs. The approach I follow (the same one the best engineers have always used) involves asking the right questions, steering toward better solutions, and reviewing code.

    Here’s what I mean: I have Claude Code and Copilot work as complementary tools. I scope ideas and technical details with Claude, then either log them as issues for later or tackle them right away. And when it’s time for a pull request, I have the other AI pair programmer handle the review.

    They function like tireless pair programmers and I’m the technical lead kicking off the effort and making the calls.

    The key insight is guidance versus automation. Vibe coders stop at “it works,” but I’m concerned with whether the solution is correct and sustainable. Will this scale? Is the architecture sound? Does it handle edge cases?

    The details matter because understanding structure and correctness keeps you from shipping nonsense. Maybe that changes as AI-augmented engineering gets better, but today, having judgment about the code is what separates functional software from potential disasters.

    Even these new AI “vibe-coding” platforms like Lovable, GitHub Spark, and v0 offer a choice: you can vibe with the no-code interface, or you can peek under the hood.

    We’re curious beings, so be curious. Watch how your prompts and choices translate into code, learning along the way.

    That’s the approach I advocate. Use these tools, absolutely, but stay engaged and curious about how they work. Ask questions. Understand patterns. Let each project teach you something new and you’ve already won.

    The best developers are better at prompting, better product leaders, better strategic thinkers, and better at tracking what changed in their codebase. The good news: those skills transfer whether you’re working with AI or humans.

    So yea, I don’t vibe code.

  • What Nobody Tells You About AI Coding

    Nobody tells you that AI-augmented coding makes implementation skills abundant and strategic thinking priceless.

    Last weekend, I experimented with a simple chatbot for my blog. I got it working in thirty minutes. Even just a few months ago, this was a nights and weekends endeavor, at best.

    Well, mostly working. RAG pipeline, embeddings, and the whole thing connected and responding to questions about my posts—but with the occasional hallucination I hadn’t fixed yet.
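For readers who haven’t built one: the heart of that pipeline is embarrassingly small. A toy sketch of the retrieval step, with made-up three-dimensional vectors standing in for a real embedding model’s output:

```python
import math

# Toy retrieval step of a RAG pipeline: embed the question, find the
# closest post by cosine similarity, hand that text to the model as
# context. The vectors are fake; a real pipeline calls an embedding API.
posts = {
    "red-pen":  [0.9, 0.1, 0.0],
    "studio":   [0.1, 0.8, 0.2],
    "vibecode": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec):
    # Return the slug of the post most similar to the query.
    return max(posts, key=lambda slug: cosine(query_vec, posts[slug]))

print(retrieve([0.85, 0.15, 0.05]))  # "red-pen"
```

The hallucinations live in the generation step, not here; retrieval like this only decides what context the model sees.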

    Something felt different. Where was the part where I spent three hours debugging why the vector database wouldn’t talk to the frontend? Where was the documentation rabbit hole? The Stack Overflow shame spiral?

    None of that happened.

    It got me thinking about where we are today as engineers compared to a few months ago. A shift at the core of our practice is happening in real time.

    I used to think good developers were fast typists who memorized syntax. Now they’re people who know what to build and recognize when it’s built right. The muscle memory that took years to develop? Less relevant every month.

    I have a friend who writes everything from scratch and is proud to understand every line. Meanwhile, others ship releases in the time it takes him to set up authentication. In a way, we’re both right. My friend understands his codebase perfectly. I understand my users’ problems perfectly. Different skills for a different game.

    I find I’m doing much more creative work now as well. When you’re not burned out from wrestling with dependencies and import statements, you have brain space for the interesting questions. What should this actually do? How should it feel to use? What problem are we solving? Does it matter?

    Even yesterday, a newer developer asked if AI would make them obsolete. Wrong question. The right question is: what kind of developer do you want to be? The kind who can implement anything, or the kind who knows what’s worth implementing?

    Both matter. But only one is getting scarcer.

  • Ridiculous Swings

    I used to kill ideas before they had a real chance.

    Every spark of curiosity met the same mental gate: Is this worth my time? What’s the opportunity cost? Can I even do this? The auditor in my head would run the numbers and most ideas would die right there.

Then something shifted. Not because AI models got smarter, but because the marginal cost of curiosity dropped close to zero.

    Building a prototype used to mean weeks of coding. Now it’s an afternoon conversation with Claude or v0. I explore weirder ideas. And I take ridiculous swings because ideas cost almost nothing.

    The math changed. When exploration is cheap, you stop rationing curiosity. When you can afford to explore bad ideas, you stumble onto good ones you’d never have planned. Instead of hoarding best guesses, splurge on exploration.

    For myself, this freedom to explore without commitment is quite liberating. What about you?

  • The Designer-Developer Convergence

Most design work today involves an expensive translation layer. A designer creates a mockup, then a developer interprets it. Details get lost and feedback cycles stretch into weeks.

    The real problem isn’t lost details—it’s false validation. Static mockups look convincing but hide critical flaws. They don’t show how animations feel, how forms behave with real data, or how layouts break on different screens. Teams make decisions based on approximations, then discover real problems after development is finished.

    This often leads to cycles of expensive revisions. What looked perfect in Figma may require significant rework when built. Performance constraints force design compromises. Edge cases expose interaction problems invisible in static mockups.

    AI tools are changing this equation, now more than ever.

  • PressConf 2025

    Last week, I went to PressConf 2025 with around 140 other WordPress folks—a far cry from the thousands you’d see at typical conferences, but that’s kind of what made it special.

    PressConf brought together leaders in our space, in an environment designed for authenticity. 

  • AI is the New Baseline

    A leaked internal memo from Shopify’s CEO Tobi is making waves. His message is blunt: AI literacy isn’t optional—it’s a fundamental expectation for every employee. 

    Change happens fast—yesterday’s futuristic concepts are today’s baseline. AI proficiency is quickly becoming the new coding literacy—if it hasn’t already. Understanding how to effectively prompt, contextualize, prototype, and evaluate AI outputs isn’t just beneficial; it’s required.

    It’s intriguing that teams are encouraged to see autonomous AI agents as teammates, rather than defaulting to hiring more people. Scale a team’s impact without proportionally increasing their size—seems like a win to me.

There’s a natural tension here between AI’s efficiency and the potential loss of human insight, and it’s worth a deeper think. How would your team leverage autonomous AI teammates alongside human talent? At Automattic, we’re experimenting on this front; a subtle shift for now, but the concepts are there and worth doubling down on.

    More than ever, being good at your job means being good at learning. The most important skill in the age of AI isn’t coding, prompting, or any technical proficiency; it’s adaptability.

    AI won’t replace our creativity, empathy, or taste—but it can amplify them. The real question isn’t whether we’ll adopt AI, but whether we’ll adapt quickly enough.

    Will you resist this change, or embrace it?

  • Experimenting with Conversational Voice AI

I cloned my voice with PlayAI to create an AI voice support agent for my WordPress theme, Kanso. What started as a simple way to add audio to my blog posts became something much more interesting and engaging.

    Making it work was surprisingly straightforward—I needed just 23 seconds of my voice to create a convincing clone. Wild. 

    With the voice clone ready, I compiled questions from the launch post, support emails, and Slack DMs into a structured JSON file—each with its corresponding answer. Along with these Q&A pairs, I added guardrails to reduce potential mishaps, like instructing the agent not to reference the WordPress Customizer since Kanso is a block theme.
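The structure of that file is simple. A hypothetical sketch of what it might look like (the field names and the sample answers are mine for illustration; PlayAI’s actual configuration format may differ):

```python
import json

# Hypothetical shape for the agent's knowledge file: Q&A pairs plus
# guardrails that constrain the agent's answers.
knowledge = {
    "guardrails": [
        "Never reference the WordPress Customizer; Kanso is a block theme.",
        "If unsure of an answer, point the user to support instead of guessing.",
    ],
    "qa": [
        {"question": "How do I change fonts in Kanso?",
         "answer": "Open Styles in the Site Editor and adjust Typography."},
        {"question": "Where do I edit the site header?",
         "answer": "Edit the Header template part in the Site Editor."},
    ],
}

# Write the structured file the agent loads at startup.
with open("kanso-agent.json", "w") as f:
    json.dump(knowledge, f, indent=2)
```

Keeping the knowledge in one structured file means updating the agent is just editing JSON, not retraining anything.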

    I then built a WordPress block using Cursor and the PlayAI Web SDK, giving me more control over how people engage with the agent than the standard embed could offer. 

The agent works well enough, but it sometimes drifts off topic. Eventually I’d like to explore adding actions, like providing visual aids alongside conversations.

    Fine-tuning the agent was surreal—hearing an uncanny version of my voice explaining my ideas back to me, with admittedly more patience than myself. Strange and intriguing at the same time.

    Give it a try—ask it something about block themes, especially if you’re using Kanso. Maybe this is what support will look like: your voice, your knowledge, but available any time.