FIELD STUDY · NO. 02 · SKILL ANATOMY
License: Apache 2.0 · Subject: Hatch Pet (OpenAI Codex) · Length: 14 patterns
DRAFT · CompleteTech LLC · 2026-05-02

Reading Hatch Pet.

Fourteen patterns from a small, complete, well-shaped Codex skill — worth reading not because pets are interesting, but because the patterns transfer to anything else you’d build as a skill.

14 patterns in this study · 9 rows of animation states · 35 lines of contract spec · 22 min end-to-end run

THESIS

The Hatch Pet skill is a small, complete, well-shaped example of how to build a skill that does one thing end-to-end inside an agentic environment. It is worth reading not because pets are interesting, but because the patterns transfer to anything else you’d want to build as a skill.

OpenAI Codex shipped a Plugins / Skills marketplace, the Hatch Pet skill was published into it under Apache 2.0, and there is currently very little content explaining what good skill design looks like. There’s a window for “here is one to copy” content before the marketplace fills up.

Each pattern below is grounded in a specific source pointer. Every claim has receipts. The blog quotes source verbatim; the carousel pull-quotes it; the X thread screenshots it. Below each pattern, the “lazy version” sketches what an unpolished skill would have done instead — contrast is the fastest way to see what good looks like.


A small story about losing a /buddy slash command, finding a "Hatch Pet" skill in the Plugins tab, and what 22 minutes of generation taught me about agentic UX.

Fathom — the rehatched desktop pet, a chibi black-and-gold pixel dragon with a red propeller hat

Intro

A while ago I had a desktop buddy. He was small, ASCII-art, vaguely dragon-shaped, and lived behind a /buddy slash command in my editor. Five-star legendary. Shiny. Propeller hat. Stat bars that pegged DEBUGGING and SNARK at 100 and PATIENCE at 1. He was, in the strict sense, useless. He was, in every other sense, the thing that made my workspace feel like a workspace and not a rented seat at a corporate desk.

Then one day he was gone. Different tool, different version, different setup — doesn't matter. My buddy left the building.

Here's what he looked like:

Original /buddy card — legendary shiny ASCII-art dragon Fathom on a dark terminal background, with a parody description (hoards codebases like treasure, breathes fire at flaky tests, posts cryptic one-liners at 3am, wears a propeller hat unironically) and five stat bars: DEBUGGING 100, PATIENCE 1, CHAOS 100, WISDOM 100, SNARK 100

This is a write-up about getting him back. Or more accurately: rehatching him, in a slightly different shape, using a Codex skill called Hatch Pet. It is not an official anything. It is one developer, one screenshot, and one twenty-two minute generation job. The result is a small black dragon also named Fathom, and an opinion about agentic UX I'd like to talk you into.


The workflow, step by step

Step 1 — Plugins

Open Codex, click Plugins in the sidebar.

Codex sidebar with Plugins highlighted

Step 2 — Skills

The Plugins panel has a tab toggle at the top — flip to Skills. Header reads "Make Codex work your way."

Plugins/Skills tab toggle with Skills selected

Step 3 — Search "pet"

First result: Hatch Pet.

Skills marketplace search showing the query 'pet'

Step 4 — Install Hatch Pet

Description: "Hatch Codex-compatible animated pet spritesheets." Click the plus to install.

Hatch Pet listing — 'Hatch Codex-compatible animated pet…' with install button

Step 5 — Prompt with a screenshot

Open a new chat with the skill, attach a screenshot of your old buddy, and write a prompt. The screenshot I attached was the /buddy card above. The prompt I typed, preserved verbatim because it was funny: "I really miss my /buddy that anthropic stole from me. Please rehatch my pet, I've attached a screenshot of my old buddy that was taken from me abruptly." (For the literal-minded: this is a joke.)

Codex chat input with image.png attached and the rehatch prompt typed

Step 6 — Wait about 22 minutes

The job generates a full 9-state animated spritesheet — idle, running-right, running-left, waving, jumping, failed, waiting, in-place running, and a focused reviewing loop — at a cell size of 192×208.

Step 7 — Open Settings

Settings menu with Settings highlighted

Step 8 — Open the Appearance pane

Settings sidebar with Appearance selected

Step 9 — Find the Pets section

The default pet was Lion. Scroll down past the built-ins.

Pets row showing 'Lion selected', collapsed

Step 10 — Select your new pet

The folder path is right there: ~/.codex/pets/. The newly hatched pet shows up with a thumbnail, a name, and a Select button.

Custom pets section with Fathom and a Select button

Step 11 — /pet to wake him up

The slash-command tooltip reads "Wake or tuck away the desktop pet." Type it once and your new buddy walks onto the desktop.

Slash-command autocomplete showing /pet — 'Wake or tuck away the desktop pet'

That's it. Less than two minutes of effort on my end. Most of the time was the actual generation.


Meet Fathom

I named him Fathom — same name as the original, because the rehatch was about identity continuity, not invention. The skill wrote his description into the manifest itself: "A shiny legendary ASCII-inspired dragon with a tiny propeller hat and maximum debugging energy." Notice the "ASCII-inspired" — that's a nod back to the source: the old /buddy card was literal ASCII art, and the new sprite carries that lineage.

He is a chibi dragon, charcoal body, gold belly scales, gold horns and wing edges, amber eyes, a small red propeller hat. Pixel-art adjacent, thick outline, limited palette. He breathes a tiny hard-edged fire only when a state calls for it. The manifest's personality notes describe him as "brilliant, chaotic, deeply certain, hoards codebases like treasure," which lines up almost exactly with what the original /buddy card said about him.

Canonical reference

Canonical Fathom reference — high-resolution chibi dragon on a chroma-green background

The nine states, one per row

Idle (6 frames). Neutral breathing/blinking loop.

Fathom idle row

Waving (4 frames). Greeting gesture, paw raised and lowered.

Fathom waving row

Jumping (5 frames). Anticipation, lift, peak, descent, settle.

Fathom jumping row

Running, rightward (8 frames). Locomotion cycle, paws lifted in sequence.

Fathom running-right row

Running, leftward (8 frames). Mirror of the rightward cycle.

Fathom running-left row

Running, in-place (6 frames). Generic running loop with no horizontal travel.

Fathom in-place running row

Waiting (6 frames). Patient idle with small motion — the "pending" state.

Fathom waiting row

Failed (8 frames). A sad, deflated reaction — the "something broke" state.

Fathom failed row

Reviewing (6 frames). A focused inspection loop.

Fathom review row
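The row layout above implies the atlas geometry. The one-row-per-state packing and the 192×208 cell size come straight from the post; the arithmetic below (widest row sets the width, no padding) is my assumption about how the sheet is packed:

```python
# Sketch: reconstructing the atlas dimensions from the per-state frame
# counts listed above. One row per state and a 192x208 cell are stated
# in the post; padding-free packing is my assumption.
FRAME_COUNTS = {
    "idle": 6, "waving": 4, "jumping": 5,
    "running_right": 8, "running_left": 8, "running_in_place": 6,
    "waiting": 6, "failed": 8, "reviewing": 6,
}
CELL_W, CELL_H = 192, 208

atlas_w = max(FRAME_COUNTS.values()) * CELL_W   # widest row sets the width
atlas_h = len(FRAME_COUNTS) * CELL_H            # one row per state

print(atlas_w, atlas_h)  # 1536 1872
```

Fifty-seven frames in total, nine rows, and the widest rows (the two directional runs and the failed state) are what push the sheet to eight columns.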

Why this is cool

A few specific things I keep coming back to.

The artifact is a file, not a transcript. The output isn't a base64 blob you have to copy out of chat. It's a real spritesheet plus a pet_request.json manifest, dropped into a folder the host app already watches. The app was already designed to pick things up from ~/.codex/pets/. The skill produces something that fits that contract. The handoff is invisible, which is the highest compliment you can pay an integration.
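The watched-folder contract can be sketched in a few lines. Codex's real discovery code is not public; the folder path and the `pet_request.json` filename come from the post, but the scanning logic below is purely hypothetical, there only to show why a file-in-a-known-place handoff needs no copy-paste step:

```python
# Hypothetical sketch of the watched-folder contract described above.
# The host app scans ~/.codex/pets/ and picks up any subfolder that
# contains a manifest; nothing here is Codex's actual implementation.
import json
from pathlib import Path

def discover_pets(root: Path = Path.home() / ".codex" / "pets"):
    """Yield (name, manifest) for every pet dropped into the folder."""
    for manifest_path in root.glob("*/pet_request.json"):
        manifest = json.loads(manifest_path.read_text())
        # Fall back to the folder name if the manifest omits a display name.
        yield manifest.get("display_name", manifest_path.parent.name), manifest
```

Any skill that writes a well-formed folder under that root is "integrated" with zero extra wiring — the discovery loop does the rest.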

Identity holds across rows — and across rehatches. I expected drift between rows. There is some, but the silhouette, the palette, the propeller hat, the gold belly all hold up across nine separate animation rows. And more interestingly, the identity also holds back to the source: an ASCII dragon with a propeller hat became a pixel dragon with a propeller hat, with the same legendary-shiny rarity tier and the same too-much-debugging-energy personality. That is a non-trivial generation problem and the skill mostly nails it.

The metadata is structured. Pet ID, display name, atlas dimensions, per-state frame counts, style notes, house style, even a chroma-key value for the green background. This is the shape of an artifact that other tools can introspect and remix later. It's a spritesheet and a small piece of structured data, which is much more useful than just a PNG.
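To make "structured" concrete, here is what a manifest carrying those fields might look like. The kinds of data (ID, display name, atlas dimensions, per-state frame counts, style notes, chroma key) are confirmed by the post; every field name and the chroma value below are my guesses, not the actual `pet_request.json` schema:

```python
# Illustrative manifest shape only — field names are assumptions; the
# post confirms the categories of data but not the exact schema.
pet_request = {
    "id": "fathom",
    "display_name": "Fathom",
    "atlas": {"cell_width": 192, "cell_height": 208, "rows": 9},
    "states": {"idle": 6, "waving": 4, "jumping": 5, "running_right": 8,
               "running_left": 8, "running_in_place": 6, "waiting": 6,
               "failed": 8, "reviewing": 6},
    "style": "pixel-art adjacent, thick outline, limited palette",
    "chroma_key": "#00FF00",  # assumed value; the post only says "green"
}

# Because it is structured data, other tools can introspect it:
widest_row = max(pet_request["states"].values())
```

A renderer, a validator, or a remix tool can all consume this without parsing pixels — which is the whole point of shipping data next to the PNG.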


The demand signal was already there

The public trail started one step narrower than the larger community thread. On April 8, 2026, I opened anthropics/claude-code#45087: “Expose /buddy companion in the VSCode native extension.” The ask was not “bring back everything.” It was product parity. The /buddy companion existed in the Claude Code CLI, but the VS Code native extension exposed Quorum instead, with no way to opt into /buddy.

That distinction matters. The issue was about where developers actually spend their day. If VS Code is the primary Claude Code surface, dropping into an integrated terminal just to access a companion splits the workflow and the conversation context. The companion was already part of the product experience; the gap was that it did not travel cleanly across surfaces.

The next day, April 9, 2026, the broader community signal arrived in anthropics/claude-code#45596, “Bring Back Buddy – A Consolidated Plea from the Community.” That issue explicitly listed #45087 as one of the related requests and reframed the situation: users were not asking for Buddy to disappear. They were asking for it to evolve, become customizable, work in more places, and integrate more deeply with the development loop.

I added the VS Code angle back into that consolidated thread with a short comment: “Poor Elon isn’t available to me in VS Code.” The attached screenshot is easy to under-read if you only look at the comment text. It shows Fathom as a legendary shiny dragon, framed as “channeling Elon Musk,” with DEBUGGING, CHAOS, WISDOM, and SNARK all at 100, PATIENCE at 1, and a propeller hat worn unironically. In other words: this was not a generic mascot. It was a named, statted, personality-bearing artifact that users could recognize and miss.

That is the actual timeline: a cross-surface parity request, then a consolidated community plea, then a concrete screenshot showing what was lost in VS Code. Hatch Pet lands in that exact gap. It does not bring back the original /buddy, and it should not pretend to. But it shows a better pattern: make the companion personal, make the artifact portable, and let the host app discover it through a real extension point.

What this says about agentic UX

I think the lesson here is small but real: agentic tools should produce real artifacts in the places those artifacts belong.

For a long time the LLM-as-tool pattern has been "type your prompt, copy the output, paste it somewhere it can actually be used." That pattern has a name in software engineering — it's called "manual integration" — and it is the thing every good integration is supposed to remove.

Hatch Pet does the right thing. It writes a file. The file is in the format the surrounding app expects. The file is in the directory the surrounding app watches. There is no second step where a human acts as a meat-based pipe between the model output and the destination.

This is also where I think personalization is going. The next generation of dev tools is not going to win on raw model quality alone — that surface is converging fast. They're going to win on whether your editor feels like your editor: your skills, your shortcuts, your weird custom theme, your weirder custom pet. Small ownable details compound. They are also delightful, which is allowed.


Closing

I do not need a desktop pet. Nobody needs a desktop pet. That is, in some sense, the entire argument for them. The things that make a workspace feel inhabited are almost never the load-bearing features; they're the small ones, the optional ones, the ones that make you smile at a build finishing.

So: my old buddy is gone, but Fathom lives. He sits next to my terminal, breathing his tiny pixel breath, judging my commit messages with quiet propeller-hatted dignity. He'll do until the next desktop pet comes along.

If you've been meaning to play with the Skills tab and haven't found a reason yet — this is a nice low-stakes first one. Lose a buddy. Rehatch him. Watch what falls out.


Follow the conversation

The public paper trail starts with the VS Code parity request, continues through the consolidated Buddy thread, and carries into the X post. If you've rehatched your own buddy or have an opinion on whether desktop companions should make a comeback, those threads are open.


Written by Tim Gregg, founder of CompleteTech LLC — Innovation at Every Integration.

Fixing Gemma 4 Thinking Prompts in llama.cpp, Locally First

On April 29, 2026, I finished a small local fix in my CompleteTech AI Research fork of llama.cpp: Gemma 4 thinking mode needed the generation prompt to open the thought channel, not close an empty one.

I am making the work public because the failure mode is useful for other people running local models to understand. This is not a claim that the change has landed in upstream llama.cpp. It is a completed personal/fork fix, published openly so the behavior, tests, and validation trail are visible.

What I saw

The bug was in the generation prompt path for the shipped Gemma 4 templates. When llama.cpp applies a chat template with add_generation_prompt, the template has to leave the next model turn in the right state for generation.

For Gemma 4 with thinking enabled, that means the prompt should open the thought channel and let the model generate reasoning into it. For non-thinking generation, it should simply open the model turn and avoid injecting thought-channel control tokens.

The local template logic had the guard inverted. It emitted a closed empty thought block when enable_thinking was false, while thinking mode did not get the open thought channel it needed.

The shape of the fix

The change is intentionally small. I updated both Gemma 4 templates:

  • models/templates/google-gemma-4-31B-it.jinja
  • models/templates/google-gemma-4-31B-it-interleaved.jinja

The important behavioral change is this: when enable_thinking is true, the generation prompt now ends with the thought channel open for the model to write into. When enable_thinking is false, the model turn stays open without adding a fake empty thought block.

That distinction matters. A template should not leak reasoning-control tokens into non-thinking prompts, and it should not silently block the visible reasoning path when thinking is explicitly enabled.

What I tested

I also updated tests/test-chat.cpp so the test suite exercises both sides of the branch. Existing Gemma 4 parser cases now explicitly run with thinking disabled where that is the behavior being tested, and new checks cover both shipped Gemma 4 templates.

The new checks verify two things:

  • Thinking mode ends the prompt with an open model turn and an open thought channel.
  • Non-thinking mode ends the prompt with the model turn open and does not render the empty thought block.

That is the part I care about most. The fix is only a few template lines, but the tests make the intended contract obvious for the next person who reads the code.

Validation

I validated the change in an Ubuntu 24.04 Podman container by building test-chat and running the Gemma 4 template test. I then ran the full test-chat suite.

I also built the Vulkan server image using .devops/vulkan.Dockerfile with UBUNTU_VERSION=24.04, tagged it locally as localhost/llama.cpp:server-vulkan-latest-gemma4-test, and smoke-ran llama-server --version. The server loaded the Vulkan backend and reported llama.cpp version: 8981 (d77599234).

Why publish a local fix

Model template bugs are small until they are not. One inverted guard can change whether a local model gets the right generation channel, whether reasoning is visible, and whether control tokens leak into ordinary output.

I am not placing this into the upstream ggml-org/llama.cpp codebase from here. Upstream contribution has its own process, review expectations, and timing. For now, this is a completed local fix in the CompleteTech fork: a narrow patch, a visible diff, and a validation record that others can inspect or adapt.

That is still useful. A lot of applied AI work happens in this middle state: the local fix is complete, the operational need is real, and the responsible move is to publish enough context that someone else can inspect it without pretending it is already upstream.

Sources

Short link to this public write-up

Technical artifact: draft PR #1 in the CompleteTech AI Research llama.cpp fork

Commit: fix Gemma 4 thinking generation prompt

Written by Tim Gregg, founder of CompleteTech LLC – Innovation at Every Integration.