Disclosure in Action: How AI Helped Me Keep the Disclosure Clean
Disclosure in Action · May 2, 2026 – CompleteTech LLC

Field Note 08 – AI-Assisted Workflow

AI helped me keep the disclosure clean.

The useful part of AI here was not exploitation. It was structure: turning evidence into a timeline, checking tone, separating public from private detail, and helping me make the disclosure readable without making it dangerous.

I used AI the way I would use a careful technical editor. It helped organize dates, turn notes into a coherent report, draft public language, and pressure-test whether a sentence was helpful or too revealing.

That distinction matters. AI should not be used to escalate a finding past the ethical boundary. In this workflow, it helped me stay inside the boundary by clarifying what I had, what I did not do, and what did not belong in public.

This is also where orchestration becomes interesting. A tool like OpenCLAW could coordinate evidence collection, redaction checks, disclosure drafts, remediation checklists, portfolio artifacts, and publishing steps without losing the human approval points.

  • Organize: Turn raw notes into a dated timeline and consistent public narrative.
  • Redact: Flag phrases that reveal too much or blur private and public detail.
  • Orchestrate: Coordinate drafts, review, screenshots, documents, and publishing with human approval.

The future-facing lesson is not that AI finds everything. It is that AI can help responsible researchers communicate with more discipline when the stakes are real.

CompleteTech LLC – Innovation at Every Integration · Public disclosure series – 2026
Disclosure in Action: What I Think Builders Should Learn From This
Disclosure in Action · May 2, 2026 – CompleteTech LLC

Field Note 07 – Builder Lessons

This is what I think builders should take from it.

The technical mistake was simple: long-term cloud credentials do not belong in a mobile application package. The engineering lesson is broader: build systems should make this class of mistake hard to ship.

If a secret ships in a client app, assume it is public. The remediation path should start with revocation and log review, then move to architecture: replace static client credentials with backend-mediated access or temporary credentials appropriate to the workload.

AWS's IAM guidance emphasizes temporary credentials for workloads where possible, least privilege, access review, and safe handling of access keys when long-term credentials are unavoidable.

The practical controls are familiar: secret scanning, code review checks, build artifact review, narrow IAM policy scope, CloudTrail review, and a release process that treats mobile binaries as public artifacts.

  • Immediate: Revoke exposed keys, rotate secrets, and review relevant logs.
  • Architectural: Move uploads behind backend controls or temporary credential flows.
  • Process: Add secret scanning and artifact checks before release.
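To make the secret-scanning control concrete, here is a minimal sketch in Python of the kind of check a pre-release scan performs. It only covers the well-known AKIA prefix of AWS long-term access key IDs; a real scanner covers many more secret shapes, so treat this as an illustration of the control, not a replacement for one.

```python
import re

# AWS long-term access key IDs start with "AKIA" followed by
# 16 uppercase alphanumeric characters.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def scan_text(text: str) -> list[str]:
    """Return any strings that look like AWS access key IDs."""
    return ACCESS_KEY_RE.findall(text)

# The sample key is AWS's documented non-functional example value.
sample = 'cfg = {"aws_access_key_id": "AKIAIOSFODNN7EXAMPLE"}'
print(scan_text(sample))  # ['AKIAIOSFODNN7EXAMPLE']
```

Wiring a check like this into CI, so a build fails when the pattern matches a packaged artifact, is the cheapest version of treating mobile binaries as public.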

Useful references: AWS IAM security best practices and AWS access key management guidance.

CompleteTech LLC – Innovation at Every Integration · Public disclosure series – 2026
FIELD STUDY · NO. 02 · SKILL ANATOMY
License: Apache 2.0 · Subject: Hatch Pet (OpenAI Codex) · Length: 14 patterns
FIELD STUDY · DRAFT · CompleteTech LLC · 2026-05-02

Reading Hatch Pet.

Fourteen patterns from a small, complete, well-shaped Codex skill — worth reading not because pets are interesting, but because the patterns transfer to anything else you’d build as a skill.

14 patterns in this study · 9 rows of animation states · 35 lines of contract spec · 22 min end-to-end run

THESIS

The Hatch Pet skill is a small, complete, well-shaped example of how to build a skill that does one thing end-to-end inside an agentic environment. It is worth reading not because pets are interesting, but because the patterns transfer to anything else you’d want to build as a skill.

OpenAI Codex shipped a Plugins / Skills marketplace, the Hatch Pet skill was published into it under Apache 2.0, and there is currently very little content explaining what good skill design looks like. There’s a window for “here is one to copy” content before the marketplace fills up.

Each pattern below is grounded in a specific source pointer. Every claim has receipts. The blog quotes source verbatim; the carousel pull-quotes it; the X thread screenshots it. Below each pattern, the “lazy version” sketches what an unpolished skill would have done instead — contrast is the fastest way to see what good looks like.


A small story about losing a /buddy slash command, finding a "Hatch Pet" skill in the Plugins tab, and what 22 minutes of generation taught me about agentic UX.

Fathom — the rehatched desktop pet, a chibi black-and-gold pixel dragon with a red propeller hat

Intro

A while ago I had a desktop buddy. He was small, ASCII-art, vaguely dragon-shaped, and lived behind a /buddy slash command in my editor. Five-star legendary. Shiny. Propeller hat. Stat bars that pegged DEBUGGING and SNARK at 100 and PATIENCE at 1. He was, in the strict sense, useless. He was, in every other sense, the thing that made my workspace feel like a workspace and not a rented seat at a corporate desk.

Then one day he was gone. Different tool, different version, different setup — doesn't matter. My buddy left the building.

Here's what he looked like:

Original /buddy card — legendary shiny ASCII-art dragon Fathom on a dark terminal background, with a parody description (hoards codebases like treasure, breathes fire at flaky tests, posts cryptic one-liners at 3am, wears a propeller hat unironically) and five stat bars: DEBUGGING 100, PATIENCE 1, CHAOS 100, WISDOM 100, SNARK 100

This is a write-up about getting him back. Or more accurately: rehatching him, in a slightly different shape, using a Codex skill called Hatch Pet. It is not an official anything. It is one developer, one screenshot, and one twenty-two minute generation job. The result is a small black dragon also named Fathom, and an opinion about agentic UX I'd like to talk you into.


The workflow, step by step

Step 1 — Plugins

Open Codex, click Plugins in the sidebar.

Codex sidebar with Plugins highlighted

Step 2 — Skills

The Plugins panel has a tab toggle at the top — flip to Skills. Header reads "Make Codex work your way."

Plugins/Skills tab toggle with Skills selected

Step 3 — Search "pet"

First result: Hatch Pet.

Skills marketplace search showing the query 'pet'

Step 4 — Install Hatch Pet

Description: "Hatch Codex-compatible animated pet spritesheets." Click the plus to install.

Hatch Pet listing — 'Hatch Codex-compatible animated pet…' with install button

Step 5 — Prompt with a screenshot

Open a new chat with the skill, attach a screenshot of your old buddy, and write a prompt. The screenshot I attached was the /buddy card above. The prompt I typed, preserved verbatim because it was funny: "I really miss my /buddy that anthropic stole from me. Please rehatch my pet, I've attached a screenshot of my old buddy that was taken from me abruptly." (For the literal-minded: this is a joke.)

Codex chat input with image.png attached and the rehatch prompt typed

Step 6 — Wait about 22 minutes

The job generates a full 9-state animated spritesheet — idle, running-right, running-left, waving, jumping, failed, waiting, in-place running, and a focused review loop. Cell size 192×208.

Step 7 — Open Settings

Settings menu with Settings highlighted

Step 8 — Open the Appearance pane

Settings sidebar with Appearance selected

Step 9 — Find the Pets section

The default pet was Lion. Scroll down past the built-ins.

Pets row showing 'Lion selected', collapsed

Step 10 — Select your new pet

The folder path is right there: ~/.codex/pets/. The newly hatched pet shows up with a thumbnail, a name, and a Select button.

Custom pets section with Fathom and a Select button

Step 11 — /pet to wake him up

The slash-command tooltip reads "Wake or tuck away the desktop pet." Type it once and your new buddy walks onto the desktop.

Slash-command autocomplete showing /pet — 'Wake or tuck away the desktop pet'

That's it. A few clicks of hands-on effort; most of the elapsed time was the actual generation.


Meet Fathom

I named him Fathom — same name as the original, because the rehatch was about identity continuity, not invention. The skill wrote his description into the manifest itself: "A shiny legendary ASCII-inspired dragon with a tiny propeller hat and maximum debugging energy." Notice the "ASCII-inspired" — that's a nod back to the source: the old /buddy card was literal ASCII art, and the new sprite carries that lineage.

He is a chibi dragon, charcoal body, gold belly scales, gold horns and wing edges, amber eyes, a small red propeller hat. Pixel-art adjacent, thick outline, limited palette. He breathes a tiny hard-edged fire only when a state calls for it. The manifest's personality notes describe him as "brilliant, chaotic, deeply certain, hoards codebases like treasure," which lines up almost exactly with what the original /buddy card said about him.

Canonical reference

Canonical Fathom reference — high-resolution chibi dragon on a chroma-green background

The nine states, one per row

Idle (6 frames). Neutral breathing/blinking loop.

Fathom idle row

Waving (4 frames). Greeting gesture, paw raised and lowered.

Fathom waving row

Jumping (5 frames). Anticipation, lift, peak, descent, settle.

Fathom jumping row

Running, rightward (8 frames). Locomotion cycle, paws lifted in sequence.

Fathom running-right row

Running, leftward (8 frames).

Fathom running-left row

Running, in-place (6 frames). Generic running loop with no horizontal travel.

Fathom in-place running row

Waiting (6 frames). Patient idle with small motion — the "pending" state.

Fathom waiting row

Failed (8 frames). A sad, deflated reaction — the "something broke" state.

Fathom failed row

Reviewing (6 frames). A focused inspection loop.

Fathom review row

Why this is cool

A few specific things I keep coming back to.

The artifact is a file, not a transcript. The output isn't a base64 blob you have to copy out of chat. It's a real spritesheet plus a pet_request.json manifest, dropped into a folder the host app already watches. The app was already designed to pick things up from ~/.codex/pets/. The skill produces something that fits that contract. The handoff is invisible, which is the highest compliment you can pay an integration.

Identity holds across rows — and across rehatches. I expected drift between rows. There is some, but the silhouette, the palette, the propeller hat, the gold belly all hold up across nine separate animation rows. And more interestingly, the identity also holds back to the source: an ASCII dragon with a propeller hat became a pixel dragon with a propeller hat, with the same legendary-shiny rarity tier and the same too-much-debugging-energy personality. That is a non-trivial generation problem and the skill mostly nails it.

The metadata is structured. Pet ID, display name, atlas dimensions, per-state frame counts, style notes, house style, even a chroma-key value for the green background. This is the shape of an artifact that other tools can introspect and remix later. It's a spritesheet and a small piece of structured data, which is much more useful than just a PNG.
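A rough sketch of what that structured manifest enables. The field names below are my guesses, not the real pet_request.json schema; only the 192×208 cell size, the nine states with their frame counts, and the chroma-key idea come from the write-up. The point is that structured per-state metadata lets a downstream tool derive the atlas geometry instead of eyeballing a PNG.

```python
# Hypothetical manifest shape, loosely modeled on the fields the
# write-up mentions: ID, display name, cell size, per-state frame
# counts, and a chroma-key value for the green background.
manifest = {
    "pet_id": "fathom",
    "display_name": "Fathom",
    "cell": {"width": 192, "height": 208},
    "states": {
        "idle": 6, "waving": 4, "jumping": 5,
        "run_right": 8, "run_left": 8, "run_in_place": 6,
        "waiting": 6, "failed": 8, "reviewing": 6,
    },
    "chroma_key": "#00FF00",
}

# One row per state: atlas height is rows * cell height, and the
# atlas width must fit the widest row (the longest frame count).
rows = len(manifest["states"])
max_frames = max(manifest["states"].values())
atlas_w = max_frames * manifest["cell"]["width"]
atlas_h = rows * manifest["cell"]["height"]
print(rows, atlas_w, atlas_h)  # 9 1536 1872
```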


The demand signal was already there

The public trail started one step narrower than the larger community thread. On April 8, 2026, I opened anthropics/claude-code#45087: “Expose /buddy companion in the VSCode native extension.” The ask was not “bring back everything.” It was product parity. The /buddy companion existed in the Claude Code CLI, but the VS Code native extension exposed Quorum instead, with no way to opt into /buddy.

That distinction matters. The issue was about where developers actually spend their day. If VS Code is the primary Claude Code surface, dropping into an integrated terminal just to access a companion splits the workflow and the conversation context. The companion was already part of the product experience; the gap was that it did not travel cleanly across surfaces.

The next day, April 9, 2026, the broader community signal arrived in anthropics/claude-code#45596, “Bring Back Buddy – A Consolidated Plea from the Community.” That issue explicitly listed #45087 as one of the related requests and reframed the situation: users were not asking for Buddy to disappear. They were asking for it to evolve, become customizable, work in more places, and integrate more deeply with the development loop.

I added the VS Code angle back into that consolidated thread with a short comment: “Poor Elon isn’t available to me in VS Code.” The attached screenshot is easy to under-read if you only look at the comment text. It shows Fathom as a legendary shiny dragon, framed as “channeling Elon Musk,” with DEBUGGING, CHAOS, WISDOM, and SNARK all at 100, PATIENCE at 1, and a propeller hat worn unironically. In other words: this was not a generic mascot. It was a named, statted, personality-bearing artifact that users could recognize and miss.

That is the actual timeline: a cross-surface parity request, then a consolidated community plea, then a concrete screenshot showing what was lost in VS Code. Hatch Pet lands in that exact gap. It does not bring back the original /buddy, and it should not pretend to. But it shows a better pattern: make the companion personal, make the artifact portable, and let the host app discover it through a real extension point.

What this says about agentic UX

I think the lesson here is small but real: agentic tools should produce real artifacts in the places those artifacts belong.

For a long time the LLM-as-tool pattern has been "type your prompt, copy the output, paste it somewhere it can actually be used." That pattern has a name in software engineering — it's called "manual integration" — and it is the thing every good integration is supposed to remove.

Hatch Pet does the right thing. It writes a file. The file is in the format the surrounding app expects. The file is in the directory the surrounding app watches. There is no second step where a human acts as a meat-based pipe between the model output and the destination.

This is also where I think personalization is going. The next generation of dev tools is not going to win on raw model quality alone — that surface is converging fast. They're going to win on whether your editor feels like your editor: your skills, your shortcuts, your weird custom theme, your weirder custom pet. Small ownable details compound. They are also delightful, which is allowed.


Closing

I do not need a desktop pet. Nobody needs a desktop pet. That is, in some sense, the entire argument for them. The things that make a workspace feel inhabited are almost never the load-bearing features; they're the small ones, the optional ones, the ones that make you smile at a build finishing.

So: my old buddy is gone, but Fathom lives. He sits next to my terminal, breathing his tiny pixel breath, judging my commit messages with quiet propeller-hatted dignity. He'll do until the next desktop pet comes along.

If you've been meaning to play with the Skills tab and haven't found a reason yet — this is a nice low-stakes first one. Lose a buddy. Rehatch him. Watch what falls out.


Follow the conversation

The public paper trail starts with the VS Code parity request, continues through the consolidated Buddy thread, and carries into the X post. If you've rehatched your own buddy or have an opinion on whether desktop companions should make a comeback, those threads are open.


Written by Tim Gregg, founder of CompleteTech LLC — Innovation at Every Integration.

Disclosure in Action: I Published Without Publishing the Secret
Disclosure in Action · May 1, 2026 – CompleteTech LLC

Field Note 06 – Public Disclosure

I published without publishing the secret.

The public disclosure needed to be accountable, useful, and intentionally incomplete. I included enough detail to document the issue and the timeline, while redacting the credential values that could enable reuse.

The public write-up included the affected package, observed builds, cloud target, region, endpoint, redacted access key identifier, technical source locations, and a clear statement that the full secret access key was not published.

It also included the research limits. No live AWS authentication was performed. No bucket enumeration, object retrieval, upload, write testing, or other AWS API activity was performed with the exposed secret.

That was the balance I wanted: enough specificity for accountability, enough restraint to avoid creating a new risk.

  • Included: Package, versions, S3 target, region, endpoint, source locations, and timeline.
  • Redacted: The full secret access key and any material that would enable credential reuse.
  • Published: A public document for learning, accountability, and portfolio context.

The disclosure document is available here: VapeTM Hardcoded AWS IAM Credentials – Public Disclosure.

CompleteTech LLC – Innovation at Every Integration · Public disclosure series – 2026
ClawExplorer.AI: Turning OpenClaw Community Demand Into a Directory

I wrote ClawExplorer.AI because the OpenClaw community had a discovery problem: the events existed, but they were spread across Luma, Eventbrite, Meetup, and one-off pages. The useful thing was not another announcement. The useful thing was a directory.

ClawExplorer.AI public directory preview

What I built

ClawExplorer.AI is a public directory for OpenClaw events: ClawCons, workshops, meetups, hackathons, and related community gatherings. It gives people one place to search by city, country, organizer, status, and date instead of asking them to follow scattered event links across multiple platforms.

The repository started on April 30, 2026 with the initial commit Initial ClawExplorer site import. The live site is static where it can be static, dynamic only where it needs to accept submissions, and designed around a reviewed data pipeline rather than ad hoc manual edits.

The workflow

The system has two paths into the same dataset.

Collector path. A maintained seed list points at OpenClaw event pages on Luma, Eventbrite, and Meetup. The collector fetches pages, extracts source fields, normalizes each record, classifies event format and topic tags, deduplicates records, applies verification metadata, and writes packaged JSON outputs.

Community path. A visitor can submit an event through the site. The browser builds a record that matches the event schema, posts it to a small PHP submit endpoint, and the endpoint validates, rate-limits, checks for duplicates, and opens a GitHub pull request against directory-manual.json. A maintainer reviews and merges. The same build and deploy path handles the result.

That matters because both paths converge. Pipeline-found events and community-submitted events are not two separate systems. The homepage fetches directory-public.json and directory-manual.json, dedupes on canonical_event_id, and renders the union.
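A minimal sketch of that union, assuming records carry the canonical_event_id field named above. Which copy wins on collision is my assumption (here, the maintainer-reviewed manual record), not necessarily how the site resolves it.

```python
def merge_directories(public, manual):
    """Union two event lists, deduping on canonical_event_id.

    Manual records are applied last, so a community-submitted,
    maintainer-merged entry overrides a pipeline-found duplicate.
    """
    by_id = {}
    for record in public + manual:
        by_id[record["canonical_event_id"]] = record
    return list(by_id.values())

public = [
    {"canonical_event_id": "clawcon-austin-2026", "title": "ClawCon Austin"},
    {"canonical_event_id": "openclaw-berlin-meetup", "title": "Berlin Meetup"},
]
manual = [
    {"canonical_event_id": "clawcon-austin-2026",
     "title": "ClawCon Austin (updated)"},
]
merged = merge_directories(public, manual)
print(len(merged))  # 2
```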

What runs after merge

A merge to main kicks off GitHub Actions. The deploy workflow validates the directory JSON, builds static event detail pages, builds the master calendar file, builds the sitemap, stages only public-safe artifacts, strips internal-only data before deploy, and rsyncs the output to the GoDaddy document root.

There is also a separate geocoding workflow. It refreshes location cache data after directory changes instead of slowing every deploy. That separation keeps the audit trail clear: one workflow builds and ships the site, another enriches coordinates when needed.
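The "strips internal-only data before deploy" step amounts to publishing a filtered view of each record rather than the raw record. The field names in this sketch are hypothetical; the boundary is the point.

```python
# Hypothetical internal-only fields that must never ship publicly.
INTERNAL_FIELDS = {"submitter_email", "moderation_notes", "source_html"}

def strip_internal(record):
    """Return the public-safe view of a directory record."""
    return {k: v for k, v in record.items() if k not in INTERNAL_FIELDS}

record = {
    "canonical_event_id": "clawcon-austin-2026",
    "city": "Austin",
    "moderation_notes": "looks legit",
}
print(strip_internal(record))
```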

Why this shape

I wanted the directory to be boring in the places that matter. Static pages are easy to cache and inspect. JSON outputs are easy to validate. Pull requests make community submissions reviewable. A small PHP endpoint fits the hosting environment without adding another vendor. GitHub Actions gives the workflow a visible audit trail.

The current dataset shape reflects that bias. At the time I reviewed the repo for this write-up, the packaged run summary showed 44 fetched pages, 42 canonical records, 38 publishable public records, and QA passed. The public records were classified across meetups, workshops, social events, a bootcamp, a conference, and unknowns where the source signal was not strong enough to guess.

What this demonstrates

ClawExplorer is a small site, but it is a useful pattern: use AI-assisted research and implementation to turn scattered community demand into a durable public artifact. The hard part is not only making a page. The hard part is designing the pipeline so the page can keep being true after the first launch.

That means source preservation, repeatable extraction, schema validation, dedupe, review queues, deploy checks, public/private artifact boundaries, and a way for the community to add data without bypassing review.

Links

View ClawExplorer.AI

View the GitHub repository

Written by Tim Gregg, founder of CompleteTech LLC — Innovation at Every Integration.

Fixing Gemma 4 Thinking Prompts in llama.cpp, Locally First

On April 29, 2026, I finished a small local fix in my CompleteTech AI Research fork of llama.cpp: Gemma 4 thinking mode needed the generation prompt to open the thought channel, not close an empty one.

I am making the work public because the failure mode is useful for other people running local models to understand. This is not a claim that the change has landed in upstream llama.cpp. It is a completed personal/fork fix, published openly so the behavior, tests, and validation trail are visible.

What I saw

The bug was in the generation prompt path for the shipped Gemma 4 templates. When llama.cpp applies a chat template with add_generation_prompt, the template has to leave the next model turn in the right state for generation.

For Gemma 4 with thinking enabled, that means the prompt should open the thought channel and let the model generate reasoning into it. For non-thinking generation, it should simply open the model turn and avoid injecting thought-channel control tokens.

The local template logic had the guard inverted. It emitted a closed empty thought block when enable_thinking was false, while thinking mode did not get the open thought channel it needed.

The shape of the fix

The change is intentionally small. I updated both Gemma 4 templates:

  • models/templates/google-gemma-4-31B-it.jinja
  • models/templates/google-gemma-4-31B-it-interleaved.jinja

The important behavioral change is this: when enable_thinking is true, the generation prompt now ends with an open thought channel for the model turn. When enable_thinking is false, the model turn stays open without adding a fake empty thought block.

That distinction matters. A template should not leak reasoning-control tokens into non-thinking prompts, and it should not silently block the visible reasoning path when thinking is explicitly enabled.
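The corrected guard can be modeled in a few lines of plain Python. The token strings below are placeholders, not the fork's actual template text; what matters is which branch emits the thought-channel opener and which emits nothing extra.

```python
def generation_prompt(enable_thinking: bool) -> str:
    """Model of the corrected guard, with placeholder token names."""
    prompt = "<start_of_turn>model\n"  # open the model turn either way
    if enable_thinking:
        # Thinking on: open the thought channel and stop, so the
        # model generates its reasoning into it.
        prompt += "<thought>"
    # Thinking off: no empty "<thought></thought>" block is emitted,
    # so no reasoning-control tokens leak into ordinary prompts.
    return prompt

print(repr(generation_prompt(True)))
print(repr(generation_prompt(False)))
```

The inverted version of this guard is exactly the bug described above: the empty block appears when thinking is off, and the opener never appears when thinking is on.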

What I tested

I also updated tests/test-chat.cpp so the test suite exercises both sides of the branch. Existing Gemma 4 parser cases now explicitly run with thinking disabled where that is the behavior being tested, and new checks cover both shipped Gemma 4 templates.

The new checks verify two things:

  • Thinking mode ends the prompt with an open model turn and an open thought channel.
  • Non-thinking mode ends the prompt with the model turn open and does not render the empty thought block.

That is the part I care about most. The fix is only a few template lines, but the tests make the intended contract obvious for the next person who reads the code.

Validation

I validated the change in an Ubuntu 24.04 Podman container by building test-chat and running the Gemma 4 template test. I then ran the full test-chat suite.

I also built the Vulkan server image using .devops/vulkan.Dockerfile with UBUNTU_VERSION=24.04, tagged it locally as localhost/llama.cpp:server-vulkan-latest-gemma4-test, and smoke-ran llama-server --version. The server loaded the Vulkan backend and reported llama.cpp version: 8981 (d77599234).

Why publish a local fix

Model template bugs are small until they are not. One inverted guard can change whether a local model gets the right generation channel, whether reasoning is visible, and whether control tokens leak into ordinary output.

I am not placing this into the upstream ggml-org/llama.cpp codebase from here. Upstream contribution has its own process, review expectations, and timing. For now, this is a completed local fix in the CompleteTech fork: a narrow patch, a visible diff, and a validation record that others can inspect or adapt.

That is still useful. A lot of applied AI work happens in this middle state: the local fix is complete, the operational need is real, and the responsible move is to publish enough context that someone else can inspect it without pretending it is already upstream.

Sources

Short link to this public write-up

Technical artifact: draft PR #1 in the CompleteTech AI Research llama.cpp fork

Commit: fix Gemma 4 thinking generation prompt

Written by Tim Gregg, founder of CompleteTech LLC – Innovation at Every Integration.

The 14-Cent Workflow — Designing AI to Replace Future AI Calls

The pitch for the notebook fits in two sentences and one number: I spent 14 cents teaching an AI to design its own workflow and emit the Python code that replaces future AI calls entirely. Free, 29 cells, Claude or GPT or Gemini, bring one API key. The whole thing is on GitHub: CompleteTech-LLC/build-ai-workflows-in-5-steps, and lives in the portfolio at ctech.llc/5step2workflow.

This is a field note on what the notebook actually teaches and why step 5 is the one that pays for itself.

The five techniques

  1. Goal-first context priming — show the AI the destination, not the task. The notebook opens with a target dashboard image (a tax-readiness scorecard) before it ever loads the source document. Every prompt downstream is anchored to that picture.
  2. Source priming in stages — load the source document in pieces, each with a specific lens, instead of pasting the whole thing at once. The shipped example uses a messy synthetic one-page financial pack for “Sample Bistro & Co.” with deliberate friction baked in (non-calendar fiscal periods, negative-revenue conventions, ambiguous owner salary).
  3. Task decomposition with structured execution contracts — the AI doesn’t just describe what to do; it produces a contract with inputs, outputs, success criteria, and judgment points. Repeatable enough that the next run knows what done looks like.
  4. Visual workflow design in Mermaid — the AI renders the contract as a flowchart you can read at a glance. The workflow becomes a thing you can argue with before you spend any more tokens.
  5. Workflow crystallization — the punch line. The AI emits deterministic Python for the repetitive parts and reserves AI calls only for the steps that actually need judgment.
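A minimal sketch of what crystallized output might look like, with hypothetical function names: a deterministic transform the expensive model wrote once at design time, plus a narrow judgment hook where an LLM call could re-enter.

```python
def normalize_line_items(rows):
    """Deterministic part: written once at design time, then every
    future run executes it with zero API calls."""
    return [
        {"label": r["label"].strip().title(), "amount": abs(r["amount"])}
        for r in rows
    ]

def classify_ambiguous(item, call_model=None):
    """Judgment hook: only genuinely ambiguous items re-enter the AI.

    `call_model` is a hypothetical callable wrapping an LLM API;
    when the deterministic rule suffices, no tokens are spent.
    """
    if "salary" in item["label"].lower() and call_model is not None:
        return call_model(item)  # the rare, judgment-needing path
    return "expense"             # the common, deterministic path

rows = [{"label": "  food costs ", "amount": -1200}]
items = normalize_line_items(rows)
print(items[0])  # {'label': 'Food Costs', 'amount': 1200}
```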

Step 5 is the one you’ll wish you learned years ago

This is the line from the X thread that I keep coming back to:

Use the expensive model once, at design time. It emits deterministic Python for the repetitive parts and calls itself only where judgment is needed. Future runs skip the design phase. You pay for the AI once, not forever.

Most AI-as-glue workflows have the AI in the hot path. Every run, every record, every customer call — you’re paying tokens for the same reasoning the model already did last week. Step 5 inverts the cost curve. The expensive model thinks once, encodes the result as code, and the code does the work. The AI re-enters only where the deterministic path can’t make a defensible call.

That’s why a 14-cent design pass can replace a thousand future AI calls. The cost-shape changes from per-run to per-design. Production workflows shouldn’t have an LLM in the inner loop unless judgment is actually required there.
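The back-of-envelope arithmetic, using the 14-cent figure from the piece and a hypothetical per-run token cost (in integer cents, so the comparison is exact):

```python
design_cost_cents = 14   # one-time design pass (the write-up's number)
per_run_ai_cents = 2     # hypothetical: LLM stays in the hot path
runs = 1000

always_ai = per_run_ai_cents * runs  # 2000 cents: pay forever
crystallized = design_cost_cents     # code runs cost ~0 after design
print(always_ai, crystallized)  # 2000 14
```

The exact per-run number is invented, but the shape is the argument: one curve grows linearly with usage, the other is flat after the design pass.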

The shipped example, briefly

The notebook walks the entire arc on a real example: messy restaurant P&L → executable dashboard workflow. By cell 29 you have a Mermaid diagram, a structured contract, and a Python module that can re-run the transformation without an API call.

To retarget it to a different domain, swap two file paths: the goal image and the source document. The five-step method holds; the example moves.

Why this lives in the portfolio

The lesson is provider-agnostic by design — thin adapter cells wrap real Anthropic, OpenAI, and Google SDK code, so the “Claude or GPT or Gemini” choice is just an environment variable. MIT licensed, so it can be re-cut for client work without friction. The repo is under CompleteTech-LLC on GitHub; the polished version sits as Specimen 03 in the Field Artifacts catalog.

If you only read one cell, read the one where the AI hands its own work back as Python. That’s the one I want every developer who’s wondering “how do I make AI cheap in production?” to spend a Saturday on.

Notebook: github.com/CompleteTech-LLC/build-ai-workflows-in-5-steps
Portfolio: ctech.llc/5step2workflow
Originally published as an X thread: Apr 11, 2026, 12:05 PM CDT.

When Buddy Vanished — Issue #45596 and the Community’s Plea

One day after I filed a narrower parity ask — just wanting /buddy in the VSCode extension — the entire ground shifted. On April 9, 2026, Claude Code v2.1.97 shipped without /buddy. No changelog mention. No farewell. The slash command that used to summon a tiny ASCII companion to your terminal returned Unknown skill: buddy. Within hours, issue #45596Bring Back Buddy: A Consolidated Plea from the Community — was open on GitHub.

What #45596 consolidates

The plea pulls together eight separate issues into one thread. Half are reports of the disappearance, half are feature requests filed before April 9 that prove the community was actively building around buddy when it was removed:

  • #45517/buddy command and companion completely missing in v2.1.97
  • #45525/buddy returns Unknown skill: buddy
  • #45595/buddy slash command no longer available (cross-platform: Ubuntu, macOS, Windows)
  • #45336 — Feature request: allow customizing the Companion
  • #45087Expose /buddy in the VSCode extension (mine, filed the day before)
  • #42091 — Create buddies as sub-agents
  • #44898 — Inject companion comments into assistant context
  • #45441 — Persistent /buddy off setting (the existence of an off-switch request implies the on-state mattered)

Read together, the threads make a single point: this wasn’t a dead feature anybody wanted gone. It was an active surface with an emerging roadmap of customization, IDE parity, sub-agent extensions, and context-aware reactions. The community wasn’t asking for buddy to go away. They were asking for buddy to evolve.

Why a tiny ASCII creature in a terminal mattered

The plea makes the case better than I can:

  • Terminal work is lonely. Claude Code already turned the terminal from “talking to a compiler” into “pair programming with a colleague.” Buddy took it further — it made the terminal feel alive.
  • It was a genuine differentiator. No other AI coding tool had anything like it. In a market where every tool races to benchmark the same evals, buddy was the feature that made people smile.
  • People formed real attachments. Eighteen species, five rarity tiers, named companions, screenshots shared. Some users literally downgraded to v2.1.96 just to keep their buddy alive. That’s not normal CLI-feature behavior.
  • The community was building on top of it. Customization, IDE parity, context awareness, sub-agents — the ecosystem was expanding when the rug got pulled.
  • It was already built. The hardest part of shipping a feature is building it. Removing a working, beloved feature is mass destruction of goodwill for zero gain.

Where this thread sits in the larger story

This is the second of three connected pieces in what I’m treating as a buddy field-thread:

  1. Apr 8Asking for /buddy in the VSCode Extension (issue #45087). The narrow precursor: I just wanted parity between the CLI and the VSCode extension. Quorum is fine, but let me pick.
  2. Apr 9 — this post (issue #45596). The day buddy actually vanished, and the community’s consolidated plea to bring it back.
  3. May 2 — the personal response. Fathom Lives tells the small story of finding a Hatch Pet skill in the Plugins tab and rehatching my desktop buddy — the same instinct, executed as a Codex skill. Reading Hatch Pet — 14 patterns from a well-designed Codex skill is the deeper field study that came out of building it: identity locks, subagent boundaries, provenance, no-fallback gates, deterministic packaging.

The set is the story. Apr 8 is the narrow request. Apr 9 is the rupture. May 2 is the personal answer — if the official surface won’t carry the companion, the skill system will. The Hatch Pet build was, in part, an existence proof: the same shape that /buddy had in the terminal, ported into a Codex skill that lives in any agent surface that supports skills. Fathom Lives is the narrative of doing it. Reading Hatch Pet is the design retrospective.

Status, as of this writing

Issue #45596 is open. It carries the labels duplicate, enhancement, area:tui, area:skills. The duplicate label is technically correct but practically misleading — the value of #45596 isn’t that it’s another disappearance report, it’s that it’s the consolidation. Eight signals in one place. The plea closes:

Somewhere in a ~/.claude.json file on thousands of machines, there’s still a "companion" object with a name, a species, a personality, and a hatchedAt timestamp. The data is still there. The buddies are still waiting.

Bring them home.

The issue: anthropics/claude-code#45596Bring Back Buddy: A Consolidated Plea from the Community. Filed April 9, 2026.

Disclosure in Action: The Finding Raised Bigger Trust Questions
Disclosure in Action · April 8, 2026 – CompleteTech LLC

Field Note 05 – Adjacent Context

The finding raised bigger trust questions.

A separate privacy and data-handling conversation happened after the credential report. I kept it separate from the AWS issue, but it reinforced a broader point: security, privacy, and operational maturity usually travel together.

In that separate thread, VapeTM stated that machine ID scanning does not collect or store information and that mobile-app ID scanning is handled by AptPay.

I would not present that as part of the AWS credential finding. It is adjacent context, not the same issue. Still, the conversations rhyme because they are both about trust boundaries in operational software.

For builders, this is where product maturity shows up. Secrets management, privacy claims, vendor dependencies, logging behavior, and customer-facing explanations all shape whether users can trust the system.

  • Keep separate: The AWS credential issue and the privacy/data-handling thread are different topics.
  • Still related: Both affect how customers evaluate the maturity of the software ecosystem.
  • Builder lesson: Document trust boundaries before customers have to reverse-engineer them.

The careful version is stronger: I can say what was observed, what was vendor-stated, and what remains separate without stretching one finding into another.

CompleteTech LLC – Innovation at Every Integration · Public disclosure series – 2026