I wrote ClawExplorer.AI because the OpenClaw community had a discovery problem: the events existed, but they were spread across Luma, Eventbrite, Meetup, and one-off pages. The useful thing was not another announcement. The useful thing was a directory.

What I built
ClawExplorer.AI is a public directory for OpenClaw events: ClawCons, workshops, meetups, hackathons, and related community gatherings. It gives people one place to search by city, country, organizer, status, and date instead of asking them to follow scattered event links across multiple platforms.
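To make the search facets concrete, here is a minimal sketch of the kind of filtering the directory supports. The field names below (city, country, organizer, status, start_date) are my illustration, not the site's actual schema.

```typescript
// Illustrative record shape; these field names are assumptions,
// not the actual ClawExplorer schema.
interface DirectoryEvent {
  canonical_event_id: string;
  title: string;
  city: string;
  country: string;
  organizer: string;
  status: string;     // e.g. "upcoming" or "past"
  start_date: string; // ISO 8601 date
}

// Filter on any combination of the searchable facets.
function filterEvents(
  events: DirectoryEvent[],
  q: { city?: string; country?: string; organizer?: string; status?: string; from?: string; to?: string }
): DirectoryEvent[] {
  return events.filter((e) =>
    (!q.city || e.city === q.city) &&
    (!q.country || e.country === q.country) &&
    (!q.organizer || e.organizer === q.organizer) &&
    (!q.from || e.start_date >= q.from) &&
    (!q.to || e.start_date <= q.to) &&
    (!q.status || e.status === q.status)
  );
}
```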
The repository started on April 30, 2026, with the initial commit "Initial ClawExplorer site import". The live site is static where it can be static, dynamic only where it needs to accept submissions, and designed around a reviewed data pipeline rather than ad hoc manual edits.
The workflow
The system has two paths into the same dataset.
Collector path. A maintained seed list points at OpenClaw event pages on Luma, Eventbrite, and Meetup. The collector fetches pages, extracts source fields, normalizes each record, classifies event format and topic tags, deduplicates records, applies verification metadata, and writes packaged JSON outputs.
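To show what the normalize and dedupe stages amount to, here is a minimal sketch under assumed field names; the collector's actual internals are not published in this post, and the only name taken from the real system is canonical_event_id.

```typescript
// Hypothetical raw record as extracted from a source page; the field
// names are illustrative, not the collector's actual schema.
interface RawRecord {
  sourceUrl: string;
  title: string;
  city: string;
  startDate: string; // ISO 8601
}

interface CanonicalRecord extends RawRecord {
  canonical_event_id: string;
  verified_at: string; // when the collector last confirmed the source
}

// Normalize: derive a stable ID from fields that identify the same
// real-world event even when it is listed on several platforms.
function normalize(raw: RawRecord): CanonicalRecord {
  const slug = `${raw.title}|${raw.city}|${raw.startDate}`
    .toLowerCase()
    .replace(/\s+/g, "-");
  return {
    ...raw,
    canonical_event_id: slug,
    verified_at: new Date().toISOString(),
  };
}

// Dedupe: keep the first record seen for each canonical ID.
function dedupe(records: CanonicalRecord[]): CanonicalRecord[] {
  const seen = new Map<string, CanonicalRecord>();
  for (const r of records) {
    if (!seen.has(r.canonical_event_id)) seen.set(r.canonical_event_id, r);
  }
  return [...seen.values()];
}
```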
Community path. A visitor can submit an event through the site. The browser builds a record that matches the event schema, posts it to a small PHP submit endpoint, and the endpoint validates, rate-limits, checks for duplicates, and opens a GitHub pull request against directory-manual.json. A maintainer reviews and merges. The same build and deploy path handles the result.
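On the browser side, the submission step can be as small as the sketch below. The endpoint path /api/submit.php and the payload fields are assumptions for illustration; the post only specifies that the endpoint validates, rate-limits, checks for duplicates, and opens a pull request.

```typescript
// A sketch of the browser side of the community path. The endpoint
// path and payload fields here are assumptions, not the real API.
async function submitEvent(form: {
  title: string;
  city: string;
  startDate: string;
  sourceUrl: string;
}): Promise<void> {
  const record = {
    title: form.title.trim(),
    city: form.city.trim(),
    start_date: form.startDate, // ISO 8601, matching the event schema
    source_url: form.sourceUrl,
  };

  const res = await fetch("/api/submit.php", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(record),
  });

  // The endpoint validates, rate-limits, checks duplicates, and opens
  // a pull request; a non-2xx response means the record was rejected.
  if (!res.ok) {
    throw new Error(`Submission rejected: ${res.status}`);
  }
}
```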
That matters because both paths converge. Pipeline-found events and community-submitted events are not two separate systems. The homepage fetches directory-public.json and directory-manual.json, dedupes on canonical_event_id, and renders the union.
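Here is roughly what that union looks like in code: a minimal sketch that fetches both files and keeps the first record seen per canonical_event_id. The URL paths and the first-wins precedence are assumptions; the post states only the two file names and the dedupe key.

```typescript
// Minimal stand-in for the real record schema.
interface DirectoryRecord {
  canonical_event_id: string;
  [key: string]: unknown;
}

async function loadDirectory(): Promise<DirectoryRecord[]> {
  const [publicEvents, manualEvents] = await Promise.all([
    fetch("/directory-public.json").then((r) => r.json() as Promise<DirectoryRecord[]>),
    fetch("/directory-manual.json").then((r) => r.json() as Promise<DirectoryRecord[]>),
  ]);

  // First occurrence wins; pipeline records are listed before manual
  // ones here, though the real precedence rule is not stated in the post.
  const merged = new Map<string, DirectoryRecord>();
  for (const e of [...publicEvents, ...manualEvents]) {
    if (!merged.has(e.canonical_event_id)) merged.set(e.canonical_event_id, e);
  }
  return [...merged.values()];
}
```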
What runs after merge
A merge to main kicks off GitHub Actions. The deploy workflow validates the directory JSON, builds static event detail pages, builds the master calendar file, builds the sitemap, stages only public-safe artifacts, strips internal-only data before deploy, and rsyncs the output to the GoDaddy document root.
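As one example of what the validate step can look like, here is a sketch of a pre-deploy check that fails the workflow on malformed records. The required fields are assumptions; only the two directory file names come from the post.

```typescript
// A sketch of the kind of pre-deploy check the validate step can run.
// The required fields below are assumptions about the schema.
import { readFileSync } from "node:fs";

const REQUIRED_FIELDS = ["canonical_event_id", "title", "start_date"];

function validateDirectory(path: string): string[] {
  const records = JSON.parse(readFileSync(path, "utf8"));
  const errors: string[] = [];
  if (!Array.isArray(records)) return [`${path}: not a JSON array`];

  const seen = new Set<string>();
  records.forEach((r: Record<string, unknown>, i: number) => {
    for (const f of REQUIRED_FIELDS) {
      if (!r[f]) errors.push(`${path}[${i}]: missing ${f}`);
    }
    const id = r["canonical_event_id"];
    if (typeof id === "string") {
      if (seen.has(id)) errors.push(`${path}[${i}]: duplicate id ${id}`);
      seen.add(id);
    }
  });
  return errors;
}

// Fail the workflow (non-zero exit) if either file is invalid.
const errors = ["directory-public.json", "directory-manual.json"].flatMap(validateDirectory);
if (errors.length > 0) {
  console.error(errors.join("\n"));
  process.exit(1);
}
```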
There is also a separate geocoding workflow. It refreshes location cache data after directory changes instead of slowing every deploy. That separation keeps the audit trail clear: one workflow builds and ships the site, another enriches coordinates when needed.
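The benefit of that separation is easy to see in code: a refresh that only geocodes locations missing from the cache does nothing on most runs. The geocode function and cache shape below are hypothetical.

```typescript
// Sketch of an incremental cache refresh: only locations not already
// cached trigger a geocoding call. The cache layout is hypothetical,
// and the geocode function is passed in rather than assumed.
interface GeoCache {
  [location: string]: { lat: number; lon: number };
}

async function refreshGeoCache(
  locations: string[],
  cache: GeoCache,
  geocode: (loc: string) => Promise<{ lat: number; lon: number }>
): Promise<GeoCache> {
  const updated: GeoCache = { ...cache };
  for (const loc of locations) {
    if (updated[loc]) continue;        // already cached, no API call
    updated[loc] = await geocode(loc); // enrich only the new location
  }
  return updated;
}
```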
Why this shape
I wanted the directory to be boring in the places that matter. Static pages are easy to cache and inspect. JSON outputs are easy to validate. Pull requests make community submissions reviewable. A small PHP endpoint fits the hosting environment without adding another vendor. GitHub Actions gives the workflow a visible audit trail.
The current dataset shape reflects that bias. At the time I reviewed the repo for this write-up, the packaged run summary showed 44 fetched pages, 42 canonical records, 38 publishable public records, and a passing QA check. The public records were classified across meetups, workshops, social events, a bootcamp, a conference, and unknowns where the source signal was not strong enough to guess.
What this demonstrates
ClawExplorer is a small site, but it is a useful pattern: use AI-assisted research and implementation to turn scattered community demand into a durable public artifact. The hard part is not only making a page. The hard part is designing the pipeline so the page can keep being true after the first launch.
That means source preservation, repeatable extraction, schema validation, dedupe, review queues, deploy checks, public/private artifact boundaries, and a way for the community to add data without bypassing review.
Links
Written by Tim Gregg, founder of CompleteTech LLC — Innovation at Every Integration.
