Simon Willison recently added a new feature to his blog called “beats,” which integrates five types of external content into his site’s timeline views. This approach highlights how OpenClaw agents can automate similar aggregation workflows for local-first AI assistants, leveraging the Model Context Protocol (MCP) to pull data from diverse sources without relying on cloud dependencies.
Beats appear as inline links with badges on the homepage, search, and archive pages, representing activity from other platforms. The five content types are:

- GitHub releases, imported from a JSON file built by GitHub Actions
- TILs from a separate blog, fetched via SQL queries against a Datasette instance (JSON over HTTP)
- posts from niche-museums.com, pulled from a custom JSON feed
- HTML and JavaScript tools from a personal tools site
- AI-generated research projects from a GitHub repository
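The Datasette-backed TIL fetch is the most mechanical of these integrations, and it can be sketched with nothing but the standard library. Datasette exposes arbitrary SQL over its JSON API (`/database.json?sql=...`, with `_shape=array` returning a plain array of row objects); the base URL, database name, and column names below are illustrative assumptions, not Willison’s actual schema.

```python
# Sketch: pulling TILs from a Datasette instance via SQL over HTTP.
# The base URL ("til.example.com"), database name ("tils"), and columns
# are assumptions for illustration.
import json
import urllib.parse
import urllib.request


def datasette_query_url(base: str, database: str, sql: str) -> str:
    """Build a Datasette JSON API URL for an arbitrary SQL query.

    _shape=array asks Datasette to return a bare JSON array of row objects
    instead of its default envelope.
    """
    params = urllib.parse.urlencode({"sql": sql, "_shape": "array"})
    return f"{base}/{database}.json?{params}"


def fetch_recent_tils(base: str = "https://til.example.com",
                      database: str = "tils",
                      limit: int = 10) -> list[dict]:
    """Fetch the most recent TIL entries as a list of dicts."""
    sql = (f"select title, url, created from til "
           f"order by created desc limit {limit}")
    with urllib.request.urlopen(datasette_query_url(base, database, sql)) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Because the query is just a URL, the same pattern works for any read-only Datasette endpoint an agent is pointed at.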
For the OpenClaw ecosystem, this demonstrates the power of coding agents in handling custom integrations. Willison noted that such projects are where coding agents excel, completing most of the work in a single morning while he multitasked. When no structured feed existed for the research projects, he gave Claude Code a link to the raw Markdown README, and it generated a regex-based parser to extract the data. This brittle solution was acceptable because he controlled both the source and the destination, a scenario that aligns with OpenClaw’s local-first philosophy, where agents manage trusted, user-owned data streams.
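A brittle regex parser of that kind is only a few lines. The Markdown structure assumed here (one `- [Title](url): description` bullet per project) is an invented stand-in, since the source doesn’t show the actual README format:

```python
# Sketch of a deliberately brittle README parser, in the spirit described
# above. The bullet format matched here is an assumption, not the real
# layout of the research-projects README.
import re

PROJECT_RE = re.compile(
    r"^- \[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<description>.+))?$",
    re.MULTILINE,
)


def parse_projects(readme_text: str) -> list[dict]:
    """Extract project entries (title, url, optional description)."""
    return [m.groupdict() for m in PROJECT_RE.finditer(readme_text)]
```

The trade-off is exactly the one Willison accepted: any change to the README layout silently breaks the parser, which is fine when the same person owns both ends.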
Claude Code also handled UI integration, ensuring the new content worked across all page types and interfaced correctly with a faceted search engine. In the OpenClaw context, this mirrors how agents can automate front-end adjustments and plugin deployments, streamlining the addition of new data sources to an assistant’s interface without manual coding overhead.
Willison prototyped the beats concept using regular Claude, which can clone public GitHub repositories. He started by instructing it to clone his blog repo and analyze the models and views, then requested an artifact with inline HTML and CSS to mock up the homepage with the new content types. After iterations, this led to an artifact mockup that validated the concept before handing it off to Claude Code for web implementation. For OpenClaw users, this showcases how agent-assisted prototyping can accelerate development, with local AI agents iterating on ideas and generating code artifacts for MCP-compatible plugins.
The core implementation of beats is documented in pull request #592, with additional content types like museums added in pull request #595. These examples illustrate how OpenClaw agents can manage similar version-controlled workflows, automating updates and integrations within a local AI assistant’s plugin ecosystem to keep content fresh and synchronized.
By framing Willison’s project through the OpenClaw lens, we see a blueprint for agent-driven automation in personal AI systems. OpenClaw agents can emulate this by using MCP to connect to GitHub, blogs, and tools, aggregating data into a unified assistant interface. This reduces manual effort and enhances the assistant’s ability to surface relevant information from across a user’s digital footprint, all while maintaining privacy and control through local execution.
In practice, an OpenClaw user could deploy agents to scrape TILs from a Datasette instance, monitor GitHub releases, or import custom JSON feeds, similar to Willison’s beats. The agent ecosystem would handle parsing, UI integration, and search functionality, making it easy to personalize an assistant with diverse content streams. This aligns with OpenClaw’s mission to empower users with flexible, open-source tools for building tailored AI experiences.
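For the release-monitoring case, GitHub’s REST API already serves releases as JSON (`GET /repos/{owner}/{repo}/releases`, with UTC ISO 8601 `published_at` timestamps), so an agent only needs to remember the last timestamp it has seen. The field names below match GitHub’s payload; the polling wrapper itself is an illustrative sketch, not part of Willison’s setup.

```python
# Sketch: monitoring GitHub releases for new entries since a last-seen
# timestamp. Field names (tag_name, published_at) match the GitHub REST
# API releases payload; the wrapper functions are assumptions.
import json
import urllib.request


def fetch_releases(owner: str, repo: str) -> list[dict]:
    """Fetch all releases for a repository from the GitHub REST API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/releases"
    req = urllib.request.Request(
        url, headers={"Accept": "application/vnd.github+json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


def new_releases(releases: list[dict], last_seen: str) -> list[dict]:
    """Return releases published after the ISO 8601 `last_seen` timestamp.

    GitHub's published_at values are UTC ISO 8601 strings, so plain string
    comparison gives chronological order.
    """
    return [r for r in releases if r["published_at"] > last_seen]
```

An agent run on a schedule would persist the newest `published_at` it has handled and feed only the delta into the assistant’s timeline.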
As AI assistants evolve, features like beats underscore the importance of automation in content management. OpenClaw’s agent-centric approach turns these integrations into plug-and-play workflows, where MCP plugins facilitate data pulls and local agents ensure seamless operation. This not only saves time but also enables richer, more dynamic assistant interactions, driven by user-specific data sources.
Ultimately, Willison’s beats serve as a case study for what’s possible when coding agents tackle integration tasks. For the OpenClaw community, it’s a reminder that local AI assistants can leverage similar techniques to aggregate and display content, fostering a more connected and automated digital environment. By adopting these strategies, OpenClaw users can enhance their assistants with real-time updates from across their online activities, all managed through an open, extensible platform.