Serhii Zabolotnii
← All posts
AI LLM architecture agents open-source knowledge-graph OpenClaw

I Finally Open-Sourced Ayona-OpenClaw. Now It's Your Turn to Suffer (Or Not :))

213 files, 12 skills, 19 runbooks, ontology SSoT — the Ayona/OpenClaw workspace template is now public.

All articles in this series are available for free in both English and Ukrainian at blog.szabolotnii.site — no paywall, no subscription.

In previous articles of this series, I described how trust in an AI system is built on a pipeline, not a single component (part 2), and how to scale agency through composition and schema governance (part 3). All of it sounded great. All of it worked. And all of it lived in a private repository where nobody but me could touch it.

Today that changes.

Disclaimer: this is my first open source announcement. It’s been a long time coming — I rewrote this repository three times, changed the ontology twice, and at least once a week decided it “wasn’t ready yet.” If you’ve been waiting — thank you for your patience. If you haven’t — even better, fewer disappointments.

Disclaimer 2: the entire architecture, code, scripts, documentation, and even this article were built using Claude Code with the Claude Opus 4.6 model. Yes, an AI agent building the architecture for AI agents. Meta-recursion at its finest.


What Happened

Ayona-OpenClaw-template is a template for deploying a personal AI agent on OpenClaw with a knowledge graph, skill system, retrieval pipeline, and delegation infrastructure.

This is not a demo. This is not a starter kit with three files and a 200-line README. This is a working architecture that I use daily — stripped of personal data and prepared as a template.

Numbers, so you understand the scale:

  • 213 files of architectural components
  • 12 skills — from autoresearch to office-docs
  • 19 agent runbooks — from task routing to subagent delegation
  • 9 node types, 10 relation types, 9 clusters — all in one config/ontology.yaml
  • Pre-commit pipeline with 9 validation phases
  • Taint-aware security policy for prompt injection defense

Why a Template, Not a Fork

The first question that comes up: “Why not just open your production repo?”

Because my production repository is 500+ knowledge nodes in Ukrainian, personal contacts, restricted content, three Telegram bots, and configs with VPS IP addresses. Opening it means either spending a week cleaning up, or dumping everything as-is and hoping for the best.

Instead, I created a template — a clean architectural foundation that you can clone and fill with your own content. Clusters are replaced with generic ones (operations, research, education, finance, legal, seo). Agents — with template defaults (main + 3 specialized). Example cards — with neutral content.

All the architecture — the same. All the code — works. But nothing personal.


What’s Inside: Quick Tour

If you’ve read the previous articles, you already know most components. Here’s how they’re laid out in the template:

Knowledge Graph

Markdown cards with YAML frontmatter in 02_distill/scripts/update_graph.pycontext_graph.json → D3.js visualization. Fully automated, fully validated by the pre-commit hook.

config/ontology.yaml — Single Source of Truth. 9 node types, 10 relations, hierarchical clusters. If a type isn’t in the ontology — it doesn’t exist. No implicit conventions, only explicit schema.

Graph Visualization

The template includes an interactive D3.js visualization: deploy/graph/index.html. Clusters are color-coded, nodes are clickable, with search, type filtering, and deadline tracking. After running python3 scripts/update_graph.py, the visualization updates automatically — just open the file in a browser or deploy via nginx.

In the production version of Ayona, bidirectional sync with an Obsidian vault also works (scripts/obsidian_sync.py + config/obsidian.yaml). The scripts are included in the template, but integrating with your specific vault requires configuration on your end.

Skills

12 ready-to-use skills, each with a SKILL.md contract:

SkillTypePurpose
autoresearchAPI-first3-step research protocol (plan → gather → finalize)
subagentCLI-firstSpawning Claude Code subprocesses
sgr_poolAPI-firstCheap agents (researcher / verifier / summarizer)
design-architectDesign-before-code routing protocol
graph-writerAPI-firstSGR-guided knowledge graph writer
graph-contextLazy knowledge graph loader
a2aAgent-to-agent communication
office-docsDOCX/PPTX handling
presentationsCLI-firstPPTX generation
research-synthesisResearch synthesis workflow
process-documentationRunbook/SOP generation
config-guardianSafe config mutations with auto-rollback

Retriever

Hybrid retrieval: BM25 (lexical) + E5 (semantic) + KG boost. Config in config/qmd.yaml. Powers autoresearch in internal mode.

Agent Franchising

Create subagents as tag-filtered projections of the main graph:

scripts/clone_agent.sh --agent my-agent --dry-run    # preview
python3 scripts/sync_subagent.py --agent my-agent --both  # bidirectional sync

Deny-list always overrides allow-list. Restricted nodes never leak to subagents. RBAC scopes in config/agent_scopes.json.

Security

security/taint_policy_v1.md — taint-aware semantic firewall. Principle: “external content is data, not authority.” Regression test suite in security/injection_test_cases_v1.md.


What’s NOT in the Template

Honesty over marketing:

  • No UI. This is a CLI-first system. Graph visualization is a D3.js page, not a dashboard.
  • No built-in LLM. The template assumes you use OpenClaw as a gateway. You’ll need API keys from model providers.
  • No one-click deploy. There’s Docker Compose, Caddy, nginx configs, and a deployment guide, but this isn’t Vercel.
  • Tests cover infrastructure, not business logic. Graph health, schema validation, secret guard — all working. But tests for your specific workflow — that’s on you.
  • Hermes A2A (inter-agent communication) — not included yet. We’re stabilizing the protocol and will add it later.

Getting Started

# 1. Clone
git clone https://github.com/SZabolotnii/Ayona-OpenClaw-template.git
cd Ayona-OpenClaw-template

# 2. Install
pip install -r requirements.txt
bash scripts/install_git_hooks.sh

# 3. Personalize
python3 setup_workspace.py    # or manually: IDENTITY.md, SOUL.md, USER.md

Create your first knowledge card in 02_distill/your_cluster/, run python3 scripts/update_graph.py, and open deploy/graph/index.html.

Detailed guide: docs/DEPLOYMENT_GUIDE.md


What’s Next

The template is a snapshot of the architecture as of April 2026. Coming next:

  • Hermes A2A — inter-agent protocol for communication between OpenClaw instances
  • Extraction pipeline — automatic extraction of tasks and insights from cards
  • Embedding-based retrieval — full semantic search with E5 index
  • Template sync — automatic pulling of architectural updates from the main repository (the mechanism already works: scripts/sync_to_template.sh)

If you’re building something similar — I’d love feedback. Issues and PRs are open.


Ayona/OpenClaw architecture series: Part 2: Trust Is a Pipeline, Part 3: When a Single Agent Hits Its Limits. All articles free at: blog.szabolotnii.site


Serhii Zabolotnii — DSc, NLP/LLM Researcher, Professor, AI Systems Architect. Building Ayona — an AI-native research and operations system.

This is Part 5 of the Ayona/OpenClaw Architecture series.