ClawHub Skill Pick: self-improving-agent — The AI That Actually Learns From Its Mistakes

🟡 Install Verified


The Problem It Solves

Here's a frustration every OpenClaw power user knows: you correct the AI today, it makes the same mistake tomorrow. You fix the same bug three times. You explain the same project convention over and over.

This isn't a model quality issue — it's a memory architecture issue. Most AI agents have zero cross-session memory. Every conversation ends, and everything learned evaporates.

self-improving-agent is the fix.


Skill At a Glance

Field            Details
Skill            self-improving-agent
Author           @pskoett
Downloads        263,000+ (🏆 #1 on ClawHub)
Stars            ⭐ 2,400+
Current Version  v3.0.5 (17 versions released)
License          MIT-0 (free to use, no attribution required)
Security Scan    VirusTotal ✅ Benign / OpenClaw ✅ Benign (high confidence)

How It Works

After installation, your AI agent automatically logs to a .learnings/ directory when specific situations occur:

Trigger                        Logged To
Command/operation fails        .learnings/ERRORS.md
You correct the AI             .learnings/LEARNINGS.md (category: correction)
You request a missing feature  .learnings/FEATURE_REQUESTS.md
API or external tool fails     .learnings/ERRORS.md (with integration details)
AI's knowledge was outdated    .learnings/LEARNINGS.md (category: knowledge_gap)
A better approach is found     .learnings/LEARNINGS.md (category: best_practice)
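In code terms, the trigger-to-file routing above might look like the following sketch. The trigger keys and the `log_file_for` helper are hypothetical names for illustration, not the skill's actual API:

```python
# Hypothetical routing table mirroring the trigger table above.
# Trigger names and this helper are illustrative, not the skill's real API.
LOG_ROUTES = {
    "command_failure":     (".learnings/ERRORS.md", None),
    "correction":          (".learnings/LEARNINGS.md", "correction"),
    "feature_request":     (".learnings/FEATURE_REQUESTS.md", None),
    "integration_failure": (".learnings/ERRORS.md", None),
    "knowledge_gap":       (".learnings/LEARNINGS.md", "knowledge_gap"),
    "best_practice":       (".learnings/LEARNINGS.md", "best_practice"),
}

def log_file_for(trigger):
    """Return (log file, optional category tag) for a trigger."""
    return LOG_ROUTES[trigger]
```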

The real power comes from promotion: when a learning proves broadly applicable, it gets elevated to permanent workspace files:

  • SOUL.md → Behavioral guidelines and personality
  • AGENTS.md → Workflow rules and automation patterns
  • TOOLS.md → Tool-specific gotchas and best practices

Think of it as building institutional memory for your AI, one conversation at a time.


Installation

Option 1: ClawHub CLI (Recommended)

clawhub install self-improving-agent

Option 2: Manual

git clone https://github.com/peterskoett/self-improving-agent.git \
  ~/.openclaw/skills/self-improving-agent

Initial Setup

mkdir -p ~/.openclaw/workspace/.learnings

Create three log files:

  • .learnings/LEARNINGS.md — corrections, best practices, knowledge gaps
  • .learnings/ERRORS.md — command failures, exceptions
  • .learnings/FEATURE_REQUESTS.md — user-requested capabilities
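Under the default workspace path, the setup steps above can be scripted in one go. This is a convenience sketch, not part of the skill; the `LEARN_DIR` override just makes it easy to test elsewhere:

```shell
# Create the .learnings/ directory and the three log files if missing.
# LEARN_DIR defaults to the workspace path from the setup step above.
LEARN_DIR="${LEARN_DIR:-$HOME/.openclaw/workspace/.learnings}"
mkdir -p "$LEARN_DIR"
for f in LEARNINGS.md ERRORS.md FEATURE_REQUESTS.md; do
  # seed each file with a title header, but never overwrite existing logs
  [ -f "$LEARN_DIR/$f" ] || printf '# %s\n' "${f%.md}" > "$LEARN_DIR/$f"
done
```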

Optional: Enable the Hook

From the skill's install directory (e.g. ~/.openclaw/skills/self-improving-agent):

cp -r hooks/openclaw ~/.openclaw/hooks/self-improvement
openclaw hooks enable self-improvement

This injects a gentle reminder at session start to evaluate and log learnings — about 50-100 tokens of overhead.


What We Found

Installation verification results:

  • One-command install — no friction, no config files to edit
  • Clean directory structure — three focused files, each with a clear purpose
  • Standardized log format — every entry gets an ID, timestamp, priority level, area tag, and suggested fix
  • Smart promotion logic — only patterns that recur 3+ times across 2+ sessions get promoted, keeping noise low
  • Multi-agent support — works with OpenClaw, Claude Code, Codex, and GitHub Copilot
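The promotion threshold (recurring 3+ times across 2+ sessions) is easy to picture as a small filter. This is a hypothetical sketch of that rule, not the skill's actual implementation:

```python
from collections import defaultdict

def promotable(entries, min_count=3, min_sessions=2):
    """entries: (pattern, session_id) pairs collected from the learning logs.
    Return patterns seen at least min_count times across min_sessions sessions."""
    counts = defaultdict(int)    # total occurrences per pattern
    sessions = defaultdict(set)  # distinct sessions per pattern
    for pattern, session in entries:
        counts[pattern] += 1
        sessions[pattern].add(session)
    return [p for p in counts
            if counts[p] >= min_count and len(sessions[p]) >= min_sessions]
```

Under this rule, a pattern logged three times in a single session stays in .learnings/; one that also recurs in a second session becomes a candidate for SOUL.md, AGENTS.md, or TOOLS.md.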

Here's what a learning entry looks like:

## [LRN-20260319-001] correction
**Logged**: 2026-03-19T15:00:00+08:00
**Priority**: high
**Status**: pending
**Area**: infra

### Summary
Python scripts with Unicode regex passed via SSH heredoc fail silently 
when single-quoted — use scp to upload .py files instead

### Suggested Action
Always scp script files to server, then execute remotely.
Never rely on shell heredoc for scripts with \u Unicode sequences.

Future AI sessions can read this and immediately know what went wrong and how to avoid it.
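If you want to script around the logs yourself, an entry in that format could be generated like this. The field names follow the sample above; the helper itself is illustrative, not part of the skill:

```python
from datetime import datetime, timezone

def format_learning(seq, category, priority, area, summary, action):
    """Render one log entry in the format shown above (hypothetical helper)."""
    now = datetime.now(timezone.utc)
    entry_id = f"LRN-{now:%Y%m%d}-{seq:03d}"  # e.g. LRN-20260319-001
    return (
        f"## [{entry_id}] {category}\n"
        f"**Logged**: {now.isoformat(timespec='seconds')}\n"
        f"**Priority**: {priority}\n"
        f"**Status**: pending\n"
        f"**Area**: {area}\n\n"
        f"### Summary\n{summary}\n\n"
        f"### Suggested Action\n{action}\n"
    )
```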


Caveats Worth Knowing

  1. Hook = reminder, not guarantee: The hook injects a reminder into context. Whether the AI actually logs depends on the model. Sonnet/Opus are more reliable than Haiku.

  2. Token overhead: ~50-100 tokens per session. Lightweight, but it's there.

  3. File write permissions: The skill writes to your workspace. OpenClaw's security scan flags this as "review before enabling" — reasonable caution, not a red flag.

  4. Sub-agent fix in v3.0.5: Previous versions had an issue with sub-agent session spawning. Fixed in the current release.


Rating

⭐⭐⭐⭐⭐ 5/5 — Highly Recommended

Best for:

  • Heavy OpenClaw users running multiple long sessions daily
  • Teams using AI for iterative development on complex projects
  • Anyone who's corrected the same AI mistake more than twice

Less useful for:

  • One-off task runners who don't repeat workflows
  • Minimalists who want zero context overhead

Pair It With

The author recommends using simplify-and-harden alongside:

clawhub install simplify-and-harden

Together, they form a complete self-improvement loop: one logs experience, the other identifies recurring patterns and hardens them into durable guidance.


Bottom Line

263k downloads don't happen by accident. self-improving-agent solves a real, painful problem in a clean, structured way. If your AI keeps repeating mistakes you've already corrected, this is the first skill you should install.


Based on ClawHub skill page information and installation verification. Skill version: v3.0.5.