Get from zero to a learning agent in under 5 minutes. Pick your integration path below.
Free tier works out of the box. Pro adds full directive lifecycle, scheduled reflection, and the visualization dashboard.
Use the SDK directly in any Python project. Persistent memory, directives, dream cycles, and health monitoring. Zero infrastructure required.
Expose wisdom to Claude Code, Cursor, or any MCP-compatible tool. Memory, directives, dreams via stdio.
Drop-in nodes for LangGraph StateGraph pipelines. Add persistent, compounding memory to any agent graph.
3-node Quick Start validated end-to-end; tools-pattern reference is API-correct but not yet in CI.
Persistent memory and behavioral evolution for agents built with Anthropic’s Claude Agent SDK.
Patterns for adding Wisdom Layer alongside the openai-agents framework.
Reference docs available; native wrapper in active development. Patterns are API-correct but not yet validated in CI.
Read the docs →

Persistent memory and behavioral evolution for OpenClaw agents.
Reference integration pattern available; packaged skill ships in v1.1.
Read the docs →

Working with CrewAI, Pydantic AI, AutoGen, or another framework? The standalone SDK plus WisdomAgent + CallableAdapter drops in anywhere. See the standalone quickstart →
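If the real `CallableAdapter` is unavailable or its signature differs, the pattern it names is easy to sketch: wrap any async callable so it presents the adapter interface the rest of the SDK expects. Everything below, including the class and method names, is illustrative rather than the shipped API:

```python
import asyncio

class MyCallableAdapter:
    """Illustrative stand-in: wraps any async callable as an LLM adapter.
    (The real CallableAdapter API may differ.)"""

    def __init__(self, fn):
        self._fn = fn

    async def generate(self, messages, system=None):
        # Delegate the chat turn to the wrapped callable
        return await self._fn(messages, system)

async def echo_llm(messages, system=None):
    # Toy model: echo the last user message
    return f"echo: {messages[-1]['content']}"

async def main():
    llm = MyCallableAdapter(echo_llm)
    return await llm.generate([{"role": "user", "content": "hi"}])

reply = asyncio.run(main())
print(reply)  # echo: hi
```

Any framework that accepts a duck-typed LLM object can then receive the wrapped callable directly.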
pip install "wisdom-layer[ollama]" # local LLM (free)
# or: pip install "wisdom-layer[anthropic]" and set ANTHROPIC_API_KEY

from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.ollama import OllamaAdapter
llm = OllamaAdapter(model="llama3.2")
backend = SQLiteBackend("agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Assistant")
agent = WisdomAgent(
agent_id="my-agent",
config=config,
llm=llm,
backend=backend,
)
await agent.initialize()

# Store a memory
await agent.memory.capture(
event_type="observation",
content={"text": "User prefers concise answers"},
)
# Search memories
results = await agent.memory.search("user preferences", limit=5)

# Reflect: consolidate memories, evolve directives, audit coherence
report = await agent.dreams.trigger()
# Check health
health = await agent.health()
print(f"Wisdom score: {health.wisdom_score}")
print(f"Status: {health.classification}")

# Open the dashboard in a browser
pip install "wisdom-layer[dashboard]"
wisdom-layer-dashboard --db agent.db

Add persistent, compounding memory to any LangGraph agent. Your agent remembers past conversations, learns behavioral directives, and improves over time.
pip install "wisdom-layer[langgraph,ollama]"
ollama pull llama3.2

from typing import Any, TypedDict

from langgraph.graph import END, START, StateGraph
from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.ollama import OllamaAdapter
from wisdom_layer.integration.langgraph import (
WisdomCaptureNode,
WisdomRecallNode,
)
# State definition
class AgentState(TypedDict):
messages: list[dict[str, str]]
wisdom_context: list[dict[str, Any]]
# Set up agent
llm = OllamaAdapter(model="llama3.2")
backend = SQLiteBackend("my_agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Helpful assistant")
agent = WisdomAgent(agent_id="my-agent", config=config, llm=llm, backend=backend)
await agent.initialize()
# Your LLM node uses wisdom context
async def call_llm(state):
wisdom = state.get("wisdom_context", [])
context = "\n".join(f"- {m['content']}" for m in wisdom)
system = f"You are a helpful assistant.\n\nRelevant memories:\n{context}"
user_msg = state["messages"][-1]["content"]
response = await llm.generate(
messages=[{"role": "user", "content": user_msg}],
system=system,
)
return {"messages": [*state["messages"], {"role": "assistant", "content": response}]}
# Build the graph
graph = StateGraph(AgentState)
graph.add_node("recall", WisdomRecallNode(agent))
graph.add_node("llm", call_llm)
graph.add_node("capture", WisdomCaptureNode(agent))
graph.add_edge(START, "recall")
graph.add_edge("recall", "llm")
graph.add_edge("llm", "capture")
graph.add_edge("capture", END)
app = graph.compile()
result = await app.ainvoke({
"messages": [{"role": "user", "content": "Hello!"}],
"wisdom_context": [],
})

| Node | Parameters | Description |
|---|---|---|
| WisdomRecallNode | agent, limit=5, context_key, message_key | Searches memory, writes results to state |
| WisdomCaptureNode | agent, event_type="interaction", message_key | Captures last exchange as a memory |
| WisdomDreamNode | agent, result_key="dream_result" | Triggers a reflection cycle |
| WisdomDirectivesNode | agent, context_key="wisdom_directives" | Retrieves active directives for prompt injection |
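Conceptually, each of these nodes is an async callable that reads the graph state and returns a partial update. A minimal sketch of the recall-node contract, using a stub in place of a real WisdomAgent (all names here are illustrative, not the shipped implementation):

```python
import asyncio

class StubMemory:
    async def search(self, query, limit=5):
        # Stand-in for agent.memory.search: returns canned results
        return [{"content": "User prefers concise answers"}][:limit]

class StubAgent:
    memory = StubMemory()

class RecallNode:
    """Illustrative recall node: searches memory for the last user
    message and writes the hits into state under context_key."""

    def __init__(self, agent, limit=5, context_key="wisdom_context",
                 message_key="messages"):
        self.agent = agent
        self.limit = limit
        self.context_key = context_key
        self.message_key = message_key

    async def __call__(self, state):
        query = state[self.message_key][-1]["content"]
        hits = await self.agent.memory.search(query, limit=self.limit)
        return {self.context_key: hits}  # partial state update

state = {"messages": [{"role": "user", "content": "preferences?"}]}
update = asyncio.run(RecallNode(StubAgent())(state))
print(update["wisdom_context"])
```

Because each node returns only the keys it owns, recall, LLM, and capture nodes compose cleanly in one StateGraph.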
from wisdom_layer.integration.langchain import WisdomStore
store = WisdomStore(agent)
app = graph.compile(store=store)

This makes wisdom memory available across threads via LangGraph's store parameter injection.
Expose your agent's capabilities to Claude Code, Cursor, Windsurf, or any MCP-compatible AI tool via the standard Model Context Protocol.
pip install "wisdom-layer[mcp,anthropic]"
export ANTHROPIC_API_KEY=sk-ant-...
wisdom-layer-mcp --db wisdom.db --agent-id my-agent

Add to .claude/settings.local.json:
{
"mcpServers": {
"wisdom-layer": {
"command": "wisdom-layer-mcp",
"args": ["--db", "/path/to/wisdom.db", "--agent-id", "my-agent"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-..."
}
}
}
}

Add to .cursor/mcp.json:
{
"mcpServers": {
"wisdom-layer": {
"command": "wisdom-layer-mcp",
"args": ["--db", "/path/to/wisdom.db", "--agent-id", "my-agent"],
"env": {
"ANTHROPIC_API_KEY": "sk-ant-..."
}
}
}
}

| Tool | Description |
|---|---|
| wisdom_capture | Store a memory (observation, interaction, feedback) |
| wisdom_recall | Semantic search across all memory tiers |
| wisdom_health | Get the agent's cognitive health report |
| wisdom_directives | List active behavioral directives |
| wisdom_add_directive | Add a new behavioral rule |
| wisdom_dream | Trigger a reflection cycle |
| wisdom_provenance | Trace the origin/history of any entity |
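Under the hood these are ordinary MCP tools, so any client invokes them with a standard `tools/call` JSON-RPC request over the chosen transport. The argument names below are illustrative, not the server's exact schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "wisdom_capture",
    "arguments": {
      "event_type": "observation",
      "content": "User prefers concise answers"
    }
  }
}
```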
| URI | Description |
|---|---|
| wisdom://config | Agent configuration (name, role, tier) |
| wisdom://directives | Active directives as structured data |
| wisdom://health | Current health report snapshot |
| Flag | Default | Description |
|---|---|---|
| --db | wisdom.db | SQLite database path |
| --agent-id | mcp-agent | Agent ID |
| --transport | stdio | stdio \| sse \| streamable-http |
| --log-level | WARNING | DEBUG \| INFO \| WARNING \| ERROR |
The CLI detects your LLM from environment variables, in order:
Add persistent memory and behavioral evolution to agents built with Anthropic’s Claude Agent SDK.
pip install claude-agent-sdk "wisdom-layer[anthropic]"
export ANTHROPIC_API_KEY=sk-ant-...

from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.anthropic import AnthropicAdapter
llm = AnthropicAdapter(model="claude-sonnet-4-6")
backend = SQLiteBackend("agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Assistant")
agent = WisdomAgent(
agent_id="my-agent",
config=config,
llm=llm,
backend=backend,
)
await agent.initialize()
# Capture context from conversations
await agent.memory.capture(
event_type="interaction",
content={"text": "User discussed Q3 migration timeline"},
)
# Recall relevant memories for the next turn
context = await agent.memory.search("migration timeline", limit=5)
# Trigger reflection to evolve directives
report = await agent.dreams.trigger()

This pattern is production-validated: it backs an internal multi-agent product (Omniboard) running on the same Wisdom Layer + Claude Agent SDK stack. A full reference repo and tutorial will ship alongside the v1.1 framework helpers.
See the working claude_agent_sdk_quickstart.py for the full lifecycle — capture, recall, critic veto, and dream cycles around a Claude-powered turn.
The Wisdom Layer SDK models its cognitive architecture on how humans learn. Hover any surface below to see what it does:
agent.*

Hover (or tap on mobile) any surface for the public method signature and what it owns. Full reference: API docs.
Autonomous reflection pipeline (schedulable or manual):
Self-authored behavioral rules with a lifecycle: provisional → active → permanent. Reinforced through usage, decayed when unused. Every directive has full provenance tracking.
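The lifecycle can be pictured as a small state machine driven by reinforcement and decay. The sketch below uses invented thresholds and step sizes purely for illustration; the SDK's actual promotion and decay rules are internal:

```python
from dataclasses import dataclass

@dataclass
class Directive:
    rule: str
    status: str = "provisional"   # provisional -> active -> permanent
    strength: float = 0.5

    def reinforce(self, amount=0.1):
        # Usage strengthens the directive and promotes it past thresholds
        self.strength = min(1.0, self.strength + amount)
        if self.status == "provisional" and self.strength >= 0.7:
            self.status = "active"
        elif self.status == "active" and self.strength >= 0.95:
            self.status = "permanent"

    def decay(self, amount=0.05):
        # Disuse weakens it; permanent directives are exempt here
        if self.status != "permanent":
            self.strength = max(0.0, self.strength - amount)
            if self.strength < 0.3:
                self.status = "provisional"

d = Directive("Prefer concise answers")
for _ in range(3):
    d.reinforce()
print(d.status, round(d.strength, 2))
```

The one-way door is `permanent`: in this toy version, decay can demote an active directive back to provisional, but never touches a permanent one.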
A composite wisdom score (0–1) with five components: memory diversity, directive coherence, reflection frequency, learning velocity, and knowledge depth. Automatic classification: healthy / stagnant / drifting / overloaded.
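One way to picture the composite: a weighted mean of the five components plus threshold-based classification. The sketch below uses equal weights, invented thresholds, and omits the overloaded class; the shipped scoring logic may differ:

```python
def wisdom_score(components: dict[str, float]) -> float:
    # Equal-weight mean over the five 0-1 components
    keys = ["memory_diversity", "directive_coherence",
            "reflection_frequency", "learning_velocity", "knowledge_depth"]
    return sum(components[k] for k in keys) / len(keys)

def classify(score: float, reflection_frequency: float) -> str:
    # Illustrative thresholds only (overloaded not modeled here)
    if reflection_frequency < 0.2:
        return "stagnant"
    if score >= 0.6:
        return "healthy"
    return "drifting"

c = {"memory_diversity": 0.8, "directive_coherence": 0.7,
     "reflection_frequency": 0.6, "learning_velocity": 0.5,
     "knowledge_depth": 0.4}
print(wisdom_score(c), classify(wisdom_score(c), c["reflection_frequency"]))
```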
Visualize your agent's cognitive architecture in a browser.
pip install "wisdom-layer[dashboard]"
wisdom-layer-dashboard --db agent.db

Five screens: Health (wisdom score, trajectory), Directives (lifecycle, provenance), Memory (search, tier distribution), Dreams (cycle history, scheduling), and Configuration (feature flags, personality, resource limits).
from wisdom_layer.dashboard import mount_dashboard
app = mount_dashboard(agent)
# Serve with uvicorn on any port