Ready to unlock dream cycles, the critic, and the dashboard?

Free tier works out of the box. Pro adds full directive lifecycle, scheduled reflection, and the visualization dashboard.

Start Free See Pro Plans →
Quickstart

Standalone SDK

Use the SDK directly in any Python project. Persistent memory, directives, dream cycles, and health monitoring. Zero infrastructure required.

pip install wisdom-layer

Get started ↓
Tier 1 — Premier Integrations (tested, sample code)
Tool

MCP Server

Expose wisdom to Claude Code, Cursor, or any MCP-compatible tool. Memory, directives, dreams via stdio.

pip install "wisdom-layer[mcp]"
Get started ↓
Integration

LangGraph

Drop-in nodes for LangGraph StateGraph pipelines. Add persistent, compounding memory to any agent graph.

3-node Quick Start validated end-to-end; tools-pattern reference is API-correct but not yet in CI.

pip install "wisdom-layer[langgraph]"
Get started ↓
Integration

Claude Agent SDK

Persistent memory and behavioral evolution for agents built with Anthropic’s Claude Agent SDK.

pip install claude-agent-sdk "wisdom-layer[anthropic]"
Get started ↓
Tier 2 — Coming Soon (active development)
Reference docs

OpenAI Agents SDK

Patterns for adding Wisdom Layer alongside the openai-agents framework.

Reference docs available; native wrapper in active development. Patterns are API-correct but not yet validated in CI.

Read the docs →
Reference pattern

OpenClaw

Persistent memory and behavioral evolution for OpenClaw agents.

Reference integration pattern available; packaged skill ships in v1.1.

Read the docs →

Working with CrewAI, Pydantic AI, AutoGen, or another framework? The standalone SDK (WisdomAgent plus a CallableAdapter) drops in anywhere. See the standalone quickstart →

Get notified when new integrations and reference implementations ship.

Quickstart — Standalone SDK

1. Install

pip install "wisdom-layer[ollama]"    # local LLM (free)
# or: pip install "wisdom-layer[anthropic]" and set ANTHROPIC_API_KEY

2. Create an Agent

from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.ollama import OllamaAdapter

llm = OllamaAdapter(model="llama3.2")
backend = SQLiteBackend("agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Assistant")

agent = WisdomAgent(
    agent_id="my-agent",
    config=config,
    llm=llm,
    backend=backend,
)
await agent.initialize()

3. Capture & Recall

# Store a memory
await agent.memory.capture(
    event_type="observation",
    content={"text": "User prefers concise answers"},
)

# Search memories
results = await agent.memory.search("user preferences", limit=5)
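
Conceptually, capture appends an event and search ranks stored events by relevance to the query. The stand-in below is purely illustrative — it is not the wisdom-layer API, and it ranks by keyword overlap where the real SDK uses embeddings — but it shows the shape of the round trip:

```python
# Illustrative stand-in for capture/search -- NOT the wisdom-layer API.
# Real recall uses embedding similarity; this toy ranks by keyword overlap.

class ToyMemory:
    def __init__(self):
        self.events = []

    def capture(self, event_type, content):
        # Store an event dict, mirroring capture(event_type=..., content=...)
        self.events.append({"event_type": event_type, "content": content})

    def search(self, query, limit=5):
        # Score each event by how many query terms its text shares
        terms = set(query.lower().split())
        scored = [
            (len(terms & set(e["content"]["text"].lower().split())), e)
            for e in self.events
        ]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [e for score, e in scored[:limit] if score > 0]

memory = ToyMemory()
memory.capture("observation", {"text": "User prefers concise answers"})
memory.capture("observation", {"text": "User works in UTC+2"})

results = memory.search("concise user preferences", limit=5)
```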

4. Trigger a Dream Cycle

# Reflect: consolidate memories, evolve directives, audit coherence
report = await agent.dreams.trigger()

# Check health
health = await agent.health()
print(f"Wisdom score: {health.wisdom_score}")
print(f"Status: {health.classification}")

5. Visualize

# Open the dashboard in a browser
pip install "wisdom-layer[dashboard]"
wisdom-layer-dashboard --db agent.db

LangGraph Integration

Add persistent, compounding memory to any LangGraph agent. Your agent remembers past conversations, learns behavioral directives, and improves over time.

Install

pip install "wisdom-layer[langgraph,ollama]"
ollama pull llama3.2

Quick Start — 3-Node Graph

from typing import Any, TypedDict

from langgraph.graph import END, START, StateGraph
from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.ollama import OllamaAdapter
from wisdom_layer.integration.langgraph import (
    WisdomCaptureNode,
    WisdomRecallNode,
)

# State definition
class AgentState(TypedDict):
    messages: list[dict[str, str]]
    wisdom_context: list[dict[str, Any]]

# Set up agent
llm = OllamaAdapter(model="llama3.2")
backend = SQLiteBackend("my_agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Helpful assistant")
agent = WisdomAgent(agent_id="my-agent", config=config, llm=llm, backend=backend)
await agent.initialize()

# Your LLM node uses wisdom context
async def call_llm(state):
    wisdom = state.get("wisdom_context", [])
    context = "\n".join(f"- {m['content']}" for m in wisdom)
    system = f"You are a helpful assistant.\n\nRelevant memories:\n{context}"
    user_msg = state["messages"][-1]["content"]
    response = await llm.generate(
        messages=[{"role": "user", "content": user_msg}],
        system=system,
    )
    return {"messages": [*state["messages"], {"role": "assistant", "content": response}]}

# Build the graph
graph = StateGraph(AgentState)
graph.add_node("recall", WisdomRecallNode(agent))
graph.add_node("llm", call_llm)
graph.add_node("capture", WisdomCaptureNode(agent))
graph.add_edge(START, "recall")
graph.add_edge("recall", "llm")
graph.add_edge("llm", "capture")
graph.add_edge("capture", END)

app = graph.compile()
result = await app.ainvoke({
    "messages": [{"role": "user", "content": "Hello!"}],
    "wisdom_context": [],
})
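
Stripped of the framework, the recall → llm → capture flow is just three awaited steps passing shared state. This framework-free sketch uses stub functions in place of the wisdom nodes and the LLM call — only the data flow is real:

```python
import asyncio

# Framework-free sketch of the recall -> llm -> capture pipeline.
# The stubs stand in for WisdomRecallNode, the LLM call, and
# WisdomCaptureNode; only the state flow is real.

async def recall(state):
    # WisdomRecallNode: search memory, write hits into state
    state["wisdom_context"] = [{"content": "User prefers concise answers"}]
    return state

async def call_llm(state):
    # Fold recalled memories into the reply, as the LLM node would
    context = "\n".join(f"- {m['content']}" for m in state["wisdom_context"])
    reply = f"(reply informed by:\n{context})"
    state["messages"].append({"role": "assistant", "content": reply})
    return state

async def capture(state):
    # WisdomCaptureNode: persist the last exchange as a memory
    state["captured"] = state["messages"][-2:]
    return state

async def run(state):
    for step in (recall, call_llm, capture):
        state = await step(state)
    return state

state = asyncio.run(run({"messages": [{"role": "user", "content": "Hello!"}]}))
```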

Available Nodes

WisdomRecallNode (agent, limit=5, context_key, message_key): searches memory and writes results into state
WisdomCaptureNode (agent, event_type="interaction", message_key): captures the last exchange as a memory
WisdomDreamNode (agent, result_key="dream_result"): triggers a reflection cycle
WisdomDirectivesNode (agent, context_key="wisdom_directives"): retrieves active directives for prompt injection

WisdomStore (Cross-Thread Persistence)

from wisdom_layer.integration.langchain import WisdomStore

store = WisdomStore(agent)
app = graph.compile(store=store)

This makes wisdom memory available across threads via LangGraph's store parameter injection.

MCP Server

Expose your agent's capabilities to Claude Code, Cursor, Windsurf, or any MCP-compatible AI tool via the standard Model Context Protocol.

Install & Start

pip install "wisdom-layer[mcp,anthropic]"
export ANTHROPIC_API_KEY=sk-ant-...
wisdom-layer-mcp --db wisdom.db --agent-id my-agent

Configure Claude Code

Add to .claude/settings.local.json:

{
  "mcpServers": {
    "wisdom-layer": {
      "command": "wisdom-layer-mcp",
      "args": ["--db", "/path/to/wisdom.db", "--agent-id", "my-agent"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}

Configure Cursor

Add to .cursor/mcp.json:

{
  "mcpServers": {
    "wisdom-layer": {
      "command": "wisdom-layer-mcp",
      "args": ["--db", "/path/to/wisdom.db", "--agent-id", "my-agent"],
      "env": {
        "ANTHROPIC_API_KEY": "sk-ant-..."
      }
    }
  }
}

Available Tools

wisdom_capture: store a memory (observation, interaction, feedback)
wisdom_recall: semantic search across all memory tiers
wisdom_health: get the agent's cognitive health report
wisdom_directives: list active behavioral directives
wisdom_add_directive: add a new behavioral rule
wisdom_dream: trigger a reflection cycle
wisdom_provenance: trace the origin/history of any entity

Available Resources

wisdom://config: agent configuration (name, role, tier)
wisdom://directives: active directives as structured data
wisdom://health: current health report snapshot

CLI Options

--db (default: wisdom.db): SQLite database path
--agent-id (default: mcp-agent): agent ID
--transport (default: stdio): stdio | sse | streamable-http
--log-level (default: WARNING): DEBUG | INFO | WARNING | ERROR

LLM Auto-Detection

The CLI detects your LLM from environment variables, checking providers in a fixed order of precedence.

Claude Agent SDK Integration

Add persistent memory and behavioral evolution to agents built with Anthropic’s Claude Agent SDK.

Install

pip install claude-agent-sdk "wisdom-layer[anthropic]"
export ANTHROPIC_API_KEY=sk-ant-...

Quick Start

from wisdom_layer import WisdomAgent, AgentConfig
from wisdom_layer.storage.sqlite import SQLiteBackend
from wisdom_layer.llm.anthropic import AnthropicAdapter

llm = AnthropicAdapter(model="claude-sonnet-4-6")
backend = SQLiteBackend("agent.db", embed_fn=llm.embed)
config = AgentConfig.for_dev(name="My Agent", role="Assistant")

agent = WisdomAgent(
    agent_id="my-agent",
    config=config,
    llm=llm,
    backend=backend,
)
await agent.initialize()

# Capture context from conversations
await agent.memory.capture(
    event_type="interaction",
    content={"text": "User discussed Q3 migration timeline"},
)

# Recall relevant memories for the next turn
context = await agent.memory.search("migration timeline", limit=5)

# Trigger reflection to evolve directives
report = await agent.dreams.trigger()

This pattern is production-validated — it backs an internal multi-agent product (Omniboard) running on the same Wisdom Layer + Claude Agent SDK stack. A full reference repo and tutorial will ship alongside the v1.1 framework helpers.

See the working claude_agent_sdk_quickstart.py for the full lifecycle — capture, recall, critic veto, and dream cycles around a Claude-powered turn.

How It Works

The Wisdom Layer SDK models its cognitive architecture on how humans learn. Hover any surface below to see what it does:

WisdomAgent
agent.*

Hover (or tap on mobile) any surface for the public method signature and what it owns. Full reference: API docs.

Three-Tier Memory

Dream Cycles

An autonomous reflection pipeline that can run on a schedule or be triggered manually.

Directives

Self-authored behavioral rules with a lifecycle: provisional → active → permanent. Reinforced through usage, decayed when unused. Every directive has full provenance tracking.
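
The lifecycle can be pictured as a small state machine: usage reinforces a directive toward permanence, disuse decays it. The toy model below is illustrative only — the thresholds and counters are invented, not the SDK's actual rules:

```python
# Toy model of the directive lifecycle: provisional -> active -> permanent.
# Thresholds and counters are invented for illustration, not the SDK's rules.

class Directive:
    def __init__(self, rule):
        self.rule = rule
        self.state = "provisional"
        self.reinforcements = 0

    def reinforce(self):
        # Each use strengthens the directive and may promote it
        self.reinforcements += 1
        if self.state == "provisional" and self.reinforcements >= 3:
            self.state = "active"
        elif self.state == "active" and self.reinforcements >= 10:
            self.state = "permanent"

    def decay(self):
        # Unused directives lose reinforcement; permanent ones are safe
        if self.state != "permanent" and self.reinforcements > 0:
            self.reinforcements -= 1

d = Directive("Answer concisely")
for _ in range(3):
    d.reinforce()   # repeated use promotes provisional -> active
```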

Health Monitoring

A composite wisdom score (0–1) with five components: memory diversity, directive coherence, reflection frequency, learning velocity, and knowledge depth. Automatic classification: healthy / stagnant / drifting / overloaded.
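
As a sketch, a composite score can be read as a mean of the five components mapped onto a classification. The weights and cutoffs below are invented for illustration — they are not the SDK's actual formula:

```python
# Illustrative composite wisdom score -- equal weights and these cutoffs
# are invented, not the SDK's formula. Each component is a value in [0, 1].

COMPONENTS = (
    "memory_diversity",
    "directive_coherence",
    "reflection_frequency",
    "learning_velocity",
    "knowledge_depth",
)

def wisdom_score(metrics):
    # Simple mean of the five components
    return sum(metrics[name] for name in COMPONENTS) / len(COMPONENTS)

def classify(metrics):
    # Map weak components onto the documented failure modes
    if metrics["memory_diversity"] < 0.2:
        return "overloaded"   # memory dominated by one kind of event
    if metrics["directive_coherence"] < 0.3:
        return "drifting"     # directives pulling in different directions
    if metrics["learning_velocity"] < 0.2:
        return "stagnant"     # nothing new being learned
    return "healthy" if wisdom_score(metrics) >= 0.5 else "stagnant"

metrics = {
    "memory_diversity": 0.7,
    "directive_coherence": 0.8,
    "reflection_frequency": 0.6,
    "learning_velocity": 0.5,
    "knowledge_depth": 0.4,
}
```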

Dashboard

Visualize your agent's cognitive architecture in a browser.

pip install "wisdom-layer[dashboard]"
wisdom-layer-dashboard --db agent.db

Five screens: Health (wisdom score, trajectory), Directives (lifecycle, provenance), Memory (search, tier distribution), Dreams (cycle history, scheduling), and Configuration (feature flags, personality, resource limits).

Programmatic

from wisdom_layer.dashboard import mount_dashboard

app = mount_dashboard(agent)
# Serve with uvicorn on any port