OpenClaw plugin

What it does

Ingestion

Automatically ingests agent replies, tool calls, and observations into Membrane via after_agent_reply and after_tool_call hooks.

Memory search

Exposes the membrane_search tool so agents can query graph-aware memory with natural language.

Auto-context

Injects relevant memories into the agent's context before each turn via the before_agent_start hook — no explicit tool calls required.

Status command

The /membrane command reports connection status and live memory stats.

Prerequisites

  • A running Membrane instance (membraned daemon)
  • OpenClaw v0.10+

Installation

Install from npm

In your OpenClaw extensions directory:

npm install @vainplex/openclaw-membrane
npx brainplex init

The brainplex init command auto-detects and configures all plugins.

Configure the plugin

Add the plugin entry to your openclaw.yaml under plugins.entries:

plugins:
  entries:
    openclaw-membrane:
      enabled: true
      config:
        grpc_endpoint: "localhost:4222"
        default_sensitivity: "low"
        auto_context: true
        context_limit: 5
        min_salience: 0.3
        context_types: ["episodic", "semantic", "competence"]

Start the Membrane daemon

Make sure membraned is running and reachable at the configured grpc_endpoint:

./bin/membraned

Configuration reference

All options live under plugins.entries.openclaw-membrane.config in openclaw.yaml.

grpc_endpoint (string)

Membrane gRPC address. Defaults to "localhost:4222".

default_sensitivity (string)

Sensitivity level applied to all ingested events. One of "public", "low", "medium", "high", "hyper". Defaults to "low".

auto_context (boolean)

When true, injects relevant memories into the agent's context before each turn via the before_agent_start hook. Defaults to true.

context_limit (integer)

Maximum number of memories to inject as context. Minimum 1. Defaults to 5.

min_salience (number)

Minimum salience score (0–1) for retrieval during context injection and search. Defaults to 0.3.

context_types (array)

Memory types to include in context injection. Valid values: "episodic", "working", "entity", "semantic", "competence", "plan_graph". Defaults to ["episodic", "semantic", "competence"].

Configuration summary table

Option | Default | Description
grpc_endpoint | localhost:4222 | Membrane gRPC address
default_sensitivity | low | Sensitivity for ingested events: public, low, medium, high, hyper
auto_context | true | Auto-inject memories before each agent turn
context_limit | 5 | Max memories to inject
min_salience | 0.3 | Minimum salience score for retrieval
context_types | ["episodic", "semantic", "competence"] | Memory types: episodic, working, entity, semantic, competence, plan_graph

membrane_search tool

The plugin registers the membrane_search tool, which agents can call to query graph-aware memory:

membrane_search("what happened in yesterday's meeting", { limit: 10 })

Parameters

query (string, required)

Natural language query to search memories.

limit (integer)

Maximum results to return. Defaults to the configured context_limit (5).

memory_types (array)

Filter by memory type: "episodic", "working", "entity", "semantic", "competence", "plan_graph".

min_salience (number)

Minimum salience score (0–1). Defaults to the configured min_salience (0.3).

// Search with type filter
membrane_search("auth middleware patterns", {
  memory_types: ["competence", "semantic"],
  limit: 5
})

// Search with salience filter
membrane_search("recent deploy failures", {
  memory_types: ["episodic"],
  min_salience: 0.6,
  limit: 10
})

Auto-context

When auto_context: true (the default), the plugin hooks into before_agent_start to retrieve and inject relevant memories before each agent turn. Agents get awareness of past interactions without explicit tool calls.

The injected context looks like:

Episodic memory from Membrane:
1. [episodic] Agent reply: Refactored the auth middleware to use...
2. [semantic] User prefers TypeScript for new services
3. [competence] To fix linker cache error: clear cache, rebuild with flags

Context injection uses the context_types and min_salience config values. Set auto_context: false to disable.

Tip

Increase context_limit for long-running sessions with deep history, or lower min_salience to surface less-reinforced memories.
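As a sketch, a long-session tuning of these options might look like this in openclaw.yaml (the specific values shown are illustrative, not recommendations):

more memories per turn, lower salience floor:

plugins:
  entries:
    openclaw-membrane:
      config:
        auto_context: true
        context_limit: 15    # default is 5
        min_salience: 0.15   # default is 0.3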


/membrane command

Check connection status and memory stats at any time:

/membrane
→ Membrane: connected (localhost:4222) | 1,247 records | 3 memory types

If the daemon is unreachable, the command reports the disconnected state without crashing the agent.


Ingestion behavior

The plugin maps OpenClaw hooks to captureMemory source kinds:

Hook | Capture source kind | Condition
after_tool_call | tool_output | When toolName is present
after_agent_reply | event | Always
Other hooks | observation | Fallback

Tags are automatically built from the event: hook:<name>, agent:<id>, tool:<name>, session:<key>.

Ingestion failures are logged as warnings and do not interrupt the agent.
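The mapping and tag rules above can be sketched as plain functions. This is an illustrative reconstruction from the table, not the plugin's actual source; the type and function names (HookEvent, sourceKindFor, tagsFor) are assumptions:

```typescript
// Shape of an OpenClaw hook event, as assumed for this sketch.
type HookEvent = {
  hook: string;        // e.g. "after_tool_call"
  agentId: string;
  sessionKey: string;
  toolName?: string;   // present only for tool-call hooks
};

// Map a hook event to a captureMemory source kind, per the table above.
function sourceKindFor(event: HookEvent): string {
  if (event.hook === "after_tool_call" && event.toolName) return "tool_output";
  if (event.hook === "after_agent_reply") return "event";
  return "observation"; // fallback for other hooks
}

// Build the automatic tags: hook:<name>, agent:<id>, tool:<name>, session:<key>.
function tagsFor(event: HookEvent): string[] {
  const tags = [
    `hook:${event.hook}`,
    `agent:${event.agentId}`,
    `session:${event.sessionKey}`,
  ];
  if (event.toolName) tags.push(`tool:${event.toolName}`);
  return tags;
}
```

For example, an after_tool_call event with toolName "grep" would be captured as tool_output and tagged tool:grep alongside its hook, agent, and session tags.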


Architecture

OpenClaw Agent
├── after_agent_reply ──→ captureMemory(sourceKind=event)
├── after_tool_call ────→ captureMemory(sourceKind=tool_output)
├── before_agent_start ─→ retrieveGraph() → inject context
└── membrane_search ───→ retrieveGraph() → return results
              │
              ▼
      Membrane (gRPC)
      ┌─────────────┐
      │  membraned  │
      │  SQLCipher  │
      │  Embeddings │
      └─────────────┘

Plugin metadata

Field | Value
Package | @vainplex/openclaw-membrane
Plugin ID | openclaw-membrane
Version | 0.4.0
Kind | memory
Hooks | after_agent_reply, after_tool_call, before_agent_start
Tools | membrane_search
Commands | /membrane