
membrane

Import path: github.com/BennettSchwartz/membrane/pkg/membrane

The membrane package is the embedded Go entry point. It initializes storage, ingestion, retrieval, decay, revision, consolidation, embeddings, and metrics from a single Config.

Constructors

New

func New(cfg *Config) (*Membrane, error)

Initializes all subsystems from Config and returns a ready-to-start Membrane.

cfg := membrane.DefaultConfig()
cfg.DBPath = "my-agent.db"

m, err := membrane.New(cfg)
if err != nil {
	log.Fatal(err)
}
defer m.Stop()

if err := m.Start(context.Background()); err != nil {
	log.Fatal(err)
}

DefaultConfig

func DefaultConfig() *Config

Returns defaults for SQLite storage, daemon listen address, schedulers, retrieval limits, and policy settings.

LoadConfig

func LoadConfig(path string) (*Config, error)

Reads a YAML config file and overlays it onto DefaultConfig.
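The overlay behavior can be sketched in a few lines. The struct, field names, and functions below are illustrative stand-ins for the real Config and DefaultConfig, and JSON stands in for YAML to keep the sketch stdlib-only; the point is only that keys present in the file win, while absent keys keep their defaults.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// config is an illustrative stand-in for membrane.Config.
type config struct {
	Backend    string `json:"backend"`
	DBPath     string `json:"db_path"`
	ListenAddr string `json:"listen_addr"`
}

// defaults mirrors the documented DefaultConfig values.
func defaults() *config {
	return &config{Backend: "sqlite", DBPath: "membrane.db", ListenAddr: ":9090"}
}

// loadConfig unmarshals the file contents on top of the defaults, so
// keys present in the file override and absent keys keep their default.
func loadConfig(data []byte) (*config, error) {
	cfg := defaults()
	if err := json.Unmarshal(data, cfg); err != nil {
		return nil, err
	}
	return cfg, nil
}

func main() {
	cfg, err := loadConfig([]byte(`{"backend": "postgres"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Backend, cfg.DBPath) // backend overridden, db_path kept
}
```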

Lifecycle

func (m *Membrane) Start(ctx context.Context) error
func (m *Membrane) Stop() error

Start begins background decay and consolidation schedulers. If embeddings are configured, it also backfills missing embeddings. Stop shuts down schedulers and closes the store.


Capture

func (m *Membrane) CaptureMemory(ctx context.Context, req ingestion.CaptureMemoryRequest) (*ingestion.CaptureMemoryResponse, error)

Captures a graph-aware memory candidate and returns the primary record plus any linked records or edges.

capture, err := m.CaptureMemory(ctx, ingestion.CaptureMemoryRequest{
	Source:     "auth-agent",
	SourceKind: "tool_output",
	Content: map[string]any{
		"tool_name": "go test",
		"args":      map[string]any{"packages": []string{"./pkg/auth"}},
		"result":    map[string]any{"exit_code": 0},
	},
	ReasonToRemember: "Successful auth package verification",
	Summary:          "Auth package tests passed",
	Tags:             []string{"auth", "tests"},
	Scope:            "project-auth",
	Sensitivity:      schema.SensitivityLow,
})
if err != nil {
	log.Fatal(err)
}

func (m *Membrane) RecordOutcome(ctx context.Context, req ingestion.IngestOutcomeRequest) (*schema.MemoryRecord, error)

Attaches an outcome to an existing episodic record, delegating to the lower-level ingestion service's outcome path.


Retrieval

func (m *Membrane) RetrieveGraph(ctx context.Context, req *retrieval.RetrieveGraphRequest) (*retrieval.RetrieveGraphResponse, error)
func (m *Membrane) RetrieveByID(ctx context.Context, id string, trust *retrieval.TrustContext) (*schema.MemoryRecord, error)

RetrieveGraph returns ranked roots and a bounded graph neighborhood. When request limits are zero, the facade applies the graph defaults from Config.

graph, err := m.RetrieveGraph(ctx, &retrieval.RetrieveGraphRequest{
	TaskDescriptor: "debug auth retries",
	Trust: retrieval.NewTrustContext(
		schema.SensitivityMedium,
		true,
		"auth-agent",
		[]string{"project-auth"},
	),
	MemoryTypes: []schema.MemoryType{
		schema.MemoryTypeEntity,
		schema.MemoryTypeSemantic,
		schema.MemoryTypeCompetence,
		schema.MemoryTypeEpisodic,
	},
	RootLimit: 8,
	NodeLimit: 20,
	EdgeLimit: 80,
	MaxHops:   1,
})
if err != nil {
	log.Fatal(err)
}

for _, node := range graph.Nodes {
	fmt.Println(node.Record.ID, node.Record.Type, node.Root, node.Hop)
}
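The zero-means-default behavior can be sketched as below. The helper name is illustrative (not part of the package API); the default values mirror the documented Config defaults:

```go
package main

import "fmt"

// applyGraphDefaults substitutes the configured default whenever a
// request limit is zero (or negative); explicit positive limits win.
func applyGraphDefaults(rootLimit, nodeLimit, edgeLimit, maxHops int) (int, int, int, int) {
	def := func(v, d int) int {
		if v <= 0 {
			return d
		}
		return v
	}
	return def(rootLimit, 10), def(nodeLimit, 25), def(edgeLimit, 100), def(maxHops, 1)
}

func main() {
	// Only nodeLimit was set explicitly; the rest fall back to defaults.
	fmt.Println(applyGraphDefaults(0, 20, 0, 0))
}
```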

Revision

func (m *Membrane) Supersede(ctx context.Context, oldID string, newRec *schema.MemoryRecord, actor, rationale string) (*schema.MemoryRecord, error)
func (m *Membrane) Fork(ctx context.Context, sourceID string, forkedRec *schema.MemoryRecord, actor, rationale string) (*schema.MemoryRecord, error)
func (m *Membrane) Retract(ctx context.Context, id, actor, rationale string) error
func (m *Membrane) Merge(ctx context.Context, ids []string, mergedRec *schema.MemoryRecord, actor, rationale string) (*schema.MemoryRecord, error)
func (m *Membrane) Contest(ctx context.Context, id, contestingRef, actor, rationale string) error

Revision operations keep records auditable while allowing durable knowledge to change explicitly.
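For instance, superseding leaves the old record in place but links it to its replacement, so resolving the current version means following that chain. A minimal sketch, with a hypothetical record type standing in for schema.MemoryRecord (the real field names may differ):

```go
package main

import "fmt"

// record is a hypothetical stand-in for schema.MemoryRecord.
type record struct {
	ID           string
	SupersededBy string // empty while the record is current
}

// latest follows the supersession chain to the current record.
func latest(byID map[string]record, id string) record {
	r := byID[id]
	for r.SupersededBy != "" {
		next, ok := byID[r.SupersededBy]
		if !ok {
			break // dangling reference; return the last known record
		}
		r = next
	}
	return r
}

func main() {
	byID := map[string]record{
		"a": {ID: "a", SupersededBy: "b"},
		"b": {ID: "b", SupersededBy: "c"},
		"c": {ID: "c"},
	}
	fmt.Println(latest(byID, "a").ID) // resolves through b to c
}
```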

Reinforcement

func (m *Membrane) Reinforce(ctx context.Context, id, actor, rationale string) error
func (m *Membrane) Penalize(ctx context.Context, id string, amount float64, actor, rationale string) error

Use these after a record helps or misleads a downstream workflow.
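A hypothetical feedback helper makes the intended loop concrete: after a workflow finishes, reinforce the records that were consulted on success and penalize them on failure. The function values stand in for (*Membrane).Reinforce and (*Membrane).Penalize, and the 0.2 penalty amount is arbitrary:

```go
package main

import "fmt"

// feedback applies outcome-driven reinforcement to the records a
// workflow consulted. reinforce and penalize stand in for the
// corresponding Membrane methods.
func feedback(usedIDs []string, succeeded bool,
	reinforce func(id string) error,
	penalize func(id string, amount float64) error) error {
	for _, id := range usedIDs {
		var err error
		if succeeded {
			err = reinforce(id)
		} else {
			err = penalize(id, 0.2) // arbitrary illustrative penalty
		}
		if err != nil {
			return err
		}
	}
	return nil
}

func main() {
	ids := []string{"rec-1", "rec-2"}
	_ = feedback(ids, true,
		func(id string) error { fmt.Println("reinforce", id); return nil },
		func(id string, amount float64) error { fmt.Println("penalize", id, amount); return nil },
	)
}
```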

Metrics

func (m *Membrane) GetMetrics(ctx context.Context) (*metrics.Snapshot, error)

Returns a point-in-time metrics snapshot.


Config

type Config struct { ... }

Important fields:

Backend string (default: sqlite)

Storage backend: sqlite or postgres.

DBPath string (default: membrane.db)

SQLite database path.

PostgresDSN string

PostgreSQL connection string. Falls back to MEMBRANE_POSTGRES_DSN.

ListenAddr string (default: :9090)

Daemon gRPC listen address.

DefaultSensitivity string (default: low)

Default sensitivity assigned during capture.

GraphDefaultRootLimit int (default: 10)

Default graph root limit for RetrieveGraph.

GraphDefaultNodeLimit int (default: 25)

Default graph node limit for RetrieveGraph.

GraphDefaultEdgeLimit int (default: 100)

Default graph edge limit for RetrieveGraph.

GraphDefaultMaxHops int (default: 1)

Default graph expansion depth.

SelectionConfidenceThreshold float64 (default: 0.7)

Minimum selector confidence for competence and plan graph candidates.

EmbeddingEndpoint string

HTTP endpoint used to generate embeddings. Semantic search is disabled when empty.

LLMEndpoint string

HTTP endpoint used for LLM-backed semantic extraction during consolidation.

IngestLLMEnabled bool (default: false)

Enables ingest-side interpretation during CaptureMemory.

IngestLLMEndpoint string

HTTP endpoint used for ingest-side interpretation.

IngestLLMModel string

Chat model name sent to the ingest-side interpretation endpoint.

IngestLLMAPIKey string

Authentication key for the ingest-side interpretation endpoint. Falls back to MEMBRANE_INGEST_LLM_API_KEY.

APIKey string

Bearer token for gRPC clients. Falls back to MEMBRANE_API_KEY.

EncryptionKey string

SQLCipher key for SQLite. Falls back to MEMBRANE_ENCRYPTION_KEY.

YAML Example

backend: postgres
postgres_dsn: "postgres://user:pass@localhost:5432/membrane"
listen_addr: ":9090"
decay_interval: 1h
consolidation_interval: 6h
default_sensitivity: low
selection_confidence_threshold: 0.7
graph_default_root_limit: 10
graph_default_node_limit: 25
graph_default_edge_limit: 100
graph_default_max_hops: 1
embedding_endpoint: "https://api.openai.com/v1/embeddings"
embedding_model: "text-embedding-3-small"
embedding_dimensions: 1536
llm_endpoint: "https://api.openai.com/v1/chat/completions"
llm_model: "gpt-5-mini"
ingest_llm_enabled: true
ingest_llm_endpoint: "https://api.openai.com/v1/chat/completions"
ingest_llm_model: "gpt-5-mini"
rate_limit_per_second: 100