Metrics
GetMetrics
Returns a point-in-time snapshot of the memory substrate's health and behavioural metrics. The response payload is a google.protobuf.Value containing the snapshot object.
This method takes no input parameters:
```protobuf
message GetMetricsRequest {}

message MetricsResponse {
  google.protobuf.Value snapshot = 1;
}
```
Response fields
| Field | Type | Description |
|---|---|---|
| `collected_at` | string | RFC 3339 timestamp when the snapshot was collected. |
| `total_records` | number | Total number of records in the store across all memory types. |
| `records_by_type` | object | Count of records broken down by memory type. |
| `avg_salience` | number | Mean salience across all records. Range [0, 1]. |
| `avg_confidence` | number | Mean confidence across all records. Range [0, 1]. |
| `salience_distribution` | object | Count of records in each salience bucket. |
| `active_records` | number | Number of records with salience greater than 0. |
| `pinned_records` | number | Number of records marked as pinned (exempt from automatic decay and pruning). |
| `total_audit_entries` | number | Total number of audit log entries across all records. |
| `memory_growth_rate` | number | Fraction of records created in the last 24 hours: `recent_records / total_records`. Indicates how rapidly new memory is being accumulated. |
| `retrieval_usefulness` | number | Ratio of reinforce audit actions to total audit entries: `reinforce_count / total_audit_entries`. Measures how often retrieved records are marked as useful. |
| `competence_success_rate` | number | Average `success_rate` across all competence records that have performance data. Indicates how reliably the agent's learned procedures succeed. |
| `plan_reuse_frequency` | number | Average `execution_count` across all plan graph records that have metrics. Higher values indicate plans are being discovered and reused rather than recreated. |
| `revision_rate` | number | Fraction of audit entries that are revisions (revise, fork, or merge actions): `revision_count / total_audit_entries`. Indicates how actively knowledge is being updated. |
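The three ratio metrics follow directly from raw counters. A minimal sketch of the arithmetic, where `recent_records`, `reinforce_count`, and `revision_count` are hypothetical raw counts chosen to reproduce the example snapshot further down:

```python
def derived_metrics(total_records, recent_records, reinforce_count,
                    revision_count, total_audit_entries):
    """Compute the derived ratio metrics as defined in the field table above."""
    return {
        "memory_growth_rate": recent_records / total_records if total_records else 0.0,
        "retrieval_usefulness": reinforce_count / total_audit_entries if total_audit_entries else 0.0,
        "revision_rate": revision_count / total_audit_entries if total_audit_entries else 0.0,
    }

m = derived_metrics(total_records=160, recent_records=24,
                    reinforce_count=374, revision_count=71,
                    total_audit_entries=890)
print(m["memory_growth_rate"])  # 0.15
```

With these inputs, `retrieval_usefulness` comes out to roughly 0.42 and `revision_rate` to roughly 0.08, matching the example snapshot.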
Metric descriptions
| Metric | Description |
|---|---|
| `memory_growth_rate` | Fraction of records created in the last 24 hours |
| `retrieval_usefulness` | Ratio of reinforce actions to total audit entries |
| `competence_success_rate` | Average success rate across competence records |
| `plan_reuse_frequency` | Average execution count across plan graph records |
| `revision_rate` | Fraction of audit entries that are revisions (revise, fork, or merge) |
Example snapshot
```json
{
  "collected_at": "2026-02-05T14:23:10Z",
  "total_records": 160,
  "records_by_type": {
    "episodic": 80,
    "entity": 18,
    "semantic": 35,
    "competence": 15,
    "plan_graph": 7,
    "working": 5
  },
  "avg_salience": 0.62,
  "avg_confidence": 0.78,
  "salience_distribution": {
    "0.0-0.2": 14,
    "0.2-0.4": 22,
    "0.4-0.6": 34,
    "0.6-0.8": 50,
    "0.8-1.0": 40
  },
  "active_records": 148,
  "pinned_records": 3,
  "total_audit_entries": 890,
  "memory_growth_rate": 0.15,
  "retrieval_usefulness": 0.42,
  "competence_success_rate": 0.85,
  "plan_reuse_frequency": 2.3,
  "revision_rate": 0.08
}
```
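In the example above, the per-type counts and the salience buckets each sum to `total_records` (160). A quick consistency check over a snapshot dict, using the example's values:

```python
snapshot = {
    "total_records": 160,
    "records_by_type": {"episodic": 80, "entity": 18, "semantic": 35,
                        "competence": 15, "plan_graph": 7, "working": 5},
    "salience_distribution": {"0.0-0.2": 14, "0.2-0.4": 22, "0.4-0.6": 34,
                              "0.6-0.8": 50, "0.8-1.0": 40},
    "active_records": 148,
}

def check_snapshot(s):
    """Verify the internal invariants implied by the example snapshot."""
    assert sum(s["records_by_type"].values()) == s["total_records"]
    assert sum(s["salience_distribution"].values()) == s["total_records"]
    assert s["active_records"] <= s["total_records"]
    return True

print(check_snapshot(snapshot))  # True
```

These invariants hold for the documented example; whether the server guarantees them on every snapshot is an assumption.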
Calling GetMetrics
Go:
```go
// GetMetrics takes no arguments and returns the metrics snapshot.
snap, err := m.GetMetrics(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("Total records: %d\n", snap.TotalRecords)
fmt.Printf("Avg salience: %.2f\n", snap.AvgSalience)
fmt.Printf("Retrieval usefulness: %.2f\n", snap.RetrievalUsefulness)
fmt.Printf("Competence success rate: %.2f\n", snap.CompetenceSuccessRate)
```
grpcurl:
```shell
grpcurl \
  -H 'authorization: Bearer your-key' \
  -d '{}' \
  -plaintext \
  localhost:9090 \
  membrane.v1.MembraneService/GetMetrics
```
JavaScript:
```javascript
const snapshot = await client.getMetrics();
console.log(`Total records: ${snapshot.total_records}`);
console.log(`Avg salience: ${snapshot.avg_salience}`);
console.log(`Retrieval usefulness: ${snapshot.retrieval_usefulness}`);
```
Python:
```python
snapshot = client.get_metrics()
print(f"Total records: {snapshot['total_records']}")
print(f"Avg salience: {snapshot['avg_salience']:.2f}")
print(f"Retrieval usefulness: {snapshot['retrieval_usefulness']:.2f}")
```
Poll GetMetrics periodically and alert when retrieval_usefulness drops below a threshold or revision_rate spikes unexpectedly. These two metrics are the most direct indicators of whether the agent is learning effectively.
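A sketch of such a watcher, building on the Python client shown above; the thresholds and polling interval are illustrative, not part of the API:

```python
import time

# Illustrative alert thresholds -- tune for your workload.
USEFULNESS_FLOOR = 0.30
REVISION_CEILING = 0.25

def check(snapshot):
    """Return alert messages for a single metrics snapshot."""
    alerts = []
    if snapshot["retrieval_usefulness"] < USEFULNESS_FLOOR:
        alerts.append(f"retrieval_usefulness low: {snapshot['retrieval_usefulness']:.2f}")
    if snapshot["revision_rate"] > REVISION_CEILING:
        alerts.append(f"revision_rate high: {snapshot['revision_rate']:.2f}")
    return alerts

def watch(client, interval_s=300):
    """Poll GetMetrics forever, printing any alerts."""
    while True:
        for msg in check(client.get_metrics()):
            print(msg)
        time.sleep(interval_s)

# With the example snapshot's values (0.42 and 0.08), no alert fires:
print(check({"retrieval_usefulness": 0.42, "revision_rate": 0.08}))  # []
```

Splitting the threshold logic into `check` keeps it testable without a live server; a degraded snapshot (e.g. usefulness 0.10, revision rate 0.30) would produce both alerts.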