MuBit SDKs expose three layers:
| Layer | What it does | When to use |
|---|---|---|
| Learn (mubit.learn / learn) | Auto-ingest all LLM interactions + auto-inject lessons + auto-reflect | Zero-config closed-loop — agents learn with one line of setup |
| Helpers on Client | 18 explicit methods for memory, context, reflection, multi-agent | Fine-grained control over what gets remembered and when |
| Raw domains (auth, control, core) | 1:1 endpoint mappings | Wire debugging, async job polling, advanced routes |
Start with learn or helpers. Drop to raw domains only when you need exact payload control.
## Learn module (closed-loop)

Before each LLM call, the learn module retrieves relevant lessons from MuBit and injects them into the system message. After the call, the interaction is ingested automatically. On run end, reflection extracts new lessons.

```python
import mubit.learn
import openai

mubit.learn.init(api_key="mbt_...", agent_id="my-agent")
# All OpenAI/Anthropic/LiteLLM calls now auto-inject lessons + auto-ingest.

@mubit.learn.run(agent_id="planner", auto_reflect=True)
def plan_task(task):
    return openai.OpenAI().chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": task}],
    ).choices[0].message.content
```
```typescript
import { learn, wrapOpenaiLearn } from "@mubit-ai/sdk/learn";

const runManager = learn.init({ apiKey: "mbt_...", agentId: "my-agent" });
// wrapOpenaiLearn(client, { learnConfig, lessonCache, learnClient, runManager });

const result = await learn.withRun({ agentId: "planner" }, async () => {
  return client.chat.completions.create({ ... });
});
```
```rust
use mubit_sdk::learn::{LearnSession, LearnConfig};

let session = LearnSession::new(LearnConfig::from_env().agent_id("my-agent")).await;
let enriched = session.enrich_messages(&messages).await;
// Make LLM call with enriched...
session.record("response", "gpt-4o", 1500.0).await;
session.end().await;
```
## Helper method bundles

| Use case | Methods | What they do |
|---|---|---|
| Basic memory | remember, recall | Ingest content with intent classification; semantic query with evidence scoring |
| Prompt context | getContext / get_context | Token-budgeted context block for LLM injection (rules → lessons → facts) |
| Exact artifacts | archive, archiveBlock / archive_block, dereference | Bit-exact storage with stable reference IDs; retrieval without semantic search |
| Run lifecycle | checkpoint, reflect, recordOutcome / record_outcome, recordStepOutcome / record_step_outcome | Durable state; LLM lesson extraction; reinforcement feedback; per-step process rewards |
| Multi-agent | registerAgent / register_agent, listAgents / list_agents, handoff, feedback | Scoped access per agent; task transfer |
| Diagnostics | memoryHealth / memory_health, diagnose, surfaceStrategies / surface_strategies, forget | Staleness metrics; error debugging; lesson clustering; deletion |
| Activity & audit | listActivity / list_activity, exportActivity / export_activity, appendActivity / append_activity | Chronological activity browse, JSONL export, manual trace append |
## The learning loop

1. remember() → ingest facts, traces, lessons
2. record_step_outcome() → per-step process reward signals (optional, for dense RL)
3. reflect() → LLM extracts lessons from evidence (auto-promotes recurring lessons: run → session → global)
4. get_context() → retrieve relevant lessons for the next call
5. record_outcome() → reinforce what worked at the run level

With learn, steps 1–3 happen automatically. With helpers, you orchestrate them yourself.
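When driving the loop yourself with helpers, one iteration can be sketched as a single function. This is a minimal sketch: the method names come from the helper catalog, but the exact parameter names (`query`, `intent`, `outcome`, `signal`) and the `llm_call` wrapper are illustrative assumptions.

```python
def run_learning_loop(client, task, llm_call):
    """Drive one iteration of the learning loop with explicit helper calls.

    `client` is a MuBit client; `llm_call(context, task)` is your own LLM
    wrapper. Parameter names below are illustrative assumptions.
    """
    # Step 4 (from the previous iteration): token-budgeted context for the prompt.
    context = client.get_context(query=task)
    response = llm_call(context, task)
    # Step 1: ingest the interaction as evidence.
    client.remember(content=response, intent="trace")
    # Step 3: extract lessons from the accumulated evidence.
    client.reflect()
    # Step 5: reinforce what worked at the run level.
    client.record_outcome(outcome="success", signal=1.0)
    return response
```

The ordering mirrors the numbered steps above: context retrieval feeds the call, and ingestion, reflection, and reinforcement follow it.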
## Current helper catalog

**Python:** remember, recall, archive, archive_block, dereference, get_context, memory_health, diagnose, reflect, forget, checkpoint, register_agent, list_agents, record_outcome, record_step_outcome, surface_strategies, handoff, feedback. Plus mubit.auto for automatic trace capture (ingestion only) and mubit.learn for closed-loop auto-inject + auto-ingest + auto-reflect (supports auto_extract=True for heuristic extraction).

**TypeScript:** remember, recall, archive, archiveBlock, dereference, getContext, memoryHealth, diagnose, reflect, forget, checkpoint, registerAgent, listAgents, recordOutcome, recordStepOutcome, surfaceStrategies, handoff, feedback. Plus learn.init, learn.withRun, and learn.startRun for closed-loop memory.

**Rust:** remember, recall, archive, archive_block, dereference, get_context, memory_health, diagnose, reflect, forget, checkpoint, register_agent, list_agents, record_outcome, record_step_outcome, surface_strategies, handoff, feedback. Plus learn::LearnSession with enrich_messages / record / end for closed-loop memory.
## Step-level outcomes

Record per-step process rewards for dense RL signal within a run. Use after each agentic step, then reflect with include_step_outcomes=True.

```python
client.record_step_outcome(
    step_id="tool_call_1",
    step_name="search_api",
    outcome="success",
    signal=0.8,
    rationale="Found the correct document on first try",
    directive_hint="Keep using search before browsing",
)

# Later: reflect with step outcomes included
client.reflect(include_step_outcomes=True)
```
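In an agent loop this is typically one call per step followed by a single reflection. A minimal sketch, assuming you track each step as a (name, succeeded, signal) tuple — the looping structure and step-id scheme are illustrative, not part of the SDK:

```python
def record_steps(client, steps):
    """Record a process reward after each agentic step, then reflect over them.

    `steps` is a list of (name, succeeded, signal) tuples from your own loop;
    the step_id scheme below is an illustrative assumption.
    """
    for i, (name, succeeded, signal) in enumerate(steps, start=1):
        client.record_step_outcome(
            step_id=f"step_{i}",
            step_name=name,
            outcome="success" if succeeded else "failure",
            signal=signal,
        )
    # Fold the dense per-step signals into lesson extraction.
    client.reflect(include_step_outcomes=True)
```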
## Lane-scoped memory

Lanes partition memory within a shared run so each agent sees only relevant entries.

```python
# Ingest into a specific lane
client.remember(content="Planning output: task A depends on B", intent="fact", lane="planning")

# Query only the planning lane
result = client.recall(query="task dependencies", lane="planning")

# Register an agent with lane participation
client.register_agent(agent_id="planner", role="planner", shared_memory_lanes=["planning", "shared"])
```

lane (MAS memory isolation) is distinct from direct_lane (core data-plane retrieval routing). They serve different purposes and do not interact.
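Lanes pair naturally with the multi-agent helpers: a planner can publish into the shared lane before transferring the task. A hedged sketch — only the method names (remember, handoff) come from the helper catalog; the handoff parameter names (to_agent, task) are assumptions:

```python
def hand_off_plan(client, summary):
    """Publish the planner's conclusion to the shared lane, then hand off.

    The handoff() parameter names here are assumptions, not documented API.
    """
    # Make the plan visible to agents that participate in the "shared" lane.
    client.remember(content=summary, intent="fact", lane="shared")
    # Transfer the task to the reviewer agent.
    client.handoff(to_agent="reviewer", task="review the plan")
```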
## Step-wise reflection

Scope reflection to recent evidence or a specific step for incremental lesson extraction.

```python
# Reflect over only the 5 most recent items
client.reflect(last_n_items=5)

# Reflect on a specific step, including step outcomes
client.reflect(step_id="tool_call_1", include_step_outcomes=True)
```
## Heuristic auto-extraction

Enable heuristic extraction of rules, lessons, and facts from LLM responses without an extra LLM call.

```python
mubit.learn.init(
    api_key="mbt_...",
    agent_id="my-agent",
    auto_extract=True,             # extract structured items from LLM responses
    extraction_mode="heuristic",   # no LLM call needed
)
```
## When to use what

| Scenario | Use |
|---|---|
| Agents should learn with zero code | mubit.learn.init() (Python), learn.init() (JS), LearnSession::new() (Rust) |
| Passive trace capture only | mubit.auto.instrument() (Python only) |
| Control exactly what gets remembered | client.remember() + client.recall() |
| Token-budgeted context for prompts | client.get_context() / client.getContext() |
| Multiple agents with scoped access | client.register_agent() + client.handoff() |
| Bit-exact artifact storage | client.archive() + client.dereference() |
| Wire-level debugging | client.control.* / client.core.* |
## When to use raw control methods directly

Use client.control.* when you need one of these explicitly:

- control.ingest plus get_ingest_job job polling
- control.batch_insert
- exact raw request/response debugging against HTTP or gRPC
- advanced or compatibility state-management routes
- control.list_activity, control.export_activity, control.append_activity for audit trail
- control.get_ingest_job, control.get_run_ingest_stats for job polling
- control.list_runs, control.link_run, control.unlink_run, control.delete_run for run management
- control.get_run_snapshot / control.contextSnapshot for full context snapshots
## Activity and audit trail

Use client.control.listActivity() / list_activity() to browse chronological memory entries with scope and type filters, exportActivity() / export_activity() for JSONL export, and appendActivity() / append_activity() to manually append activity traces.

```python
# List recent activity for a run
activity = client.control.list_activity(run_id="my-run", limit=50)

# Export as JSONL
export = client.control.export_activity(run_id="my-run", format="jsonl")

# Append a manual activity trace
client.control.append_activity(run_id="my-run", entries=[
    {"type": "observation", "content": "Agent restarted after timeout"}
])
```

```typescript
const activity = await client.control.listActivity({ runId: "my-run", limit: 50 });
const exported = await client.control.exportActivity({ runId: "my-run", format: "jsonl" });
await client.control.appendActivity({ runId: "my-run", entries: [
  { type: "observation", content: "Agent restarted after timeout" }
]});
```
## Ingest job tracking

Poll async ingest jobs and retrieve per-run ingest statistics.

```python
# Get status of an ingest job
job = client.control.get_ingest_job(job_id="job-abc123")

# Get ingest stats for a run
stats = client.control.get_run_ingest_stats(run_id="my-run")
```

```typescript
const job = await client.control.getIngestJob({ jobId: "job-abc123" });
const stats = await client.control.getRunIngestStats({ runId: "my-run" });
```
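Since ingest jobs are asynchronous, callers usually poll until a terminal state. A minimal polling sketch — the job's "status" field and its "completed"/"failed" values are assumptions about the response shape, not documented API:

```python
import time

def wait_for_ingest(client, job_id, timeout_s=60.0, poll_s=1.0):
    """Poll an async ingest job until it reaches a terminal status.

    The "status" field and its values here are assumptions about the
    get_ingest_job response shape.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        job = client.control.get_ingest_job(job_id=job_id)
        if job.get("status") in ("completed", "failed"):
            return job
        time.sleep(poll_s)  # back off between polls
    raise TimeoutError(f"ingest job {job_id} still running after {timeout_s}s")
```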
## Run management

List, link, unlink, and delete runs.

```python
# List recent runs
runs = client.control.list_runs()

# Link a child run to a parent
client.control.link_run(run_id="parent-run", linked_run_id="child-run")

# Unlink
client.control.unlink_run(run_id="parent-run", linked_run_id="child-run")

# Delete a run and its data
client.control.delete_run(run_id="old-run")
```

```typescript
const runs = await client.control.listRuns();
await client.control.linkRun({ runId: "parent-run", linkedRunId: "child-run" });
await client.control.unlinkRun({ runId: "parent-run", linkedRunId: "child-run" });
await client.control.deleteRun({ runId: "old-run" });
```
## Context snapshot

Retrieve a full context snapshot for a run, including working memory, attention state, and active goals.

```python
snapshot = client.control.get_run_snapshot(run_id="my-run")
```

```typescript
const snapshot = await client.control.contextSnapshot({ runId: "my-run" });
```
## Temporal and quality features

### Occurrence time

MuBit tracks two time dimensions for every memory entry: ingestion time (when the system learned it) and occurrence time (when the event actually happened). Set occurrence_time to record when an event happened, separate from when it was ingested.

```python
import time

# Event happened 3 days ago, ingested now
client.remember(
    content="New CI/CD pipeline reduced deployment time by 60%.",
    intent="fact",
    occurrence_time=int(time.time()) - 86400 * 3,
)

# Historical event from January 2025
client.remember(
    content="Server migration to AWS completed with zero downtime.",
    intent="fact",
    occurrence_time=1736899200,  # Jan 15 2025 UTC
)
```

```typescript
// Event happened 3 days ago, ingested now
await client.remember({
  content: "New CI/CD pipeline reduced deployment time by 60%.",
  intent: "fact",
  occurrence_time: Math.floor(Date.now() / 1000) - 86400 * 3,
});

// Historical event from January 2025
await client.remember({
  content: "Server migration to AWS completed with zero downtime.",
  intent: "fact",
  occurrence_time: 1736899200, // Jan 15 2025 UTC
});
```
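Rather than hardcoding unix epochs, you can derive them with the standard library. A small helper — to_occurrence_time is a hypothetical name for this guide, not part of the SDK:

```python
from datetime import datetime, timezone

def to_occurrence_time(dt: datetime) -> int:
    """Convert a timezone-aware datetime to the unix seconds occurrence_time expects."""
    return int(dt.timestamp())

# Same instant as the hardcoded epoch in the example above:
jan15 = to_occurrence_time(datetime(2025, 1, 15, tzinfo=timezone.utc))  # 1736899200
```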
### Temporal queries

Use min_timestamp and max_timestamp to filter evidence to a specific time window. The filter checks occurrence_time first, falling back to ingestion time.

```python
# "What happened in January 2025?"
results = client.recall(
    query="What technical changes were made?",
    min_timestamp=1735689600,  # Jan 1 2025
    max_timestamp=1738367999,  # Jan 31 2025
)
for evidence in results["evidence"]:
    print(f" {evidence['content'][:80]}")
```

```typescript
const results = await client.recall({
  query: "What technical changes were made?",
  min_timestamp: 1735689600,
  max_timestamp: 1738367999,
});
results.evidence.forEach(e => console.log(e.content.slice(0, 80)));
```

Without temporal bounds, queries like "What happened last week?" use natural language temporal intent detection and prioritize entries by occurrence time in the recency ranking.
### Search budget

The budget parameter controls the depth of retrieval. Use "low" for real-time agents and "high" for accuracy-critical offline analysis.

| Budget | Behavior | Typical latency |
|---|---|---|
| "low" | Fewer candidates, skip deep traversal | < 500ms |
| "mid" | Standard retrieval (default) | 500ms–2s |
| "high" | More candidates, deeper graph traversal | 1–5s |

```python
# Fast retrieval for a real-time chatbot
fast = client.recall(query="user question", budget="low")

# Deep retrieval for a research report
deep = client.recall(query="comprehensive analysis topic", budget="high")
```

```typescript
// Fast
const fast = await client.recall({ query: "user question", budget: "low" });
// Deep
const deep = await client.recall({ query: "analysis topic", budget: "high" });
```
### Staleness detection

When a newer fact contradicts an older one, MuBit marks the older entry as stale and deprioritizes it in ranking. The staleness metadata is available in evidence responses.

```python
results = client.recall(query="Where is the office?")
for evidence in results["evidence"]:
    stale = evidence.get("is_stale", False)
    status = " [STALE]" if stale else ""
    print(f" {evidence['content'][:60]}{status}")
```

```typescript
const results = await client.recall({ query: "Where is the office?" });
results.evidence.forEach(e => {
  const status = e.is_stale ? " [STALE]" : "";
  console.log(` ${e.content.slice(0, 60)}${status}`);
});
```

Stale entries are still returned for transparency. The ranking penalty ensures they appear below the current fact. Filter them out in your application if you only want current information.
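If you only want current facts, the filter is a one-liner over the evidence list. A minimal sketch using the is_stale flag shown above (current_evidence is a hypothetical helper name):

```python
def current_evidence(evidence):
    """Drop entries MuBit has marked stale, keeping only current facts."""
    return [e for e in evidence if not e.get("is_stale", False)]

# Usage: current = current_evidence(results["evidence"])
```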
### Mental models

The mental_model entry type stores consolidated entity summaries that are prioritized over raw facts in context assembly. Use this for entities your agent tracks over time.

```python
client.remember(
    content="Alice Chen is a senior engineer specializing in distributed systems. "
            "She prefers async communication and reviews PRs within 24 hours.",
    intent="mental_model",
    metadata={"entity": "alice chen", "consolidated": True},
)
```

```typescript
await client.remember({
  content: "Alice Chen is a senior engineer specializing in distributed systems. "
    + "She prefers async communication and reviews PRs within 24 hours.",
  intent: "mental_model",
  metadata: { entity: "alice chen", consolidated: true },
});
```

Mental models are returned with higher priority than individual facts in recall() and get_context(). Update them periodically as your agent learns more about an entity.
## Failure modes and troubleshooting

| Symptom | Root cause | Fix |
|---|---|---|
| SDK usage becomes inconsistent across teams | Raw and helper paths mixed arbitrarily | Set helpers as the default integration contract |
| Debugging a route contract is awkward | Helper layer hides wire details | Use the raw client.control.* call for that investigation |
| Docs and examples drift from SDK reality | Helpers undocumented | Treat the top-level helper surface as the public default |
## Next steps