Getting Started

MuBit gives your AI agents persistent memory — facts, lessons, and rules that survive across sessions and improve over time. The fastest way to start is mubit.learn, which auto-instruments your LLM calls with zero configuration.

Prerequisites

You need a MuBit API key. Set it in your environment or a .env file:
.env
MUBIT_API_KEY="mbt_<instance>_<key_id>_<secret>"
Optional endpoint and transport overrides:
.env
MUBIT_ENDPOINT="https://api.mubit.ai"
MUBIT_HTTP_ENDPOINT="https://api.mubit.ai"
MUBIT_GRPC_ENDPOINT="grpc.api.mubit.ai:443"
MUBIT_TRANSPORT="auto"  # "auto" (default), "http", or "grpc"
See SDK Configuration Reference for the full list of environment variables and their resolution order.
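A malformed key is easiest to catch before any network call. A minimal local sanity check of the `mbt_<instance>_<key_id>_<secret>` shape (the `validate_mubit_key` helper is ours for illustration, not part of the SDK):

```python
def validate_mubit_key(key: str) -> bool:
    """Sanity-check the mbt_<instance>_<key_id>_<secret> shape locally."""
    parts = key.split("_", 3)  # the secret itself may contain underscores
    return len(parts) == 4 and parts[0] == "mbt" and all(parts[1:])

print(validate_mubit_key("mbt_acme_k1_s3cret"))   # True: well-formed
print(validate_mubit_key("sk-not-a-mubit-key"))   # False: wrong prefix
```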

Install

pip
pip install mubit-sdk

Quickstart with mubit.learn

Two lines of setup. Your LLM calls automatically get lesson injection, interaction capture, and reflection.
learn_quickstart.py
import os
import mubit.learn
import openai

# One-time setup — all LLM calls now auto-inject lessons and auto-capture.
mubit.learn.init(api_key=os.environ["MUBIT_API_KEY"], agent_id="support-agent")

# Use your LLM client as normal. MuBit handles the rest.
response = openai.OpenAI().chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What update style does Taylor want?"}],
)
print(response.choices[0].message.content)

# For run-scoped learning with automatic reflection on completion:
@mubit.learn.run(agent_id="support-agent", auto_reflect=True)
def handle_ticket(question):
    return openai.OpenAI().chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content
What this does automatically:
  • Before each LLM call: retrieves relevant lessons from MuBit and injects them into the system message
  • After each call: ingests the interaction as memory
  • On run end: reflects to extract new lessons and promotes recurring ones
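Conceptually, the instrumentation wraps every call in a retrieve → inject → call → capture loop. A simplified, dependency-free sketch of that flow with stubbed retrieval and capture (the real SDK does this transparently; none of these helper names are MuBit APIs):

```python
def learn_wrapped_call(llm_call, messages, recall_lessons, capture):
    # 1. Retrieve lessons relevant to the latest user message and
    #    inject them as a leading system message.
    lessons = recall_lessons(messages[-1]["content"])
    if lessons:
        injected = [{"role": "system", "content": "Lessons:\n" + "\n".join(lessons)}]
        messages = injected + messages
    # 2. Make the underlying LLM call unchanged.
    reply = llm_call(messages)
    # 3. Capture the full interaction as memory for later reflection.
    capture({"messages": messages, "reply": reply})
    return reply

# Stubbed dependencies to show the flow end to end.
store = []
reply = learn_wrapped_call(
    llm_call=lambda msgs: "Concise Friday updates.",
    messages=[{"role": "user", "content": "What update style does Taylor want?"}],
    recall_lessons=lambda q: ["Taylor prefers concise Friday updates."],
    capture=store.append,
)
print(reply)       # the LLM's reply, unchanged by the wrapper
print(len(store))  # one captured interaction
```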

Works with any LLM provider

mubit.learn auto-instruments calls to these libraries — no wrapper code needed:
| Provider | Python | Node.js |
| --- | --- | --- |
| OpenAI | openai | openai |
| Anthropic | anthropic | @anthropic-ai/sdk |
| Google Gemini | google-generativeai | @google/generative-ai |
| LiteLLM | litellm | - |
| Vercel AI SDK | - | ai (via @mubit-ai/ai-sdk middleware) |

Advanced integration options

When you need more control over what gets stored and when, or you’re using an agent framework with its own memory interface, use these integration paths.

SDK helpers

Use explicit helper methods for fine-grained control over memory, context assembly, and the learning loop.
getting_started.py
import os
from mubit import Client

run_id = "support:acme:ticket-42"
client = Client(
    api_key=os.environ["MUBIT_API_KEY"],
    run_id=run_id,
    transport=os.getenv("MUBIT_TRANSPORT", "auto"),
)

client.remember(
    session_id=run_id,
    agent_id="support-agent",
    content="Customer Taylor prefers concise Friday updates.",
    intent="fact",
    metadata={"customer": "taylor", "source": "quickstart"},
)

answer = client.recall(
    session_id=run_id,
    query="What update style does Taylor want?",
    entry_types=["fact", "lesson", "rule"],
)

context = client.get_context(
    session_id=run_id,
    query="Draft the next customer update.",
    mode="summary",
    max_token_budget=300,
)

print(answer.get("final_answer"))
print(context.get("section_summaries", []))

What this demonstrates

  • remember() is the default write path for single logical memory items.
  • recall() is the default answer-oriented retrieval path.
  • getContext() / get_context() assembles a reusable context block before you call your LLM.
  • archive() and dereference() are the exact-reference pair for artifacts you want to recover later without semantic drift.
  • You only need raw client.control.* methods when you want explicit control over async ingest jobs or raw wire payloads.
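A common next step after get_context() is folding the returned summaries into your own system prompt before calling the LLM. A hedged sketch, assuming get_context() returns a dict with a "section_summaries" list as in the example above (the `build_system_prompt` helper is ours, not an SDK method):

```python
def build_system_prompt(context: dict, base: str = "You are a support agent.") -> str:
    """Fold MuBit context summaries into a system prompt.

    Assumes the context dict carries a "section_summaries" list; adjust
    to the shape your SDK version actually returns.
    """
    summaries = context.get("section_summaries") or []
    if not summaries:
        return base
    return base + "\n\nRelevant context:\n" + "\n".join(f"- {s}" for s in summaries)

prompt = build_system_prompt(
    {"section_summaries": ["Taylor prefers concise Friday updates."]}
)
print(prompt)
```

Pass the result as the system message of your next chat completion; with an empty context the base prompt is returned untouched.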

Exact-reference quick start

Use exact references when semantic discovery is not enough and a later step needs the exact stored artifact back.
archived = client.archive(
    session_id=run_id,
    agent_id="support-agent",
    content="Original billing diff and remediation note",
    artifact_kind="billing_postmortem",
    labels=["billing", "exact"],
)

exact = client.dereference(
    session_id=run_id,
    reference_id=archived["reference_id"],
)
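The contract that distinguishes this pair from recall() is id-keyed, byte-exact retrieval: no ranking, no paraphrase. A toy in-memory analogue of that contract (not the SDK, just the semantics):

```python
import uuid

class ExactStore:
    """Toy in-memory analogue of archive()/dereference(): id-keyed, byte-exact."""

    def __init__(self):
        self._items = {}

    def archive(self, content: str) -> str:
        reference_id = str(uuid.uuid4())
        self._items[reference_id] = content
        return reference_id

    def dereference(self, reference_id: str) -> str:
        # Exact lookup by id: the stored artifact comes back verbatim.
        return self._items[reference_id]

store = ExactStore()
ref = store.archive("Original billing diff and remediation note")
print(store.dereference(ref) == "Original billing diff and remediation note")  # True
```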

Framework integrations

If you’re using an agent framework, MuBit provides native adapters that plug into the framework’s own memory interface:
| Framework | Language | Install | Pattern |
| --- | --- | --- | --- |
| CrewAI | Python | pip install mubit-crewai[crewai] | StorageBackend for unified Memory |
| LangGraph | Python/JS | pip install mubit-langgraph[langgraph] | BaseStore adapter |
| LangChain | Python | pip install mubit-langchain[langchain] | BaseMemory subclass |
| Google ADK | Python | pip install mubit-adk[adk] | BaseMemoryService adapter |
| Vercel AI SDK | JS | npm install @mubit-ai/ai-sdk | wrapLanguageModel() middleware |
| MCP | Any | npm install @mubit-ai/mcp | 10 tools over stdio |
Quick examples:
CrewAI
from crewai import Crew
from mubit_crewai import MubitCrewMemory

memory = MubitCrewMemory(api_key="mbt_...", session_id="crew-run-1")
crew = Crew(agents=[...], tasks=[...], memory=memory.as_crew_memory())
LangChain
from mubit_langchain import MubitChatMemory
memory = MubitChatMemory(api_key="mbt_...", session_id="chat-1")
# Use with any chain: memory.load_memory_variables(), memory.save_context()
LangGraph
from mubit_langgraph import MubitStore

store = MubitStore(api_key="mbt_...")
# `graph` is your existing StateGraph builder.
graph.compile(store=store)
Google ADK
from google.adk.runners import Runner
from mubit_adk import MubitMemoryService

memory_service = MubitMemoryService(api_key="mbt_...")
# `root` is your ADK agent; a full app also passes app_name and a session_service.
runner = Runner(agent=root, memory_service=memory_service)
Vercel AI SDK
import { wrapLanguageModel } from "ai";
import { openai } from "@ai-sdk/openai";
import { mubitMemoryMiddleware } from "@mubit-ai/ai-sdk";

const model = wrapLanguageModel({
  model: openai("gpt-4o"),
  middleware: mubitMemoryMiddleware({ apiKey: "mbt_..." }),
});
See Framework Integrations for full documentation and example apps.

What to do next