MuBit provides native adapters for popular agent frameworks. Each adapter plugs into the framework's memory/store interface, so agents get persistent semantic memory, cross-session learning, and multi-agent system (MAS) coordination without changing how they are built.

At a glance

| Framework | Language | Adapter pattern | Install |
|---|---|---|---|
| CrewAI | Python | StorageBackend for unified Memory | pip install mubit-crewai[crewai] |
| LangGraph | Python | BaseStore with batch ops | pip install mubit-langgraph[langgraph] |
| LangGraph JS | JS | BaseStore with async ops | npm install @mubit-ai/langgraph |
| LangChain | Python | BaseMemory / MubitChatMemory | pip install mubit-langchain[langchain] |
| Google ADK | Python | BaseMemoryService for Runner | pip install mubit-adk[adk] |
| Agno | Python | MemoryDb + Toolkit | pip install mubit-agno[agno] |
| Vercel AI SDK | JS | wrapLanguageModel() middleware | npm install @mubit-ai/ai-sdk |
| MCP | Any | MCP tools over stdio transport | npm install @mubit-ai/mcp |
All adapters use the canonical MuBit SDK transport internally and support the same MAS extensions: checkpoint, record_outcome, surface_strategies, register_agent, handoff, feedback, diagnose, reflect, lessons, archive, dereference.

CrewAI

Route CrewAI’s unified Memory system through MuBit. All agent observations persist across runs.
from mubit_crewai import MubitCrewMemory
from crewai import Crew, Agent, Task, Process

memory = MubitCrewMemory(api_key="mbt_...", session_id="crew-run-1")

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, write_task],
    process=Process.sequential,
    memory=memory.as_crew_memory(),
)
result = crew.kickoff(inputs={"topic": "AI safety"})

# Extended MuBit features
memory.checkpoint("Research phase complete")
memory.record_outcome("task-1", "success")
memory.handoff("researcher", "writer", "Here are findings", requested_action="execute")
A 3-agent crew (classifier, researcher, responder) that processes customer support tickets. Agents learn from previous triage outcomes via MuBit memory.
support_triage/main.py
from crewai import Agent, Task, Crew, Process
from mubit_crewai import MubitCrewMemory

memory = MubitCrewMemory(
    endpoint="http://127.0.0.1:3000",
    api_key="mbt_...",
    session_id="triage-001",
    agent_id="crewai-triage",
)

# Register agents for MAS coordination
for agent_def in [
    {"agent_id": "classifier", "role": "ticket-classifier"},
    {"agent_id": "researcher", "role": "solution-researcher"},
    {"agent_id": "responder", "role": "response-drafter"},
]:
    memory.register_agent(**agent_def)

classifier = Agent(
    role="Support Ticket Classifier",
    goal="Classify tickets by severity and category",
    backstory="Experienced support lead who identifies severity and escalation triggers.",
    llm="openai/gpt-4o-mini",
)

researcher = Agent(
    role="Solution Researcher",
    goal="Find relevant past solutions from MuBit memory",
    backstory="Knowledge specialist who searches for similar tickets and resolution patterns.",
    llm="openai/gpt-4o-mini",
)

responder = Agent(
    role="Customer Response Drafter",
    goal="Draft empathetic, actionable replies",
    backstory="Senior success agent who turns frustrated customers into advocates.",
    llm="openai/gpt-4o-mini",
)

classify_task = Task(
    description="Classify this ticket: {ticket}",
    expected_output="Severity, category, key issues, escalation flags.",
    agent=classifier,
)
research_task = Task(
    description="Research solutions for: {ticket}",
    expected_output="Past cases, known solutions, systemic patterns.",
    agent=researcher,
)
respond_task = Task(
    description="Draft a response for: {ticket}",
    expected_output="Professional, empathetic customer response.",
    agent=responder,
)

crew = Crew(
    agents=[classifier, researcher, responder],
    tasks=[classify_task, research_task, respond_task],
    process=Process.sequential,
    memory=memory.as_crew_memory(),
)

result = crew.kickoff(inputs={"ticket": "Duplicate charge on Pro subscription..."})

# Post-run: handoffs, checkpoint, outcome
memory.handoff("classifier", "researcher", "Classification complete.", requested_action="continue")
memory.handoff("researcher", "responder", "Research complete.", requested_action="execute")
memory.checkpoint(snapshot="Triage complete.", label="triage-complete")
memory.record_outcome(reference_id="triage-001", outcome="success", rationale="Ticket resolved.")

LangGraph

Use MuBit as a persistent store for LangGraph StateGraphs. Each graph node can read/write memory via PutOp and SearchOp.
from mubit_langgraph import MubitStore
from langgraph.store.base import PutOp, SearchOp
from langgraph.graph import StateGraph, START, END

store = MubitStore(api_key="mbt_...")
namespace = ("memories", "user-1", "session-1")

# In graph nodes, use the store directly
store.batch([PutOp(namespace=namespace, key="finding-1", value={"text": "...", "intent": "lesson"})])
results = store.batch([SearchOp(namespace_prefix=namespace, query="security issues", limit=5)])

# MAS extensions
store.register_agent(namespace, agent_id="reviewer", role="code-reviewer")
store.checkpoint(namespace, snapshot="Review complete")
store.handoff(namespace, from_agent_id="planner", to_agent_id="reviewer", content="...", requested_action="review")
A StateGraph with planner, reviewer loop, and summarizer. MuBit store persists findings across steps and sessions.
code_review/main.py
from langgraph.graph import StateGraph, START, END
from langgraph.store.base import PutOp, SearchOp
from mubit_langgraph import MubitStore

store = MubitStore(endpoint="http://127.0.0.1:3000", api_key="mbt_...")
NAMESPACE = ("memories", "code-reviewer", "review-session")

# Register agents
for agent_id, role in [("planner", "review-planner"), ("reviewer", "item-reviewer"), ("summarizer", "review-summarizer")]:
    store.register_agent(NAMESPACE, agent_id=agent_id, role=role)

def planner_node(state):
    # Search past review patterns
    past = store.batch([SearchOp(namespace_prefix=NAMESPACE, query="code review checklist", limit=3)])[0]
    # ... LLM call to generate checklist ...
    store.checkpoint(NAMESPACE, snapshot=f"Checklist created with {len(checklist)} items")
    return {"checklist": checklist, "current_idx": 0, "findings": []}

def reviewer_node(state):
    item = state["checklist"][state["current_idx"]]
    # ... LLM call to evaluate item ...
    store.batch([PutOp(namespace=NAMESPACE, key=f"finding-{state['current_idx']}", value={"text": finding, "intent": "lesson"})])
    return {"findings": state["findings"] + [finding], "current_idx": state["current_idx"] + 1}

def summarizer_node(state):
    context = store.get_context(NAMESPACE, query="code review findings", max_token_budget=4096)
    # ... LLM call to synthesize final review ...
    store.record_outcome(NAMESPACE, reference_id="review-001", outcome="success", rationale="Review completed.")
    return {"final_review": review}

graph = StateGraph(ReviewState)
graph.add_node("planner", planner_node)
graph.add_node("reviewer", reviewer_node)
graph.add_node("summarizer", summarizer_node)
graph.add_edge(START, "planner")
graph.add_edge("planner", "reviewer")
graph.add_conditional_edges("reviewer", should_continue, {"reviewer": "reviewer", "summarizer": "summarizer"})
graph.add_edge("summarizer", END)

result = graph.compile().invoke({"code_diff": "...", "checklist": [], "current_idx": 0, "findings": [], "final_review": ""})
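The `should_continue` router passed to `add_conditional_edges` above is not defined in the example. A minimal sketch (the state keys match the nodes above, but this exact implementation is an assumption):

```python
def should_continue(state: dict) -> str:
    """Route back to the reviewer until every checklist item is evaluated.

    Returns the next node name, matching the mapping given to
    add_conditional_edges: {"reviewer": "reviewer", "summarizer": "summarizer"}.
    """
    if state["current_idx"] < len(state["checklist"]):
        return "reviewer"  # more checklist items left to review
    return "summarizer"    # all findings collected; synthesize the final review
```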

LangChain

Drop-in BaseMemory for any LangChain chain. Automatically loads context before each call and saves interactions after.
from mubit_langchain import MubitChatMemory
from langchain_openai import ChatOpenAI

memory = MubitChatMemory(api_key="mbt_...", session_id="chat-1")
llm = ChatOpenAI(model="gpt-4o-mini")

# Manual conversation loop
question = "What happened yesterday?"
context = memory.load_memory_variables({"input": question})
# ... build messages with context["history"], call the LLM to get `response` ...
memory.save_context({"input": question}, {"output": response})
Multi-turn conversation with cross-session memory: Session 2 automatically retrieves facts learned in Session 1 via MuBit's semantic retrieval.
research_assistant/main.py
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from mubit_langchain import MubitChatMemory

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.3)

# Session 1: Learn about a topic
memory_s1 = MubitChatMemory(api_key="mbt_...", session_id="research-s1", agent_id="research-assistant")

for question in ["What caused the 2008 financial crisis?", "How did subprime mortgages contribute?"]:
    context = memory_s1.load_memory_variables({"input": question})
    messages = [SystemMessage(content="You are a research assistant.")]
    if context.get("history"):
        messages.extend(context["history"])
    messages.append(HumanMessage(content=question))
    response = llm.invoke(messages)
    memory_s1.save_context({"input": question}, {"output": response.content})

# Session 2: New session, cross-session memory
memory_s2 = MubitChatMemory(api_key="mbt_...", session_id="research-s2", agent_id="research-assistant")

context = memory_s2.load_memory_variables({"input": "What about crisis prevention?"})
# Session 2 automatically retrieves relevant facts from Session 1

Google ADK

Plug MuBit into ADK’s Runner as a BaseMemoryService. All session events are automatically ingested; memory search enriches agent context.
from mubit_adk import MubitMemoryService
from google.adk.agents import LlmAgent, SequentialAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService

memory_service = MubitMemoryService(api_key="mbt_...")

agent = SequentialAgent(name="coordinator", sub_agents=[flight_agent, hotel_agent, planner_agent])
runner = Runner(agent=agent, app_name="travel", session_service=InMemorySessionService(), memory_service=memory_service)

# MAS extensions
await memory_service.checkpoint(app_name="travel", user_id="user-1", snapshot="Plan complete")
await memory_service.register_agent(user_id="user-1", agent_id="planner", role="itinerary")
SequentialAgent with Gemini, tool calling, and MAS coordination. Three agents (flight finder, hotel finder, itinerary planner) collaborate through MuBit memory.
travel_planner/main.py
from google.adk.agents import LlmAgent, SequentialAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from mubit_adk import MubitMemoryService

mubit_memory = MubitMemoryService(endpoint="http://127.0.0.1:3000", api_key="mbt_...")

flight_finder = LlmAgent(
    name="flight_finder", model="gemini-2.0-flash",
    description="Finds the best flights",
    instruction="Search for flights and recommend the best option.",
    tools=[search_flights], output_key="flight_results",
)
hotel_finder = LlmAgent(
    name="hotel_finder", model="gemini-2.0-flash",
    description="Finds accommodations",
    instruction="Search for hotels and recommend the best option.",
    tools=[search_hotels], output_key="hotel_results",
)
itinerary_planner = LlmAgent(
    name="itinerary_planner", model="gemini-2.0-flash",
    description="Creates day-by-day itinerary",
    instruction="Combine flight and hotel info into a complete travel plan.",
    output_key="final_itinerary",
)

coordinator = SequentialAgent(
    name="travel_coordinator",
    sub_agents=[flight_finder, hotel_finder, itinerary_planner],
)

runner = Runner(
    agent=coordinator, app_name="travel",
    session_service=InMemorySessionService(),
    memory_service=mubit_memory,
)

# Run the pipeline, then record MuBit operations
# await mubit_memory.checkpoint(...)
# await mubit_memory.register_agent(...)
# await mubit_memory.record_outcome(...)

Agno

Use MuBit as a persistent memory backend and toolkit for Agno agents. The adapter provides two integration surfaces: MemoryDb for Agno’s built-in memory system, and a Toolkit with LLM-callable tools for remember, recall, reflect, checkpoint, and diagnose.
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.memory.v2.memory import Memory
from mubit_agno import MubitAgnoMemory

mubit = MubitAgnoMemory(api_key="mbt_...", session_id="run-1")

agent = Agent(
    name="Assistant",
    model=OpenAIChat(id="gpt-4o"),
    memory=Memory(db=mubit.as_memory_db()),
    tools=[mubit.as_toolkit()],
    enable_agentic_memory=True,
)
result = agent.run("What do we know about the production database?")

# Extended MuBit features
mubit.checkpoint("Research done", "Completed infra review")
mubit.record_outcome("infra-recall", "success", rationale="Correct recall")
mubit.reflect()
mubit.archive("SELECT * FROM users WHERE active", "sql_query", labels=["infra"])
ref = mubit.dereference("ref_abc123")
An Agno agent that uses MuBit as its memory backend. Memories persist across sessions, enabling cross-conversation learning.
basic.py
import os
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from agno.memory.v2.memory import Memory
from mubit_agno import MubitAgnoMemory

endpoint = os.environ.get("MUBIT_ENDPOINT", "http://127.0.0.1:3000")
api_key = os.environ.get("MUBIT_API_KEY", "")

mubit = MubitAgnoMemory(
    endpoint=endpoint,
    api_key=api_key,
    session_id="basic-example",
    user_id="demo-user",
)

agent = Agent(
    name="Assistant",
    model=OpenAIChat(id="gpt-4o"),
    memory=Memory(db=mubit.as_memory_db()),
    tools=[mubit.as_toolkit()],
    enable_agentic_memory=True,
    instructions=[
        "You have access to MuBit memory tools.",
        "Use mubit_remember to store important information.",
        "Use mubit_recall to search for relevant memories.",
    ],
)

# Run 1: Store some knowledge
response = agent.run(
    "Remember that our production database is PostgreSQL 16 "
    "running on AWS RDS in us-east-1, and we use Redis for caching.",
    session_id="session-1",
)

# Checkpoint after learning
mubit.checkpoint("Initial learning", "Stored infrastructure facts")

# Run 2: Recall knowledge (cross-session)
response = agent.run(
    "What database do we use in production?",
    session_id="session-2",
)

# Record success
mubit.record_outcome(
    "infrastructure-recall", "success",
    rationale="Agent correctly recalled database details from memory",
)

# Check memory health
health = mubit.memory_health()

Vercel AI SDK

Middleware that wraps any AI SDK model with automatic memory injection and interaction capture.
import { wrapLanguageModel, generateText, tool } from "ai";
import { openai } from "@ai-sdk/openai";
import { mubitMemoryMiddleware } from "@mubit-ai/ai-sdk";

const model = wrapLanguageModel({
  model: openai("gpt-4o"),
  middleware: mubitMemoryMiddleware({
    apiKey: "mbt_...",
    sessionId: "session-1",
  }),
});

const { text } = await generateText({
  model,
  tools: { myTool },
  maxSteps: 3,
  prompt: "How do I reset my password?",
});
// Lessons auto-injected before call, interaction auto-captured after
Multi-session FAQ bot with knowledge-base tool and cross-session learning. Session 2 benefits from lessons captured in Session 1.
faq_bot/index.mjs
import { generateText, tool, wrapLanguageModel } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";
import { mubitMemoryMiddleware } from "@mubit-ai/ai-sdk";

// Assumes `mubitClient` is a configured MuBit SDK client and
// `findBestArticle` is your own knowledge-base lookup helper.
const lookupKnowledgeBase = tool({
  description: "Search the knowledge base for help articles",
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }) => findBestArticle(query),
});

// Session 1: Answer FAQ questions
const model1 = wrapLanguageModel({
  model: openai("gpt-4o-mini"),
  middleware: mubitMemoryMiddleware({
    sessionId: "faq-session-1",
    agentId: "faq-bot",
    mubitClient,
  }),
});

for (const question of ["How do I reset my password?", "What are system requirements?"]) {
  const { text } = await generateText({
    model: model1,
    tools: { lookupKnowledgeBase },
    maxSteps: 3,
    prompt: question,
  });
  console.log(`Q: ${question}\nA: ${text}`);
}

// Ingest a lesson between sessions
await mubitClient.ingest({
  session_id: "faq-session-1",
  text: "Users who ask about password reset often also need 2FA help.",
  intent: "lesson",
});

// Session 2: New session benefits from memory
const model2 = wrapLanguageModel({
  model: openai("gpt-4o-mini"),
  middleware: mubitMemoryMiddleware({ sessionId: "faq-session-2", agentId: "faq-bot", mubitClient }),
});

const { text } = await generateText({
  model: model2,
  tools: { lookupKnowledgeBase },
  maxSteps: 3,
  prompt: "I'm having trouble logging in",
});
// Response should reference password/2FA from Session 1

MCP

Expose MuBit as a set of tools over MCP stdio transport, usable by any MCP-compatible client (Claude, Cursor, etc.).
npx @mubit-ai/mcp --api-key mbt_... --endpoint http://127.0.0.1:3000
Core memory tools: mubit_remember, mubit_recall, mubit_context, mubit_archive, mubit_dereference, mubit_reflect, mubit_lessons, mubit_forget, mubit_status

MAS and learning-loop tools: mubit_checkpoint, mubit_outcome, mubit_strategies, mubit_register_agent, mubit_list_agents, mubit_handoff, mubit_feedback

Observability tools: mubit_memory_health, mubit_diagnose

Example flows:
Checkpoint + reflection:     mubit_checkpoint → mubit_reflect → mubit_lessons
MAS coordination:            mubit_register_agent → mubit_handoff → mubit_feedback
Failure diagnosis:           mubit_diagnose → mubit_memory_health → mubit_recall
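To wire the server into an MCP client, point the client's MCP configuration at the same command. A sketch of a Claude Desktop-style `mcpServers` entry (the flag names follow the command above; adapt the shape to your client's config format):

```json
{
  "mcpServers": {
    "mubit": {
      "command": "npx",
      "args": [
        "@mubit-ai/mcp",
        "--api-key", "mbt_...",
        "--endpoint", "http://127.0.0.1:3000"
      ]
    }
  }
}
```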

Common MAS extensions

All adapters expose these MuBit-specific methods beyond the base framework interface:
| Method | Purpose |
|---|---|
| checkpoint() | Save a snapshot of memory state |
| record_outcome() | Record success/failure with RL-like signal |
| surface_strategies() | Extract reusable strategy clusters from lessons |
| register_agent() | Register agent with role, scopes, capabilities |
| handoff() | Transfer control between agents with context |
| feedback() | Submit feedback on a handoff |
| diagnose() | Surface failure-path lessons for debugging |
| get_context() | Fetch assembled context block with token budget |
| reflect() | Extract lessons from session evidence |
| lessons() | List lessons with optional filtering |
| archive() | Store exact reusable artifacts with stable reference IDs |
| dereference() | Fetch exact content by reference ID |