On a Managed MuBit deployment (MuBit-hosted, or self-hosted with the control plane), agents aren’t just transient run-time identities — they’re first-class resources. A Project groups related agents, each agent owns a versioned system prompt and a set of skills, and everything can be inspected, rolled out, and rolled back without touching your deployed SDK code. This page is the SDK view of that resource model. Every operation below is also available point-and-click in the MuBit Console at console.mubit.ai — the “In the console” notes under each section show the exact route and buttons. For the raw wire contract, see Control HTTP and Control gRPC.

The hierarchy

Project
├── Agent Card          (name, role, description, active system prompt)
│   ├── PromptVersion*  (one active; N candidates awaiting approval)
│   └── Skill*          (tools/playbooks attached to the agent)
│       └── SkillVersion*   (candidate → active lifecycle)
└── Run history         (outcomes aggregated per run)
  • Project — the top-level workspace. Maps 1:1 to a MuBit instance in hosted deployments.
  • Agent Card — the configuration surface for one agent: identity + prompt + attached skills.
  • PromptVersion — a single version of an agent’s system prompt. Exactly one is active at a time; others sit as candidate, retired, or archived.
  • Skill — a tool or playbook attached to an agent (or shared at the project level).
  • SkillVersion — the same lifecycle applied to a skill definition (parameters schema + instructions).
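To see how these resources compose through the SDK, the sketch below walks one project down to its prompt versions. The response key names (`"agents"`, `"versions"`) are assumptions based on the examples later on this page, not guaranteed shapes; adjust them if your SDK version returns something different.

```python
def walk_project(client, project_id):
    """Traverse the hierarchy: project -> agent cards -> prompt versions.

    Assumes list_agent_definitions returns {"agents": [...]} and
    list_prompt_versions returns {"versions": [...]}; verify against your SDK.
    """
    tree = {}
    for agent in client.list_agent_definitions(project_id=project_id).get("agents", []):
        versions = client.list_prompt_versions(agent_id=agent["agent_id"])
        tree[agent["agent_id"]] = [v["version_id"] for v in versions.get("versions", [])]
    return tree
```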

Projects

Python
# List projects accessible to this API key
projects = client.list_projects()

# Create a project
project = client.create_project(
    name="triage-demo",
    description="Customer-support triage pilot",
)
project_id = project["project"]["project_id"]

# Update / delete
client.update_project(project_id=project_id, description="Live now")
client.delete_project(project_id=project_id)  # tears down the backing instance
JavaScript
const { project } = await client.createProject({
  name: "triage-demo",
  description: "Customer-support triage pilot",
});
const projectId = project.project_id;

await client.updateProject({ project_id: projectId, description: "Live now" });
await client.deleteProject({ project_id: projectId });
Rust
let resp = client.create_project(json!({
    "name": "triage-demo",
    "description": "Customer-support triage pilot"
})).await?;
let project_id = resp["project"]["project_id"].as_str().unwrap();
In the console: the Projects tab in the left sidebar lists every project you can access. New Project opens the creation form; each row links to /app/projects/<pid>, where you can rename, describe, or delete the project from its Settings tab.
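Because create_project tears down and rebuilds real infrastructure, an idempotent get-or-create wrapper is a common pattern in CI. The sketch below assumes list_projects returns a {"projects": [...]} envelope mirroring the create_project response above; that key name is an assumption, so check it against your SDK version.

```python
def get_or_create_project(client, name, description=""):
    # Reuse an existing project by name before creating a new one.
    # Assumes list_projects() -> {"projects": [{"name": ..., "project_id": ...}]}.
    for p in client.list_projects().get("projects", []):
        if p["name"] == name:
            return p["project_id"]
    created = client.create_project(name=name, description=description)
    return created["project"]["project_id"]
```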

Agent Definitions

Every Agent Card you see in the console is an AgentDefinition row in the control plane.
Python
# Create an agent inside a project
agent = client.create_agent_definition(
    project_id=project_id,
    agent_id="triage",
    role="customer triage agent",
    description="Routes escalations and captures recurring issues",
    system_prompt_content="You are a concise, empathetic triage agent...",
)

# List agents in a project
agents = client.list_agent_definitions(project_id=project_id)

# Update role / description / prompt
client.update_agent_definition(
    project_id=project_id,
    agent_id="triage",
    description="Now also handles billing escalations",
)
The system_prompt_content you pass on creation becomes the first active PromptVersion for the agent. Subsequent changes flow through set_prompt (manual) or optimize_prompt (LLM-generated candidate).
In the console: open a project → Agents tab. Each row links to the agent’s page at /app/projects/<pid>/agents/<aid> with sub-tabs for Identity, Prompts, Skills, Runs, Memory, and Play. New Agent creates an AgentDefinition and mints the first active PromptVersion in one step.

PromptVersion lifecycle

Prompts don’t just change — they version. Every write mints a new PromptVersion row, and exactly one version is active at any time.

Statuses

| Status | Meaning |
| --- | --- |
| active | The version currently served to retrieval / inference |
| candidate | Awaiting approval. Visible in the console’s “Pending Optimization” card; gets a diff view vs. active |
| retired | Previously active, superseded by a newer version |
| archived | Manually archived (won’t show up in default listings) |

Sources

| Source | How it was created |
| --- | --- |
| manual | Written via set_prompt |
| optimization | Minted by optimize_prompt — the control plane synthesises a candidate from recent outcomes |
| rollback | Restored from a retired version |
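A quick audit of how an agent's prompt has evolved is a tally of versions by provenance. The helper below assumes each PromptVersion row exposes its provenance in a "source" field matching the table above (the console's "source: manual" labels suggest this, but verify against your SDK's response shape).

```python
from collections import Counter

def versions_by_source(versions):
    # Tally PromptVersion rows by provenance (manual / optimization / rollback).
    # The "source" field name is an assumption drawn from the table above.
    return Counter(v.get("source", "unknown") for v in versions)
```

For example, `versions_by_source(client.list_prompt_versions(agent_id="triage").get("versions", []))` would show at a glance whether an agent's history is mostly hand-edits or optimizer output.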

Typical flow

Python
# Manual edit — activates immediately
client.set_prompt(
    agent_id="triage",
    content="Updated system prompt...",
    activate=True,
)

# Fetch current active prompt
current = client.get_prompt(agent_id="triage")

# List all versions for an agent
versions = client.list_prompt_versions(agent_id="triage")

# Ask the control plane to propose a candidate from recent outcomes
candidate = client.optimize_prompt(
    agent_id="triage",
    project_id=project_id,
    llm_override={"provider": "anthropic", "model": "claude-sonnet-4-6"},
)
# Returns: { success, candidate, optimization_summary, confidence, activated }

# Compare two versions (returns unified diff_text)
diff = client.get_prompt_diff(
    agent_id="triage",
    version_a_id=current["prompt"]["version_id"],
    version_b_id=candidate["candidate"]["version_id"],
)

# Approve the candidate
client.activate_prompt_version(
    agent_id="triage",
    version_id=candidate["candidate"]["version_id"],
)
JavaScript
await client.setPrompt({ agent_id: "triage", content: "...", activate: true });
const versions = await client.listPromptVersions({ agent_id: "triage" });
const candidate = await client.optimizePrompt({
  agent_id: "triage",
  project_id: projectId,
});
await client.activatePromptVersion({
  agent_id: "triage",
  version_id: candidate.candidate.version_id,
});
Each PromptVersion carries outcome aggregates (avg_outcome_score, outcome_count) so you can decide whether a candidate is actually an improvement before promoting it. See Prompt Optimization Lifecycle for the end-to-end workflow.
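One way to put those aggregates to work is a promotion gate in CI that only calls activate_prompt_version when the evidence favors the candidate. The field names (avg_outcome_score, outcome_count) come from this page; the sample-size threshold and strict comparison below are illustrative policy choices, not part of the API.

```python
def candidate_beats_active(active, candidate, min_samples=20):
    """Illustrative promotion gate over per-version outcome aggregates.

    Returns True only when the candidate has enough recorded outcomes
    and a strictly higher average score than the active version.
    """
    if candidate.get("outcome_count", 0) < min_samples:
        return False  # not enough evidence yet; keep the active version
    return candidate["avg_outcome_score"] > active["avg_outcome_score"]
```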

From the console

The same lifecycle is available without writing code. Open an agent’s Prompts tab at /app/projects/<pid>/agents/<aid>/prompts.
| You want to… | Where to click |
| --- | --- |
| Edit the active prompt by hand | Edit button on the Active System Prompt card → Save & Create Version (source: manual) |
| Ask the optimizer for a candidate | Suggest Optimization button (sparkles icon) on the same card. Creates a new row with status: candidate, source: optimization, auto-expanded to show the new prompt |
| Review a candidate’s diff against active | Review on the pending-candidate banner, or Compare in the Version History table → opens /app/projects/<pid>/agents/<aid>/compare/<vid> with a unified diff and the candidate’s optimization summary |
| Approve a candidate | Approve in the banner or Approve & Activate on the compare page. Flips candidate → active and previous active → retired atomically |
| Roll back | Find a retired row in Version History → Compare to confirm → approve that version. The activation is recorded with source: rollback |
The console uses the instance’s default optimizer model. If you need a specific provider/model/temperature for a single optimize run, call client.optimize_prompt(..., llm_override={"provider": ..., "model": ..., "temperature": ...}) from the SDK — the console does not expose that override today.

Skills

Skills are tools or playbooks attached to a project (and optionally bound to a specific agent). They version identically to prompts.
Python
# Create a tool skill
skill = client.create_skill(
    project_id=project_id,
    agent_id="triage",             # optional — omit for project-level shared skill
    name="send-email",
    description="Send a templated customer email",
    parameters_schema='{"type":"object","properties":{"to":{"type":"string"},"template":{"type":"string"}}}',
    instructions="Use for routine replies. Do not use for escalations.",
    skill_type="tool",             # or "playbook"
)

# List skills in a project
skills = client.list_skills(project_id=project_id)

# Ask the control plane to propose an improved version
candidate = client.optimize_skill(
    project_id=project_id,
    skill_id=skill["skill"]["skill_id"],
)

# Diff + activate just like prompts
diff = client.get_skill_diff(
    skill_id=skill["skill"]["skill_id"],
    version_a_id="...",
    version_b_id="...",
)
client.activate_skill_version(
    skill_id=skill["skill"]["skill_id"],
    version_id=candidate["candidate"]["version_id"],
)
skill_type:
  • "tool" — a callable function with a parameters schema.
  • "playbook" — a longer-form text description of a procedure the agent should follow.
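Since create_skill takes parameters_schema as a JSON string, a malformed schema only surfaces server-side. A quick local sanity check before the call avoids the round trip; this helper is purely client-side and not an SDK feature.

```python
import json

def check_parameters_schema(schema_str):
    """Locally validate a tool skill's parameters_schema JSON string.

    Raises on invalid JSON or a non-object root; returns the sorted
    property names so the caller can eyeball what the tool accepts.
    """
    schema = json.loads(schema_str)  # raises json.JSONDecodeError if malformed
    if schema.get("type") != "object":
        raise ValueError('tool parameters_schema should declare "type": "object"')
    return sorted(schema.get("properties", {}))
```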

From the console

Skills follow the same candidate → active → retired flow as prompts, wrapped in a UI at /app/projects/<pid>/skills/<sid>:
| You want to… | Where to click |
| --- | --- |
| Edit Description, Parameters Schema, or Instructions by hand | Edit on the Active Definition card → three dedicated fields → Save & Create Version |
| Ask the optimizer for a candidate | Suggest Optimization on the same card. The diff view at /app/projects/<pid>/skills/<sid>/compare/<vid> shows changes across all three fields in one unified diff |
| Approve | Approve on the pending-candidate banner, or Approve & Activate on the compare page |
| Roll back | Pick a retired version from Version History → Compare → approve |
Shared project-level skills live under /app/projects/<pid>/skills; agent-scoped skills appear under the owning agent at /app/projects/<pid>/agents/<aid>/skills.

Run history per project

Every run your agents execute gets a row in the project’s run history with aggregates: lessons_extracted, prompt_changes, avg_outcome_score, outcome_count, ingest_count. Use this to track how an agent’s behavior evolves over time without querying raw memory.
Python
runs = client.list_run_history(project_id=project_id, limit=50)
run = client.get_run_history(project_id=project_id, run_id="run-abc")
In the console: each agent has a Runs tab (/app/projects/<pid>/agents/<aid>/runs) that renders the same run history with filters and drill-down. The project-level Logs tab (/app/projects/<pid>/logs) spans all agents in the project.
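Those per-run aggregates make trend checks cheap. The sketch below averages avg_outcome_score over the most recent runs, skipping runs that recorded no outcomes; it assumes list_run_history returns rows newest-first, which is an assumption to verify against your SDK.

```python
def outcome_trend(runs, window=10):
    # Mean avg_outcome_score over the `window` most recent runs with
    # at least one recorded outcome. Field names match the aggregates
    # listed above; newest-first ordering is assumed.
    scored = [r["avg_outcome_score"] for r in runs[:window] if r.get("outcome_count", 0) > 0]
    return sum(scored) / len(scored) if scored else None
```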

Try a prompt or skill before shipping it

Every agent page has a Play tab at /app/projects/<pid>/agents/<aid>/play. It runs a real query against the project’s instance and shows the response, retrieved memory, and invoked skills — the same trace an SDK call would see. Use it as a smoke test after activating a new prompt or skill version.
Caveat: Play runs against whichever version is currently active, so you can dry-run a freshly-activated version but not a still-candidate one. To pressure-test a candidate specifically, briefly activate it in a non-production project, or use the advanced-panel direct_bypass mode to skip prompt routing.
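The "briefly activate it in a non-production project" workaround can be scripted so the previous version is always restored, even if your smoke checks throw. This is a sketch built only on activate_prompt_version from this page; re-approving the prior version is what the lifecycle records with source: rollback.

```python
from contextlib import contextmanager

@contextmanager
def temporarily_active(client, agent_id, candidate_id, restore_id):
    """Activate a candidate for the duration of a with-block, then restore.

    The finally-clause guarantees the previous version is re-approved
    even when the checks inside the block raise.
    """
    client.activate_prompt_version(agent_id=agent_id, version_id=candidate_id)
    try:
        yield
    finally:
        client.activate_prompt_version(agent_id=agent_id, version_id=restore_id)
```

Inside the with-block you would exercise the agent (e.g. via the Play tab or your usual SDK query path) against the temporarily active candidate.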

When to use the resource model vs. raw run_id

| Use the resource model when… | Stick with raw run_id when… |
| --- | --- |
| You want to version prompts + skills and roll back | You’re prototyping against a single unmanaged endpoint |
| Multiple teammates or CI workflows need to coordinate on an agent’s config | Your agent config lives entirely in your application code |
| You want the console’s “Pending Optimization” review UX | You don’t need a human-in-the-loop approval step |
| You want per-project billing / observability boundaries | You only have one project and never expect to grow |