The “In the console” notes under each section below show the exact route and buttons to click. For the raw wire contract, see Control HTTP and Control gRPC.
The hierarchy
- Project — the top-level workspace. Maps 1:1 to a MuBit instance in hosted deployments.
- Agent Card — the configuration surface for one agent: identity + prompt + attached skills.
- PromptVersion — a single version of an agent’s system prompt. Exactly one is active at a time; others sit as candidate, retired, or archived.
- SkillVersion — same lifecycle applied to a skill definition (parameters schema + instructions).
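The hierarchy can be sketched as plain data structures. This is an illustrative model only; the field names are assumptions, not the control plane’s wire schema:

```python
from dataclasses import dataclass, field

@dataclass
class PromptVersion:
    content: str
    status: str = "candidate"   # active | candidate | retired | archived
    source: str = "manual"      # manual | optimization | rollback

@dataclass
class AgentCard:
    name: str
    prompt_versions: list = field(default_factory=list)

    def active_prompt(self) -> PromptVersion:
        # Exactly one version is active at a time.
        return next(v for v in self.prompt_versions if v.status == "active")

@dataclass
class Project:
    name: str
    agents: list = field(default_factory=list)

proj = Project("support")
agent = AgentCard("triage-bot")
agent.prompt_versions.append(PromptVersion("You triage tickets.", status="active"))
proj.agents.append(agent)
```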
Projects
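A sketch of the project operations the SDK exposes (create, rename, delete), modelled against a minimal in-memory stand-in; the class and method names here are assumptions, not the real client API:

```python
import uuid

class FakeControlPlane:
    """In-memory stand-in for project CRUD; method names are assumptions."""

    def __init__(self):
        self._projects = {}

    def create_project(self, name, description=""):
        pid = uuid.uuid4().hex[:8]
        self._projects[pid] = {"name": name, "description": description}
        return pid

    def rename_project(self, pid, name):
        self._projects[pid]["name"] = name

    def delete_project(self, pid):
        del self._projects[pid]

cp = FakeControlPlane()
pid = cp.create_project("support-bot", description="Customer support agents")
cp.rename_project(pid, "support-bot-v2")
```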
In the console:
Projects tab in the left sidebar lists every project you can access. New Project opens the creation form; each row links to /app/projects/<pid> where you can rename, describe, or delete the project from its Settings tab.
Agent Definitions
Every Agent Card you see in the console is an AgentDefinition row in the control plane.
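Each card maps to one AgentDefinition, and its first active PromptVersion is minted in the same step. A minimal in-memory sketch (the dict shapes and function name are illustrative assumptions, not the SDK):

```python
# Creating the AgentDefinition mints its first PromptVersion, born active.
def create_agent(project, name, system_prompt):
    agent = {
        "name": name,
        "prompt_versions": [{
            "version": 1,
            "content": system_prompt,
            "status": "active",   # first version is active immediately
            "source": "manual",
        }],
    }
    project["agents"].append(agent)
    return agent

project = {"agents": []}
agent = create_agent(project, "triage-bot", "You triage support tickets.")
```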
In the console: open a project → Agents tab. Each row links to the agent’s page at
/app/projects/<pid>/agents/<aid> with sub-tabs for Identity, Prompts, Skills, Runs, Memory, and Play. New Agent creates an AgentDefinition and mints the first active PromptVersion in one step.
PromptVersion lifecycle
Prompts don’t just change — they version. Every write mints a new PromptVersion row, and exactly one version is active at any time.
Statuses
| Status | Meaning |
|---|---|
| active | The version currently served to retrieval / inference |
| candidate | Awaiting approval. Visible in the console’s “Pending Optimization” card; gets a diff view vs. active |
| retired | Previously active, superseded by a newer version |
| archived | Manually archived (won’t show up in default listings) |
Sources
| Source | How it was created |
|---|---|
| manual | Written via set_prompt |
| optimization | Minted by optimize_prompt — the control plane synthesises a candidate from recent outcomes |
| rollback | Restored from a retired version |
Typical flow
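The typical flow can be sketched in memory. The function names mirror the SDK calls named in the Sources table above; everything else here is an illustrative assumption:

```python
versions = []

def _mint(content, status, source):
    versions.append({"id": len(versions) + 1, "content": content,
                     "status": status, "source": source})
    return versions[-1]

def set_prompt(content):
    # Manual write: the new version becomes active, the old active retires.
    for v in versions:
        if v["status"] == "active":
            v["status"] = "retired"
    return _mint(content, "active", "manual")

def optimize_prompt(content):
    # The optimizer only mints a candidate; nothing activates until approval.
    return _mint(content, "candidate", "optimization")

def approve(candidate):
    # Atomic flip: candidate -> active, previous active -> retired.
    for v in versions:
        if v["status"] == "active":
            v["status"] = "retired"
    candidate["status"] = "active"

v1 = set_prompt("You are a helpful support agent.")
cand = optimize_prompt("You are a concise, empathetic support agent.")
approve(cand)
```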
PromptVersion carries outcome aggregates (avg_outcome_score, outcome_count) so you can decide whether a candidate is actually an improvement before promoting it. See Prompt Optimization Lifecycle for the end-to-end workflow.
From the console
The same lifecycle is available without writing code. Open an agent’s Prompts tab at /app/projects/<pid>/agents/<aid>/prompts.
| You want to… | Where to click |
|---|---|
| Edit the active prompt by hand | Edit button on the Active System Prompt card → Save & Create Version (source: manual) |
| Ask the optimizer for a candidate | Suggest Optimization button (sparkles icon) on the same card. Creates a new row with status: candidate, source: optimization, auto-expanded to show the new prompt |
| Review a candidate’s diff against active | Review on the pending-candidate banner, or Compare in the Version History table → opens /app/projects/<pid>/agents/<aid>/compare/<vid> with a unified diff and the candidate’s optimization summary |
| Approve a candidate | Approve in the banner or Approve & Activate on the compare page. Flips candidate → active and previous active → retired atomically |
| Roll back | Find a retired row in Version History → Compare to confirm → approve that version. The activation is recorded with source: rollback |
The console uses the instance’s default optimizer model. If you need a specific provider/model/temperature for a single optimize run, call
client.optimize_prompt(..., llm_override={"provider": ..., "model": ..., "temperature": ...}) from the SDK — the console does not expose that override today.
Skills
Skills are tools or playbooks attached to a project (and optionally bound to a specific agent). They version identically to prompts.
skill_type:
- "tool" — a callable function with a parameters schema.
- "playbook" — a longer-form text description of a procedure the agent should follow.
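The two variants can be illustrated as plain dicts. The field names and example skills are assumptions for illustration, not the control plane’s schema:

```python
# A "tool" skill: callable, so it carries a JSON Schema for its arguments.
tool_skill = {
    "name": "lookup_order",
    "skill_type": "tool",
    "parameters_schema": {
        "type": "object",
        "properties": {"order_id": {"type": "string"}},
        "required": ["order_id"],
    },
    "instructions": "Call when the user references an order number.",
}

# A "playbook" skill: prose procedure, no callable signature.
playbook_skill = {
    "name": "refund_procedure",
    "skill_type": "playbook",
    "parameters_schema": None,
    "instructions": "1. Verify the purchase. 2. Check the refund window.",
}
```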
From the console
Skills follow the same candidate → active → retired flow as prompts, wrapped in a UI at /app/projects/<pid>/skills/<sid>:
| You want to… | Where to click |
|---|---|
| Edit Description, Parameters Schema, or Instructions by hand | Edit on the Active Definition card → three dedicated fields → Save & Create Version |
| Ask the optimizer for a candidate | Suggest Optimization on the same card. The diff view at /app/projects/<pid>/skills/<sid>/compare/<vid> shows changes across all three fields in one unified diff |
| Approve | Approve on the pending-candidate banner, or Approve & Activate on the compare page |
| Roll back | Pick a retired version from Version History → Compare → approve |
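The cross-field diff the compare page shows can be approximated with difflib. This is a sketch of the idea, not the console’s implementation:

```python
import difflib

def skill_diff(old, new):
    """Unified diff across the three editable fields of a skill version."""
    fields = ["description", "parameters_schema", "instructions"]
    lines = []
    for f in fields:
        lines += difflib.unified_diff(
            str(old.get(f, "")).splitlines(),
            str(new.get(f, "")).splitlines(),
            fromfile=f"active/{f}", tofile=f"candidate/{f}", lineterm="",
        )
    return "\n".join(lines)

old = {"description": "Looks up orders", "instructions": "Use the order id."}
new = {"description": "Looks up orders by id", "instructions": "Use the order id."}
print(skill_diff(old, new))
```

Fields that didn’t change contribute nothing to the output, so the diff stays focused on what the candidate actually touched.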
Project-level skills are listed at /app/projects/<pid>/skills; agent-scoped skills appear under the owning agent at /app/projects/<pid>/agents/<aid>/skills.
Run history per project
Every run your agents execute gets a row in the project’s run history with aggregates: lessons_extracted, prompt_changes, avg_outcome_score, outcome_count, ingest_count. Use this to track how an agent’s behavior evolves over time without querying raw memory.
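A sketch of how those aggregates roll up from raw outcome scores. The row shape mirrors the field list above; the fetch call itself and the sample data are assumptions:

```python
runs = [
    {"run_id": "r1", "lessons_extracted": 2, "prompt_changes": 1,
     "outcome_scores": [0.5, 1.0], "ingest_count": 5},
    {"run_id": "r2", "lessons_extracted": 0, "prompt_changes": 0,
     "outcome_scores": [0.9], "ingest_count": 3},
]

# Derive the two score aggregates each run history row carries.
for run in runs:
    scores = run["outcome_scores"]
    run["outcome_count"] = len(scores)
    run["avg_outcome_score"] = sum(scores) / len(scores) if scores else None
```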
In the console: each agent has a Runs tab (/app/projects/<pid>/agents/<aid>/runs) that renders the same run history with filters and drill-down. The project-level Logs tab (/app/projects/<pid>/logs) spans all agents in the project.
Try a prompt or skill before shipping it
Every agent page has a Play tab at /app/projects/<pid>/agents/<aid>/play. It runs a real query against the project’s instance and shows the response, retrieved memory, and invoked skills — the same trace an SDK call would see. Use it as a smoke test after activating a new prompt or skill version.
Caveat: Play runs against whichever version is currently active, so you can dry-run a freshly-activated version but not a still-candidate one. To pressure-test a candidate specifically, briefly activate it in a non-production project, or use the advanced-panel direct_bypass mode to skip prompt routing.
When to use the resource model vs. raw run_id
| Use the resource model when… | Stick with raw run_id when… |
|---|---|
| You want to version prompts + skills and roll back | You’re prototyping against a single unmanaged endpoint |
| Multiple teammates or CI workflows need to coordinate on an agent’s config | Your agent config lives entirely in your application code |
| You want the console’s “Pending Optimization” review UX | You don’t need a human-in-the-loop approval step |
| You want per-project billing / observability boundaries | You only have one project and never expect to grow |
Related pages
- Prompt Optimization Lifecycle (recipe) — end-to-end: capture outcomes → optimize → diff → activate.
- Control HTTP — Managed resources — raw wire contract.
- Control gRPC — Managed resources — proto RPCs.
- Authentication — API keys authorise against a specific project’s instance.