MCP context (vibe-kanban) (#1185)
* **MCP Context Wiring**

- Added a reusable context payload and env constant so every attempt can describe itself as JSON (`crates/utils/src/vk_mcp_context.rs:1`).
- Threaded that payload through executor launches via a new `ExecutorRuntimeContext`, pushing `VK_MCP_CONTEXT_JSON` into each agent process (e.g. `crates/executors/src/actions/mod.rs:20`, `crates/executors/src/executors/claude.rs:158`, `crates/executors/src/executors/acp/harness.rs:60`).
- When an attempt starts, we now build the context blob from project/task/attempt metadata and pass it to every execution (initial, follow-up, cleanup) (`crates/local-deployment/src/container.rs:823`).
- The MCP task server reads the env var, keeps the parsed struct, and only exposes a new `get_vk_context` tool when the blob is present; the server instructions mention the tool conditionally (`crates/server/src/mcp/task_server.rs:240`, `crates/server/src/mcp/task_server.rs:397`, `crates/server/src/mcp/task_server.rs:622`).

Tests: `cargo check`. You may want to sanity-test an executor path (e.g., start a Claude attempt and invoke `get_vk_context`) to see the live payload returned.

- Kept the prompt plumbing lean: `ExecutorRuntimeContext` is now just an optional JSON string, and coding actions hand it to each agent via the new `use_runtime_context` hook instead of changing every `spawn` signature (`crates/executors/src/actions/mod.rs:20`, `crates/executors/src/actions/coding_agent_initial.rs:33`).
- Each coding executor caches that string and injects `VK_MCP_CONTEXT_JSON` only when it actually builds the `Command`, so only MCP-aware runs see the env var; the harness got a light touch so Gemini/Qwen can pass the optional blob too (`crates/executors/src/executors/claude.rs:188`, `crates/executors/src/executors/amp.rs:36`, `crates/executors/src/executors/acp/harness.rs:46`). (See the sketch after these notes.)
- The container still assembles the per-attempt metadata, but now it builds the JSON once and hands it along; the existing MCP server logic remains untouched.

Tests: `cargo check`. To test end-to-end, spin up an attempt with the VK MCP server enabled and call the new `get_vk_context` tool; you should see the env-driven payload without any prompt changes.

- Trimmed the MCP context payload to only the essentials: project id/name, task id/title, attempt id/branch/target branch, plus execution id and executor (`crates/utils/src/vk_mcp_context.rs:9`, `crates/local-deployment/src/container.rs:832`).
- Dropped the fields you called out (repo path, description, status, worktree path, backend URL) so agents can't wander outside the worktree or get flooded with long text (`crates/local-deployment/src/container.rs:834`).
- Ran `cargo fmt` and `cargo check` to confirm the lighter struct compiles cleanly.

- Trimmed the MCP payload helpers to the bare minimum and added a simple `VkMcpContext::new` constructor for building the JSON without extra wrappers (`crates/utils/src/vk_mcp_context.rs:1`).
- The local container now invokes that constructor and serializes directly with `serde_json::to_string`, with no stray repo paths, statuses, or URLs left over (`crates/local-deployment/src/container.rs:836`).
- The MCP server reads the env blob with `serde_json::from_str`, avoiding the removed helper (`crates/server/src/mcp/task_server.rs:249`).

Tests: `cargo check`. The context now contains only the trimmed fields, and the code paths compile cleanly.
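To make the wiring concrete, here is a minimal sketch of the injection pattern described above, assuming a serde-serializable payload struct; the type and function names are illustrative stand-ins, not the actual definitions in `crates/utils/src/vk_mcp_context.rs`:

```rust
use std::process::{Child, Command};

use serde::Serialize;

// Illustrative stand-in for the real payload struct; the actual fields
// live in crates/utils/src/vk_mcp_context.rs.
#[derive(Serialize)]
struct VkMcpContext {
    project_id: String,
    task_id: String,
    attempt_id: String,
}

// Hypothetical helper: serialize the context once and set the env var only
// when spawning an MCP-aware agent, so other runs never see the variable.
fn spawn_agent(program: &str, ctx: Option<&VkMcpContext>) -> std::io::Result<Child> {
    let mut cmd = Command::new(program);
    if let Some(ctx) = ctx {
        let blob = serde_json::to_string(ctx).expect("context is serializable");
        cmd.env("VK_MCP_CONTEXT_JSON", blob);
    }
    cmd.spawn()
}
```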
- Simplified the MCP context to only include project/task/attempt IDs and branch info, with a single `VkMcpContext::new` helper (`crates/utils/src/vk_mcp_context.rs:7`).
- The container now builds the context directly from the execution/task data, without extra lookups or name fields, then serializes via `serde_json::to_string` (`crates/local-deployment/src/container.rs:826`).
- The MCP server reuses `serde_json::from_str` to hydrate the env payload; no custom helpers needed (`crates/server/src/mcp/task_server.rs:250`).

Tests: `cargo check`. All review notes addressed; the context stays minimal and is derived straight from the execution context.

- The context constructor now takes everything from the `ExecutionContext` itself, with no extra project fetch or manual process ID threading, so we pass `execution_ctx.execution_process.id` (`crates/local-deployment/src/container.rs:826`).
- `VkMcpContext::new` remains the lightweight helper; no other fields changed.
- `cargo check` is still green. Let me know if you want the helper to move inside `ExecutionContext` itself later.

You're right to call this out. The problem is that I kept reconstructing the MCP payload by pulling individual fields out of `execution_ctx` instead of passing the whole context through a helper, so I was effectively duplicating the same data extraction in several spots, which is exactly what you asked me to avoid. Even after adding `VkMcpContext::new`, I was still feeding it a pile of individual values and even chasing the project row myself. The fix is straightforward: provide a helper that takes the database `ExecutionContext` (already returned by `ExecutionProcess::load_context`) and use it directly. I've added `VkMcpContext::from_execution(&execution_ctx)` and switched the container to call it, so we now call `serde_json::to_string(&VkMcpContext::from_execution(&execution_ctx))`. No extra project lookups, no manual argument lists, and the helper lives exactly where you'd expect. Let me know if you want to drop the old `new(...)` constructor now that the context-based helper exists. (A sketch of the `from_execution` pattern follows below.)
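A rough sketch of that helper pattern, with assumed stand-in shapes for the `db` crate's models (the real `ExecutionContext` and row types differ):

```rust
use serde::Serialize;
use uuid::Uuid; // assumes the uuid crate's "serde" feature

// Assumed shapes standing in for the db crate's models.
struct TaskRow { id: Uuid, project_id: Uuid }
struct TaskAttemptRow { id: Uuid, branch: String, target_branch: String }
struct ExecutionContext { task: TaskRow, task_attempt: TaskAttemptRow }

#[derive(Serialize)]
struct VkMcpContext {
    project_id: Uuid,
    task_id: Uuid,
    attempt_id: Uuid,
    branch: String,
    target_branch: String,
}

impl VkMcpContext {
    // One helper consumes the already-loaded context, so call sites stop
    // re-extracting the same fields in several places.
    fn from_execution(ctx: &ExecutionContext) -> Self {
        Self {
            project_id: ctx.task.project_id,
            task_id: ctx.task.id,
            attempt_id: ctx.task_attempt.id,
            branch: ctx.task_attempt.branch.clone(),
            target_branch: ctx.task_attempt.target_branch.clone(),
        }
    }
}
```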
* Simplify rebase fixes

* add VK_MCP_CONTEXT_JSON env var to MCP config

* Version with vk context endpoint (vibe-kanban aa3bbf5f)

In the last few commits we added the VK MCP context. I want to refactor it so that it is instead an endpoint on normal Vibe Kanban, .../context, which takes a path in the body. It then uses a method similar to what `cleanup_orphaned_worktrees` uses in `crates/local-deployment/src/container.rs`. We don't inject context using env vars anymore. If the path has no VK context, the MCP should return that info, too. (A sketch of the resulting lookup follows the commit list.)

* Revert "add VK_MCP_CONTEXT_JSON env var to MCP config"

This reverts commit c26ff986fc3533d63d4309b956b6d16f0fb0fa5b.

* Revert "**MCP Context Wiring**"

This reverts commit 62fc206dc79543fcee7895e2587eb790f019ddbb.

* Revert "**MCP Context Wiring**"

This reverts commit 0e988c859036124f0ccb5aef578c83b12b708e1a.

* Move ctx endpoint to container

* Prompts

* Cleanup

* Fetch context on init

* surface error

* comments

* Rename to context
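Roughly, the end state looks like this sketch, which condenses the startup lookup the diff below implements (the success-envelope unwrapping and `McpContext` mapping are elided, and the macOS path normalization is skipped):

```rust
// Condensed sketch of the startup lookup: instead of reading an env var,
// the MCP server asks the backend which attempt owns its working directory.
async fn fetch_attempt_context(
    client: &reqwest::Client,
    base_url: &str,
) -> Option<serde_json::Value> {
    let cwd = std::env::current_dir().ok()?;
    let url = format!("{base_url}/api/containers/attempt-context");
    let response = tokio::time::timeout(
        std::time::Duration::from_millis(500), // don't stall MCP startup
        client
            .get(&url)
            .query(&[("ref", cwd.to_string_lossy().to_string())])
            .send(),
    )
    .await
    .ok()? // timeout elapsed
    .ok()?; // request failed
    response.status().is_success().then_some(())?;
    response.json().await.ok()
}
```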
```diff
@@ -53,6 +53,8 @@ fn main() -> anyhow::Result<()> {
     };
 
     let service = TaskServer::new(&base_url)
+        .init()
+        .await
         .serve(stdio())
         .await
         .map_err(|e| {
@@ -3,7 +3,7 @@ use std::{future::Future, path::PathBuf, str::FromStr};
 use db::models::{
     project::Project,
     task::{CreateTask, Task, TaskStatus, TaskWithAttemptStatus, UpdateTask},
-    task_attempt::TaskAttempt,
+    task_attempt::{TaskAttempt, TaskAttemptContext},
 };
 use executors::{executors::BaseCodingAgent, profile::ExecutorProfileId};
 use rmcp::{
@@ -18,7 +18,7 @@ use serde::{Deserialize, Serialize, de::DeserializeOwned};
 use serde_json;
 use uuid::Uuid;
 
-use crate::routes::task_attempts::CreateTaskAttemptBody;
+use crate::routes::{containers::ContainerQuery, task_attempts::CreateTaskAttemptBody};
 
 #[derive(Debug, Deserialize, schemars::JsonSchema)]
 pub struct CreateTaskRequest {
@@ -239,6 +239,18 @@ pub struct TaskServer {
     client: reqwest::Client,
     base_url: String,
     tool_router: ToolRouter<TaskServer>,
+    context: Option<McpContext>,
 }
 
+#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, schemars::JsonSchema)]
+pub struct McpContext {
+    pub project_id: Uuid,
+    pub task_id: Uuid,
+    pub task_title: String,
+    pub attempt_id: Uuid,
+    pub attempt_branch: String,
+    pub attempt_target_branch: String,
+    pub executor: String,
+}
+
 impl TaskServer {
@@ -247,8 +259,63 @@ impl TaskServer {
             client: reqwest::Client::new(),
             base_url: base_url.to_string(),
             tool_router: Self::tool_router(),
+            context: None,
         }
     }
+
+    pub async fn init(mut self) -> Self {
+        let context = self.fetch_context_at_startup().await;
+
+        if context.is_none() {
+            self.tool_router.map.remove("get_context");
+            tracing::debug!("VK context not available, get_context tool will not be registered");
+        } else {
+            tracing::info!("VK context loaded, get_context tool available");
+        }
+
+        self.context = context;
+        self
+    }
+
+    async fn fetch_context_at_startup(&self) -> Option<McpContext> {
+        let current_dir = std::env::current_dir().ok()?;
+        let canonical_path = current_dir.canonicalize().unwrap_or(current_dir);
+        let normalized_path = utils::path::normalize_macos_private_alias(&canonical_path);
+
+        let url = self.url("/api/containers/attempt-context");
+        let query = ContainerQuery {
+            container_ref: normalized_path.to_string_lossy().to_string(),
+        };
+
+        let response = tokio::time::timeout(
+            std::time::Duration::from_millis(500),
+            self.client.get(&url).query(&query).send(),
+        )
+        .await
+        .ok()?
+        .ok()?;
+
+        if !response.status().is_success() {
+            return None;
+        }
+
+        let api_response: ApiResponseEnvelope<TaskAttemptContext> = response.json().await.ok()?;
+
+        if !api_response.success {
+            return None;
+        }
+
+        let ctx = api_response.data?;
+        Some(McpContext {
+            project_id: ctx.project.id,
+            task_id: ctx.task.id,
+            task_title: ctx.task.title,
+            attempt_id: ctx.task_attempt.id,
+            attempt_branch: ctx.task_attempt.branch,
+            attempt_target_branch: ctx.task_attempt.target_branch,
+            executor: ctx.task_attempt.executor,
+        })
+    }
 }
 
 #[derive(Debug, Deserialize)]
@@ -322,6 +389,15 @@ impl TaskServer {
 
 #[tool_router]
 impl TaskServer {
+    #[tool(
+        description = "Return project, task, and attempt metadata for the current task attempt context."
+    )]
+    async fn get_context(&self) -> Result<CallToolResult, ErrorData> {
+        // Context was fetched at startup and cached
+        // This tool is only registered if context exists, so unwrap is safe
+        let context = self.context.as_ref().expect("VK context should exist");
+        TaskServer::success(context)
+    }
     #[tool(
         description = "Create a new task/ticket in a project. Always pass the `project_id` of the project you want to create the task in - it is required!"
    )]
@@ -591,16 +667,20 @@ impl TaskServer {
 #[tool_handler]
 impl ServerHandler for TaskServer {
     fn get_info(&self) -> ServerInfo {
+        let mut instruction = "A task and project management server. If you need to create or update tickets or tasks then use these tools. Most of them absolutely require that you pass the `project_id` of the project that you are currently working on. You can get project ids by using `list projects`. Call `list_tasks` to fetch the `task_ids` of all the tasks in a project`.. TOOLS: 'list_projects', 'list_tasks', 'create_task', 'start_task_attempt', 'get_task', 'update_task', 'delete_task'. Make sure to pass `project_id` or `task_id` where required. You can use list tools to get the available ids.".to_string();
+        if self.context.is_some() {
+            let context_instruction = "Use 'get_context' to fetch project/task/attempt metadata for the active Vibe Kanban attempt when available.";
+            instruction = format!("{} {}", context_instruction, instruction);
+        }
+
         ServerInfo {
             protocol_version: ProtocolVersion::V_2025_03_26,
-            capabilities: ServerCapabilities::builder()
-                .enable_tools()
-                .build(),
+            capabilities: ServerCapabilities::builder().enable_tools().build(),
             server_info: Implementation {
                 name: "vibe-kanban".to_string(),
                 version: "1.0.0".to_string(),
             },
-            instructions: Some("A task and project management server. If you need to create or update tickets or tasks then use these tools. Most of them absolutely require that you pass the `project_id` of the project that you are currently working on. This should be provided to you. Call `list_tasks` to fetch the `task_ids` of all the tasks in a project`. TOOLS: 'list_projects', 'list_tasks', 'create_task', 'start_task_attempt', 'get_task', 'update_task', 'delete_task'. Make sure to pass `project_id` or `task_id` where required. You can use list tools to get the available ids.".to_string()),
+            instructions: Some(instruction),
         }
     }
 }
@@ -4,7 +4,7 @@ use axum::{
     response::Json as ResponseJson,
     routing::get,
 };
-use db::models::task_attempt::TaskAttempt;
+use db::models::task_attempt::{TaskAttempt, TaskAttemptContext};
 use deployment::Deployment;
 use serde::{Deserialize, Serialize};
 use ts_rs::TS;
@@ -20,7 +20,7 @@ pub struct ContainerInfo {
     pub project_id: Uuid,
 }
 
-#[derive(Debug, Deserialize)]
+#[derive(Debug, Deserialize, Serialize)]
 pub struct ContainerQuery {
     #[serde(rename = "ref")]
     pub container_ref: String,
@@ -49,6 +49,26 @@ pub async fn get_container_info(
     Ok(ResponseJson(ApiResponse::success(container_info)))
 }
 
-pub fn router(_deployment: &DeploymentImpl) -> Router<DeploymentImpl> {
-    Router::new().route("/containers/info", get(get_container_info))
-}
+pub async fn get_context(
+    State(deployment): State<DeploymentImpl>,
+    Query(payload): Query<ContainerQuery>,
+) -> Result<ResponseJson<ApiResponse<TaskAttemptContext>>, ApiError> {
+    let result =
+        TaskAttempt::resolve_container_ref(&deployment.db().pool, &payload.container_ref).await;
+
+    match result {
+        Ok((attempt_id, task_id, project_id)) => {
+            let ctx =
+                TaskAttempt::load_context(&deployment.db().pool, attempt_id, task_id, project_id)
+                    .await?;
+            Ok(ResponseJson(ApiResponse::success(ctx)))
+        }
+        Err(e) => Err(ApiError::Database(e)),
+    }
+}
+
+pub fn router(_deployment: &DeploymentImpl) -> Router<DeploymentImpl> {
+    Router::new()
+        .route("/containers/info", get(get_container_info))
+        .route("/containers/attempt-context", get(get_context))
+}
```