bd09ba8bd0591ff4521a21883456232548adf6ca
466 Commits
bd09ba8bd0 | chore: bump version to 0.0.148
29595c1929 | Add Azure DevOps repository support for PR operations (Vibe Kanban) (#1808)
* All tasks are complete. Let me summarize what was implemented:
I've implemented Azure DevOps repository support alongside the existing GitHub integration. Here's what was done. The new `git_host` service module contains:
- **`mod.rs`** - Defines the `GitHostService` trait and factory functions for creating services
- **`types.rs`** - Unified types: `GitHostProvider`, `RepoInfo`, `GitHostError`, `CreatePrRequest`, `UnifiedPrComment`
- **`detection.rs`** - URL pattern matching for auto-detecting GitHub (including Enterprise) and Azure DevOps
- **`github/mod.rs`** - `GitHubHostService` implementing the trait
- **`github/cli.rs`** - `GhCli` wrapper (moved from old location)
- **`azure/mod.rs`** - `AzureHostService` implementing the trait
- **`azure/cli.rs`** - `AzCli` wrapper for Azure DevOps CLI commands
1. **Auto-detection** (see the sketch after this list): The system detects the provider from the git remote URL:
- `github.com` → GitHub
- `github.*.com` → GitHub Enterprise
- `dev.azure.com`, `*.visualstudio.com`, or URLs with `/_git/` → Azure DevOps
2. **Unified trait interface**: Both providers implement `GitHostService` with methods for:
- `get_repo_info()` - Get repository identification
- `check_auth()` - Verify CLI authentication
- `create_pr()` - Create pull requests
- `get_pr_status()` - Get PR status by URL
- `list_prs_for_branch()` - List PRs for a branch
- `get_pr_comments()` - Get PR comments
3. **Azure CLI commands supported**:
- `az repos pr create` - Create PRs
- `az repos pr show` - View PR details
- `az repos pr list` - List PRs
- `az repos pr list-threads` - Get PR comments
- `az account show` - Auth check
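A minimal sketch of the auto-detection rules from item 1, assuming simplified substring matching (the real `detection.rs` may be stricter about hostnames):
```rust
#[derive(Debug, PartialEq)]
enum GitHostProvider {
    GitHub,
    AzureDevOps,
}

// Rough approximation of the URL rules listed above; names and matching
// strategy are assumptions, not the actual implementation.
fn detect_provider(remote_url: &str) -> Option<GitHostProvider> {
    let url = remote_url.to_lowercase();
    if url.contains("dev.azure.com") || url.contains(".visualstudio.com") || url.contains("/_git/") {
        Some(GitHostProvider::AzureDevOps)
    } else if url.contains("github.") {
        // Covers github.com as well as Enterprise hosts like github.mycompany.com.
        Some(GitHostProvider::GitHub)
    } else {
        None
    }
}
```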
Updated call sites and supporting code:
- `pr.rs` - Uses auto-detection for PR creation
- `pr_monitor.rs` - Uses provider detection from PR URL
- `task_attempts.rs` - Uses git_host for push auth checking
- `error.rs` - Updated `GitHost` error variant
- `share.rs` - Updated `GitHost` error variant
- `generate_types.rs` - Updated type exports
- Removed the old `crates/services/src/services/github.rs` module and directory (replaced by `git_host`)
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
All tests pass, including the new URL decoding tests. The fix is complete - the URL-encoded components (like `Vibe%20Kanban%20Test`) will now be properly decoded to `Vibe Kanban Test` before being passed to the Azure CLI.
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
All done! Here's a summary of the changes:
Updated `get_repo_info()` to use the Azure CLI's `az repos show --detect true` command instead of manual URL parsing, similar to how GitHub uses `gh repo view --json owner,name`.
1. **Added `extract_repo_name_from_url()`** - A simple helper that extracts just the repository name from Azure DevOps URLs (looks for `/_git/{repo}` pattern). This is still needed because `az repos show` requires the `--repository` parameter.
2. **Updated `get_repo_info()`** - Now uses `az repos show --repository {name} --detect true --output json` which:
- Auto-detects organization and project from git config
- Returns canonical names (properly decoded)
- Validates the repo exists
3. **Added `extract_org_from_response()`** - Extracts the organization from the `remoteUrl` field in the CLI response, with fallback to URL parsing.
4. **Fixed `run()` method** - Now properly passes the `dir` parameter to `Command::current_dir()` so the `--detect` flag works correctly.
5. **Kept `parse_azure_remote_url()`** - Still used as a fallback for organization extraction and for PR URL parsing.
6. **Added tests** - New tests for `extract_repo_name_from_url()` covering various URL formats.
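A sketch of the `/_git/{repo}` extraction described in item 1 above; the signature is an assumption, and the real helper also deals with URL-encoded names:
```rust
// Azure DevOps HTTPS remotes look like https://dev.azure.com/{org}/{project}/_git/{repo}.
// The extracted name is what gets passed to `az repos show --repository`.
fn extract_repo_name_from_url(remote_url: &str) -> Option<String> {
    let rest = remote_url.split("/_git/").nth(1)?;
    Some(rest.trim_end_matches('/').to_string())
}
```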
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
Done! The cleanup is complete. Here's what was removed:
Removed the brittle `parse_azure_remote_url()` function and related code:
1. **Removed `AzureRepoInfo` struct** - No longer needed
2. **Removed `parse_azure_remote_url()` function** - The ~90-line function that manually parsed various Azure DevOps URL formats
3. **Updated `get_repo_info()`** - Changed from fallback (`unwrap_or_else`) to error (`ok_or_else`) when organization can't be extracted from the CLI response
4. **Removed 4 tests** - `test_parse_azure_remote_url_https`, `test_parse_azure_remote_url_visualstudio`, `test_parse_azure_remote_url_ssh`, `test_parse_azure_remote_url_with_spaces`
The code now relies entirely on:
- `extract_repo_name_from_url()` - Simple helper to get repo name from URL (for passing to `az repos show`)
- `az repos show --detect true` - Azure CLI command that auto-detects org/project from git config
- `extract_org_from_response()` - Extracts org from the CLI's `remoteUrl` JSON field
The implementation is complete. Here's a summary of the changes:
Simplified the Azure DevOps integration by using the CLI's `--detect` flag instead of parsing URLs to extract org/project/repo.
1. **`types.rs`** - Changed `RepoInfo::AzureDevOps` to store just `repo_path: PathBuf` instead of `organization`, `project`, and `repo_name`. Removed the unused `repo_name()` method.
2. **`cli.rs`** - Major simplification:
- Removed URL parsing helpers: `decode_url_component()`, `extract_repo_name_from_url()`, `extract_org_from_response()`
- `get_repo_info()` now just verifies it's a valid Azure repo via `az repos show --detect true` and returns the path
- `create_pr()` now takes `repo_path` and uses `--detect true` instead of explicit org/project/repo
- `list_prs_for_branch()` now takes `repo_path` and uses `--detect true`
- `get_pr_threads()` now takes `repo_path` and uses `--detect true`
- `view_pr()` now takes the full PR URL and parses org + PR ID internally
- `parse_pr_url()` simplified to return just `(org, pr_id)` instead of 4 values
- Parsing functions simplified to extract URLs from response's `repository.webUrl` field
- Updated tests for the new API
3. **`mod.rs`** - Updated `AzureHostService` to:
- Extract `repo_path` from `RepoInfo::AzureDevOps`
- Pass `repo_path` to CLI methods that need it
- Pass the PR URL directly to `view_pr()` for `get_pr_status()`
- URL parsing reduced to minimal extraction of org + PR ID from PR URLs (only needed for `get_pr_status()`)
- Repo-based operations don't parse URLs at all - CLI auto-detects via git config
- Much cleaner, less brittle code
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
All done. The implementation is complete:
**Summary of changes:**
1. **`types.rs`**: Updated `RepoInfo::AzureDevOps` to store `organization_url`, `project`, `repo_name` instead of `repo_path`
2. **`cli.rs`**:
- `get_repo_info()` now parses the JSON response from `az repos show --detect true` to extract repo name, project name, and organization URL
- Added `extract_org_url()` helper to extract the base org URL from the API URL
- Updated `create_pr()`, `list_prs_for_branch()`, `get_pr_threads()` to take explicit `organization_url`, `project`, `repo_name` params
3. **`mod.rs`** (`AzureHostService`):
- `create_pr()`, `list_prs_for_branch()`, `get_pr_comments()` all now extract org/project/repo from `RepoInfo` and pass them to the CLI methods
The key insight was that the `url` field from `az repos show` (e.g., `https://dev.azure.com/{org}/.../_apis/...`) can be truncated to `https://dev.azure.com/{org}` and used directly with the `--organization` flag — minimal URL parsing needed.
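A sketch of that truncation, assuming the `url` crate is available (the real `extract_org_url()` may be a plain string operation):
```rust
use url::Url;

// Turn e.g. https://dev.azure.com/{org}/{project-id}/_apis/git/repositories/{repo-id}
// into https://dev.azure.com/{org}, suitable for the --organization flag.
fn extract_org_url(api_url: &str) -> Option<String> {
    let parsed = Url::parse(api_url).ok()?;
    let org = parsed.path_segments()?.next()?.to_string();
    Some(format!("{}://{}/{}", parsed.scheme(), parsed.host_str()?, org))
}
```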
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
* Done. Refactored all 4 `serde_json::Value` parsing sites to use typed structs:
**Changes to `cli.rs`:**
1. **Added typed response structs** with `#[derive(Deserialize)]`:
- `AzRepoShowResponse`, `AzProject` - for `az repos show`
- `AzPrResponse`, `AzRepository`, `AzCommit` - for PR responses
- `AzThread`, `AzThreadContext`, `AzFilePosition`, `AzThreadComment`, `AzAuthor` - for thread comments
2. **Updated parsing functions**:
- `get_repo_info()` - now deserializes into `AzRepoShowResponse`
- `parse_pr_response()` - now deserializes into `AzPrResponse`
- `parse_pr_list_response()` - now deserializes into `Vec<AzPrResponse>`
- `parse_pr_threads()` - now deserializes into `Vec<AzThread>`
3. **Replaced `extract_pr_info()`** with `az_pr_to_info()` that takes the typed struct directly
4. **Removed `serde_json::Value` import** - no longer needed
The code is now cleaner with no `.get().and_then().as_str()` chains, and serde provides better error messages on parse failures.
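An abbreviated sketch of the typed parsing; the field sets shown here are assumptions and the real structs cover more of the `az` JSON:
```rust
use serde::Deserialize;

#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct AzProject {
    name: String,
}

#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct AzRepoShowResponse {
    name: String,
    project: AzProject,
    remote_url: String,
}

// With typed structs, a parse failure reports the missing or mistyped field
// instead of silently falling through a .get().and_then() chain.
fn parse_repo_show(json: &str) -> Result<AzRepoShowResponse, serde_json::Error> {
    serde_json::from_str(json)
}
```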
* Done. Added the missing `--repository` parameter to `get_pr_threads()`:
1. **`cli.rs`**: Added `repo_name: &str` parameter and `--repository` flag to the CLI command
2. **`mod.rs`**: Updated `get_pr_comments()` to extract both `organization_url` and `repo_name` from `RepoInfo` and pass them to the CLI
Done. Here's a summary of the changes:
**Changes made to `crates/services/src/services/git_host/azure/cli.rs`:**
1. **Removed git2 dependency** - No longer imports `git2::Repository`
2. **Added new typed structs** for `az repos list` response:
- `AzRepoListItem` - with `name`, `project`, and `remote_url` fields
- `AzRepoProject` - with `name` field
3. **Rewrote `get_repo_info()`** to use `az repos list --detect true`:
- Runs CLI command from the repo directory to auto-detect org/project
- Takes the first repo from the list (assuming single repo per directory)
- Extracts org from `remoteUrl` field
4. **Replaced `parse_azure_remote_url()`** with simpler `extract_org_from_remote_url()`:
- Only extracts the organization name from the URL
- Handles both `dev.azure.com` and `visualstudio.com` formats
- Much simpler than the previous 3-tuple URL parser
5. **Updated tests** - Replaced the old URL parsing tests with new `extract_org_from_remote_url` tests
The key improvement is that `az repos list --detect true` uses the Azure CLI's built-in detection (from git config) to find the org and project, then returns all repo info including project name. This avoids git2 library usage and complex URL parsing entirely.
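A sketch of the `az repos list --detect true` flow described above, run from the repo directory so detection can read git config (struct fields abbreviated, error type assumed):
```rust
use std::{path::Path, process::Command};

#[derive(serde::Deserialize)]
#[serde(rename_all = "camelCase")]
struct AzRepoListItem {
    name: String,
    remote_url: String,
}

fn list_repos(repo_dir: &Path) -> anyhow::Result<Vec<AzRepoListItem>> {
    // Running inside the repo directory lets --detect true infer org/project from git config.
    let output = Command::new("az")
        .args(["repos", "list", "--detect", "true", "--output", "json"])
        .current_dir(repo_dir)
        .output()?;
    anyhow::ensure!(
        output.status.success(),
        "az repos list failed: {}",
        String::from_utf8_lossy(&output.stderr)
    );
    Ok(serde_json::from_slice(&output.stdout)?)
}
```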
Done. Here's a summary of all changes:
**Summary of Changes**
1. **`crates/services/src/services/git_host/types.rs`**:
- Added `project_id` and `repo_id` fields to `RepoInfo::AzureDevOps` enum variant
2. **`crates/services/src/services/git_host/azure/cli.rs`**:
- Added `id` field to `AzRepoListItem` struct
- Added `id` field to `AzRepoProject` struct
- Added `AzThreadsResponse` struct to wrap the `{ "value": [...] }` REST API response
- Updated `get_repo_info()` to return `project_id` and `repo_id` from `az repos list` output
- Updated `get_pr_threads()` to:
- Accept `project_id` and `repo_id` instead of names
- Use `az rest` with REST API URL instead of non-existent `az repos pr list-threads`
- Updated `parse_pr_threads()` to unwrap the `value` field from REST API response
3. **`crates/services/src/services/git_host/azure/mod.rs`**:
- Updated `create_pr` and `list_prs_for_branch` to use `..` pattern for new fields
- Updated `get_pr_comments()` to extract and pass `project_id` and `repo_id` to CLI
**Key Improvements**:
- Uses UUIDs (`project_id`, `repo_id`) instead of names for REST API calls, avoiding URL encoding issues with names containing spaces like "Vibe Kanban Test"
- Fixed `get_pr_threads()` to use `az rest` since `az repos pr list-threads` doesn't exist as an Azure CLI command
- All unit tests pass
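A sketch of unwrapping the REST-style envelope mentioned above; the thread fields are abbreviated assumptions:
```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct AzThreadsResponse {
    value: Vec<AzThread>,
}

#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct AzThread {
    id: i64,
    status: Option<String>,
    comments: Vec<AzThreadComment>,
}

#[derive(Deserialize)]
#[serde(rename_all = "camelCase")]
struct AzThreadComment {
    content: Option<String>,
}

// The threads endpoint returns { "value": [...] }, so deserialize the wrapper
// and hand back just the inner list.
fn parse_pr_threads(json: &str) -> Result<Vec<AzThread>, serde_json::Error> {
    Ok(serde_json::from_str::<AzThreadsResponse>(json)?.value)
}
```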
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
Done. The fix adds `--resource 499b84ac-1321-427f-aa17-267ca6975798` to the `az rest` command, which tells it to request an Azure DevOps token instead of a generic Azure token. This should now properly authenticate to the Azure DevOps REST API.
Done. The fix replaces `az rest` with `az devops invoke --area git --resource pullRequestThreads`. This uses the same authentication as other `az repos pr` commands, so it should work without requiring the separate "identity materialization" step.
Done. The fix replaces the single combined string for `--route-parameters` with individual arguments for each `key=value` pair. This matches how the Azure CLI expects the parameters to be passed - each route parameter should be a separate shell argument after the `--route-parameters` flag.
Done. Removed all the requested comments from:
- `crates/server/src/routes/task_attempts.rs` (lines 451, 490)
- `crates/server/src/routes/task_attempts/pr.rs` (lines 283, 516)
- `crates/services/src/services/git_host/azure/cli.rs` (section header comment, doc comments on structs and methods)
* Done. Removed the module-level doc comment and all the method doc comments from the `GitHostService` trait and public functions in `mod.rs`.
Done. Removed the obvious/redundant comments from `types.rs`.
Done. Removed the unnecessary `Arc` usage from all three functions in `mod.rs`. The strings are already cloned when extracted from `RepoInfo`, so we can just clone them directly in the retry closure.
* All tests pass. Fixed the `npm run check` and `npm run lint` issues by:
1. Regenerated TypeScript types from Rust
2. Updated `frontend/src/lib/api.ts` to use `CreatePrApiRequest` instead of `CreateGitHubPrRequest`
3. Updated `CreatePRDialog.tsx` to handle the new error types (`cli_not_installed`/`cli_not_logged_in` with a `provider` field instead of `github_cli_not_installed`/`github_cli_not_logged_in`)
4. Fixed clippy warnings about collapsible if statements in `parse_pr_url()`
* Done. I've made `url`, `diff_hunk`, and `author_association` optional in the `UnifiedPrComment` type:
**Changes made:**
1. **`crates/services/src/services/git_host/types.rs`** - Changed the `UnifiedPrComment` enum:
- `url: String` → `url: Option<String>`
- `diff_hunk: String` → `diff_hunk: Option<String>`
- `author_association: String` → `author_association: Option<String>`
2. **`crates/services/src/services/git_host/azure/cli.rs`** - Updated Azure comment creation to use `None` instead of empty strings
3. **`crates/services/src/services/git_host/github/mod.rs`** - Wrapped GitHub values in `Some()`
4. **`frontend/src/components/ui/github-comment-card.tsx`** - Updated interface to allow `url?: string | null` and `diffHunk?: string | null`
5. **`frontend/src/components/ui/wysiwyg/nodes/github-comment-node.tsx`**:
- Updated `NormalizedComment` interface
- Fixed click handler to check `if (data.url)` before opening
- Removed `data.url` from validation (only `id`, `comment_type`, `author`, `body` required)
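Roughly what the optional fields look like on the Rust side; the variant names and remaining fields here are hypothetical, only the three `Option` changes come from item 1 above:
```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(tag = "comment_type", rename_all = "snake_case")]
pub enum UnifiedPrComment {
    // Illustrative variants only; the real enum differs.
    Issue {
        id: String,
        author: String,
        body: String,
        url: Option<String>,                // was String
        author_association: Option<String>, // was String
    },
    Review {
        id: String,
        author: String,
        body: String,
        url: Option<String>,                // was String
        diff_hunk: Option<String>,          // was String
        author_association: Option<String>, // was String
    },
}
```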
All checks pass (`pnpm run check`, `pnpm run lint`, `cargo test --workspace`).
* Done. All three changes are complete:
1. **Removed unnecessary temp file** in `azure/cli.rs:create_pr()` - was writing body to temp file then immediately reading it back. Now just passes `body` directly to `OsString`.
2. **Added structs for JSON parsing** in `github/cli.rs:parse_pr_comments()` - replaced manual `item.get("field")...` parsing with `GhCommentsWrapper` and `GhCommentResponse` structs using serde.
3. **Added structs for JSON parsing** in `github/cli.rs:parse_pr_review_comments()` - replaced manual parsing with `GhReviewCommentResponse` struct.
All checks pass (`pnpm run check`, `pnpm run lint`, `cargo test --workspace`).
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
Done. Both changes are complete:
1. **Inlined single-field structs** - Consolidated `GhCommentAuthor` and `GhReviewCommentUser` into a single `GhUserLogin` struct that's reused by both `GhCommentResponse` and `GhReviewCommentResponse`.
2. **Converted `extract_pr_info` to struct parsing** - Added `GhPrResponse` and `GhMergeCommit` structs. Replaced the manual `Value::get()` parsing with serde deserialization. The new `pr_response_to_info()` helper converts the parsed struct to `PullRequestInfo`. Also removed the now-unused `serde_json::Value` import.
All checks pass (`pnpm run check`, `pnpm run lint`, `cargo test --workspace`).
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
* Done. I've updated `detection.rs` to use `GitCli` instead of `git2`:
1. **Added `get_remote_url` method to `GitCli`** (`git/cli.rs`) - Uses `git remote get-url <remote>` to get the URL
2. **Updated `detect_provider`** (`git_host/detection.rs`) - Now uses `GitCli::get_remote_url()` instead of `git2::Repository`
This keeps the codebase consistent with the pattern of using `GitCli` for git operations rather than mixing `git2` directly in various modules.
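A sketch of the `git remote get-url` wrapper (assumed signature; the real `GitCli` has its own error type and async handling):
```rust
use std::{path::Path, process::Command};

fn get_remote_url(repo_path: &Path, remote: &str) -> std::io::Result<String> {
    let output = Command::new("git")
        .args(["remote", "get-url", remote])
        .current_dir(repo_path)
        .output()?;
    if !output.status.success() {
        return Err(std::io::Error::other(
            String::from_utf8_lossy(&output.stderr).into_owned(),
        ));
    }
    // git prints the URL followed by a newline.
    Ok(String::from_utf8_lossy(&output.stdout).trim().to_string())
}
```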
All checks pass (`pnpm run check`, `pnpm run lint`, `cargo test --workspace`).
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
* Done! The refactor is complete. Here's a summary of the changes:
**New Architecture:**
1. **`ProviderKind` enum** (was `GitHostProvider`) - The discriminant for provider types (GitHub, AzureDevOps, Unknown)
2. **`GitHostProvider` trait** (was `GitHostService`) - The interface that provider implementations must satisfy
3. **`GitHubProvider` struct** (was `GitHubHostService`) - GitHub implementation
4. **`AzureDevOpsProvider` struct** (was `AzureHostService`) - Azure DevOps implementation
5. **`GitHostService` struct** (new) - Main entry point that:
- Holds repo context (`repo_path`, `GitCli`)
- Provides `provider_for_remote(remote_name)` to get a provider for a specific remote
- Provides `provider_from_url(url)` for URL-based detection (e.g., PR URLs)
- Provides `provider_for_kind(kind)` for direct provider construction
**Benefits:**
- Provider detection is now contextual - callers specify which remote they want
- The service struct encapsulates provider detection logic
- Cleaner separation: `GitHostService` is the facade, `GitHostProvider` trait is the contract
- Ready for multi-remote support (different providers for different remotes)
All checks pass (`pnpm run check`, `pnpm run lint`, `cargo test --workspace`).
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
All the call sites have been updated to use the new cleaner `GitHostService` facade API. Here's a summary of the changes:
**Changes made:**
1. **`pr.rs:414-418` (`attach_existing_pr`)**: Simplified from:
```rust
let git_host = git_host::GitHostService::new(&repo.path);
let git_host_provider = git_host.provider_for_remote("origin")?;
let repo_info = git_host_provider.get_repo_info(&repo.path).await?;
let prs = git_host_provider.list_prs_for_branch(&repo_info, &workspace.branch).await?;
```
To:
```rust
let git_host = git_host::GitHostService::new(&repo.path);
let prs = git_host.list_prs_for_branch(&workspace.branch).await?;
```
2. **`pr.rs:511-525` (`get_pr_comments`)**: Simplified from provider-based pattern to:
```rust
let git_host = git_host::GitHostService::new(&repo.path);
let provider = git_host.provider_kind()?;
match git_host.get_pr_comments(pr_info.number).await { ... }
```
3. **`task_attempts.rs:451-453` (`push_task_attempt_branch`)**: Removed explicit `check_auth()` call since auth is now checked during the actual operation. Just verify provider is supported:
```rust
let git_host = git_host::GitHostService::new(&worktree_path);
git_host.provider_kind()?;
```
4. **`task_attempts.rs:490-492` (`force_push_task_attempt_branch`)**: Same simplification.
The new API is much cleaner - call sites just create a `GitHostService::new(path)` and call methods directly without dealing with providers, repo info, or explicit auth checks.
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
The refactoring is complete. Here's a summary of what was done:
**Changes made:**
1. **`crates/services/Cargo.toml`** - Added `enum_dispatch` dependency
2. **`crates/services/src/services/git_host/mod.rs`** - Completely rewritten:
- `GitHostProvider` is now a public trait with `#[enum_dispatch(GitHostService)]`
- `GitHostService` is now an enum with `GitHub` and `AzureDevOps` variants
- Constructors: `from_pr_url(url)` and `from_remote_url(url)` - no `new()` method
- Methods now take `repo_path` as a parameter where needed
3. **`crates/services/src/services/git_host/github/mod.rs`** - Updated to match new trait signature (methods take `repo_path`, `check_auth` called inside `create_pr`)
4. **`crates/services/src/services/git_host/azure/mod.rs`** - Updated similarly
5. **`crates/services/src/services/git.rs`** - Added `get_remote_url()` method to `GitService`
6. **Call site updates:**
- `pr_monitor.rs` - Uses `from_pr_url`
- `pr.rs` - Uses `get_remote_name_from_branch_name` + `get_remote_url` + `from_remote_url`
- `task_attempts.rs` - Same pattern for push functions
**New API usage:**
```rust
// From a PR URL (e.g., for checking PR status)
let git_host = GitHostService::from_pr_url(&pr_url)?;
let status = git_host.get_pr_status(&pr_url).await?;
// From a remote URL (e.g., for creating PRs)
let remote_name = git_service.get_remote_name_from_branch_name(&repo_path, &branch)?;
let remote_url = git_service.get_remote_url(&repo_path, &remote_name)?;
let git_host = GitHostService::from_remote_url(&remote_url)?;
let pr_info = git_host.create_pr(&repo_path, &pr_request).await?;
// Get provider kind (infallible since already constructed)
let provider = git_host.provider_kind();
```
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
All cleaned up. Here's the final architecture:
**`git_host/mod.rs`:**
```rust
pub trait GitHostProvider: Send + Sync {
async fn create_pr(&self, repo_path: &Path, request: &CreatePrRequest) -> Result<PullRequestInfo, GitHostError>;
async fn get_pr_status(&self, pr_url: &str) -> Result<PullRequestInfo, GitHostError>;
async fn list_prs_for_branch(&self, repo_path: &Path, branch_name: &str) -> Result<Vec<PullRequestInfo>, GitHostError>;
async fn get_pr_comments(&self, repo_path: &Path, pr_number: i64) -> Result<Vec<UnifiedPrComment>, GitHostError>;
fn provider_kind(&self) -> ProviderKind;
}
pub enum GitHostService {
GitHub(GitHubProvider),
AzureDevOps(AzureDevOpsProvider),
}
impl GitHostService {
pub fn from_url(url: &str) -> Result<Self, GitHostError>
}
```
**Changes made:**
1. Removed `check_auth` and `get_repo_info` from the public trait - they're now private methods on each provider
2. Consolidated `from_pr_url` and `from_remote_url` into a single `from_url` constructor
3. The trait now has exactly 5 methods - the minimal public API needed by call sites
**Call site usage is clean:**
```rust
// From any URL (PR URL or remote URL)
let git_host = GitHostService::from_url(&url)?;
// Use the trait methods
let pr_info = git_host.create_pr(&repo_path, &request).await?;
let status = git_host.get_pr_status(&pr_url).await?;
let prs = git_host.list_prs_for_branch(&repo_path, &branch).await?;
let comments = git_host.get_pr_comments(&repo_path, pr_number).await?;
let kind = git_host.provider_kind();
```
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
Done. The refactoring is complete:
1. **Created `GitHubRepoInfo`** in `git_host/github/cli.rs` - a simple struct with `owner` and `repo_name`
2. **Created `AzureRepoInfo`** in `git_host/azure/cli.rs` - a struct with `organization_url`, `project`, `project_id`, `repo_name`, `repo_id`
3. **Updated both providers** to use their respective provider-specific structs directly, removing all the match arms and error cases for "wrong variant" that could never happen
4. **Removed the shared `RepoInfo` enum** from `types.rs` and the public exports
The code is cleaner now - each provider has its own internal representation of repo info, and there's no shared enum that requires impossible error handling paths.
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
The fix compiles. The issue was that `get_remote_name_from_branch_name` was calling `get_remote_from_branch_ref` which uses `repo.branch_remote_name()` - this git2 function requires the branch to have upstream tracking configured. For local branches that haven't been pushed yet (or don't have tracking set), this fails with the error you saw.
The fix changes the behavior to fall back to the default remote (typically "origin") when the branch doesn't have upstream tracking configured. This makes sense because:
1. Most repos only have one remote anyway
2. When creating a PR for a local branch, we need to push to *some* remote - the default is the sensible choice
3. The original code computed `default_remote` but never actually used it as a fallback
Done. Added `get_remote_url_from_branch_or_default` to `GitService` and updated all 4 call sites. The pattern is now:
```rust
let remote_url = deployment
.git()
.get_remote_url_from_branch_or_default(&repo_path, &branch)?;
let git_host = git_host::GitHostService::from_url(&remote_url)?;
```
Two lines instead of four, and the function name makes the fallback behavior explicit.
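A rough sketch of that fallback behaviour using git2 directly (names assumed; the real `GitService` method also resolves the configured default remote rather than hardcoding "origin"):
```rust
use git2::Repository;
use std::path::Path;

fn remote_url_from_branch_or_default(repo_path: &Path, branch: &str) -> Result<String, git2::Error> {
    let repo = Repository::open(repo_path)?;
    let refname = format!("refs/heads/{branch}");
    // branch_remote_name() fails when the branch has no upstream tracking configured,
    // so fall back to "origin" in that case.
    let remote_name = repo
        .branch_remote_name(&refname)
        .ok()
        .and_then(|buf| buf.as_str().map(|s| s.to_string()))
        .unwrap_or_else(|| "origin".to_string());
    let remote = repo.find_remote(&remote_name)?;
    Ok(remote.url().unwrap_or_default().to_string())
}
```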
Done. Removed the unnecessary provider detection checks from both `push_task_attempt_branch` and `force_push_task_attempt_branch`. The push operations now just push directly - git push works with any remote regardless of whether we support it as a PR provider. Auth errors from git push itself (SSH keys, HTTPS credentials) still come through `GitServiceError::GitCLI(...)` and propagate to the frontend the same way.
* All done. Removed the low-value comments:
- "Create git host from the PR URL"
- "Fetch comments"
- "Get the URL for a remote (e.g., "origin")."
- "Azure DevOps-specific repository information"
- "GitHub-specific repository information"
- "Get the remote URL for the branch to detect the provider" (2 places)
- "Get repository info from the GitHub CLI"
- "Get repository info from the Azure CLI"
- "Check authentication status" (2 places)
Kept comments that provide actual value like the fallback behavior explanation on `get_remote_url_from_branch_or_default`.
All done. Removed the low-value comments:
- "Create git host from the PR URL"
- "Fetch comments"
- "Get the URL for a remote (e.g., "origin")."
- "Azure DevOps-specific repository information"
- "GitHub-specific repository information"
- "Get the remote URL for the branch to detect the provider" (2 places)
- "Get repository info from the GitHub CLI"
- "Get repository info from the Azure CLI"
- "Check authentication status" (2 places)
- "Get PR status from URL (auto-detects provider)"
- "High-level errors originating from the Azure CLI."
- "High-level errors originating from the GitHub CLI."
- "Azure DevOps hosting provider implementation"
- "GitHub hosting provider implementation"
- "Newtype wrapper for invoking the `gh` command."
- "Newtype wrapper for invoking the `az` command."
- "Trait for git hosting provider operations (GitHub, Azure DevOps, etc.)"
- All the trait method doc comments (Create a pull request, Get PR status, etc.)
- "Git hosting service - dispatches to the appropriate provider."
- "Create a GitHostService by detecting the provider from a URL."
- Simplified the Azure DevOps pattern comments in detection.rs
* Done. Here's a summary of the fix:
**Problem**: `get_repo_info` used `az repos list --detect true` which returns ALL repos in the Azure DevOps project, then just took the first one. This was wrong for multi-repo projects.
**Solution**:
1. `AzureDevOpsProvider` now stores the remote URL passed to `from_url()`
2. `get_repo_info` now takes the remote URL and filters the repo list to find the matching repo using `remote_urls_match()`
3. Added `remote_urls_match()` helper that normalizes URLs (lowercase, strips trailing `/` and `.git`) for comparison
4. Added `extract_organization_url()` to get the Azure organization URL from a remote URL (supports both `dev.azure.com` and legacy `visualstudio.com` formats)
5. Removed the now-unused `extract_org_from_remote_url()` function
The flow is now:
- Call site gets remote URL via `get_remote_url_from_branch_or_default()`
- Creates `GitHostService::from_url(&remote_url)` which stores the URL on `AzureDevOpsProvider`
- When `get_repo_info` is called, it lists all repos in the project and finds the one matching the stored remote URL
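A sketch of the normalization used for matching, per point 3 above (lowercase, strip trailing `/` and `.git`); the real helper may differ in detail:
```rust
fn normalize_remote_url(url: &str) -> String {
    url.to_lowercase()
        .trim_end_matches('/')
        .trim_end_matches(".git")
        .to_string()
}

fn remote_urls_match(a: &str, b: &str) -> bool {
    normalize_remote_url(a) == normalize_remote_url(b)
}
```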
Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
* Done. Summary of changes:
1. **`pr.rs:get_pr_comments`** - Now gets the remote URL from `workspace_repo.target_branch` instead of using the PR URL to create the `GitHostService`
2. **`azure/cli.rs`**:
- Added `ssh_url` field to `AzRepoListItem`
- Updated `get_repo_info` to detect if the input URL is SSH (`starts_with("git@")`) and compare against the appropriate field (`ssh_url` vs `remote_url`)
- Renamed `remote_urls_match` to `urls_match` since it now handles both HTTPS and SSH URLs
- Added SSH URL test case
* Build passes. The simple fix handles both SSH formats without adding a dependency.
* Build passes. Now `ssh_url` is optional - if it's not in the response and the user has an SSH remote configured, they'll get a clear error message ("No repo found matching remote URL: ...").
* Done. The `urls_match` function now normalizes `ssh://` prefix to scp-style before comparing.
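A sketch of the `ssh://` normalization just described; the exact rewrite rule is an assumption:
```rust
// Rewrite "ssh://git@host/path" into the scp-style "git@host:path" so both SSH
// spellings of the same remote compare equal.
fn normalize_ssh_prefix(url: &str) -> String {
    match url.strip_prefix("ssh://") {
        Some(rest) => rest.replacen('/', ":", 1),
        None => url.to_string(),
    }
}
```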
* Done. Summary of changes:
1. **Removed `remote_url` field from `AzureDevOpsProvider`** - no longer stored on the struct
2. **Updated `GitHostProvider` trait** - added `remote_url: &str` parameter to:
- `create_pr`
- `list_prs_for_branch`
- `get_pr_comments`
3. **Updated `AzureDevOpsProvider`** - methods now accept `remote_url` and pass it to `get_repo_info`
4. **Updated `GitHubProvider`** - methods accept `_remote_url` (unused, since gh CLI auto-detects)
5. **Updated call sites in `pr.rs`** - all three methods now pass `remote_url`:
- `create_pr` passes `&remote_url`
- `list_prs_for_branch` passes `&remote_url`
- `get_pr_comments` passes `&remote_url`
6. **Fixed `attach_existing_pr`** - now uses `workspace_repo.target_branch` instead of `workspace.branch` when getting the remote URL (consistent with other call sites)
* Cleanup script changes for workspace 28eb9564-f95a-42e0-8cf2-5228c2ffc0bf
* Update frontend remote copy (vibe-kanban a7bf13b6)
We added Azure support, but many parts of the frontend still use GitHub-specific wording. Please fix!
* Done. I've addressed the PR comment by:
1. **Renamed `CreatePrError` to `PrError`** - A more generic name that works for both `create_pr` and `attach_existing_pr`
2. **Updated `attach_existing_pr`** to mirror the error handling from `create_pr`:
- Changed return type from `ApiResponse<AttachPrResponse>` to `ApiResponse<AttachPrResponse, PrError>`
- Added error handling for `GitHostService::from_url()` - handles `UnsupportedProvider` and `CliNotInstalled`
- Added error handling for `list_prs_for_branch()` - handles `CliNotInstalled` and `AuthFailed`
3. **Updated `generate_types.rs`** to use the renamed `PrError`
4. **Updated `frontend/src/lib/api.ts`** to use `PrError` instead of `CreatePrError`
5. **Regenerated TypeScript types** and verified all checks pass
3ad03a0a99 | chore: bump version to 0.0.147
64fe120e72 | Explicitly specify the SQLite journal mode (#1882)
95ea78cef6 | Revert "Reduce SQLITE db locks (#1806)" (#1878)
This reverts commit
47facf12ec | Fix worktree path handling on windows (#1859)
af70dd9239 | The implementation is complete. Here's a summary of the changes made: (#1871)
## Summary
I've implemented the GitHub PR status display on `WorkspaceSummary.tsx`. The changes include:
### Backend Changes
1. **`crates/db/src/models/merge.rs`** - Added a new method `get_latest_pr_status_for_workspaces()` that efficiently queries the latest PR status for all workspaces grouped by archived status.
2. **`crates/server/src/routes/task_attempts/workspace_summary.rs`** - Added `pr_status: Option<MergeStatus>` field to the `WorkspaceSummary` struct and included the PR status query in the summary endpoint.
### Frontend Changes
3. **`frontend/src/components/ui-new/hooks/useWorkspaces.ts`** - Added `prStatus` to the `SidebarWorkspace` interface and mapped it in `toSidebarWorkspace()`.
4. **`frontend/src/components/ui-new/primitives/WorkspaceSummary.tsx`** - Added:
- Import for `GitPullRequestIcon`
- `prStatus` prop to the component
- Display logic showing:
- **PR open**: `GitPullRequestIcon` with `text-brand` (orange) color
- **PR merged**: `GitPullRequestIcon` with `text-success` (green) color
- **No PR/closed/unknown**: No icon displayed
5. **`frontend/src/components/ui-new/views/WorkspacesSidebar.tsx`** - Passed the `prStatus` prop to both active and archived workspace summaries.
### Generated Types
6. **`shared/types.ts`** - Auto-generated to include the new `pr_status` field in `WorkspaceSummary`.
527febdc52 | Workspaces FE (#1733)
fe2215ba85 | chore: bump version to 0.0.146
ae852b297f | chore: bump version to 0.0.145
312ace8708 | ignore sqlx version mismatch (#1853)
* Fix migration checksum mismatch
* only fix on windows
0753a4ff94 | Reset the diff view on target-branch change and post rebase (#1800)
* Reset the diff view on target-branch change and post rebase
* Remove unbounded channel
  tokio watch is sufficient for watching git head changes in case of rebase.
* fmt
* use git service
2f496086ea | non-blocking orphan worktree cleanup (#1807)
dcaa8b8769 | fix: copy images to agent_working_dir when set (#1752)
* fix: copy images to agent_working_dir when set
  When agent_working_dir is configured, copy images to <workspace>/<agent_working_dir>/.vibe-images/ instead of <workspace>/.vibe-images/ so that relative paths work correctly for the AI agent. Closes #1727
  Signed-off-by: majiayu000 <1835304752@qq.com>
* optionally append agent working directory in other image routes
---------
Signed-off-by: majiayu000 <1835304752@qq.com>
Co-authored-by: Gabriel Gordon-Hall <ggordonhall@gmail.com>
cefb4fb0a2 | upgrade copilot cli (#1844)
3749be636a | WIP: use Opencode sdk (#1823)
The Opencode ACP server is broken on Windows. Using the Opencode HTTP SDK directly works better.
76feaefdfe | feat: add Exa and Chrome DevTools as one-click MCP server options (Vibe Kanban) (#1801)
* I've added the Exa Code MCP server as a one-click install option. Here's what was configured:
**Changes made to `crates/executors/default_mcp.json`:**
1. **Server configuration** (lines 23-33):
   - Command: `npx -y exa-mcp-server`
   - Tools enabled: `web_search_exa` and `get_code_context_exa` (as requested)
   - Environment variable: `EXA_API_KEY` (required for authentication)
2. **Metadata** (lines 53-58):
   - Name: "Exa"
   - Description: "Web search and code context retrieval powered by Exa AI"
   - URL: https://docs.exa.ai/reference/exa-mcp
   - Icon: `mcp/exa_logo.svg` (already exists)
The Exa MCP server icon was already present in the frontend public assets at `frontend/public/mcp/exa_logo.svg`.
* Done. I've added Chrome DevTools MCP as a one-click install option:
**Server configuration** (lines 34-39):
- Command: `npx chrome-devtools-mcp@latest`
- No environment variables required
**Metadata** (lines 65-70):
- Name: "Chrome DevTools"
- Description: "Browser automation, debugging and performance analysis with Chrome DevTools"
- URL: https://github.com/ChromeDevTools/chrome-devtools-mcp
- Icon: `mcp/chrome_devtools_logo.svg`
* The issue was that `justify-center` and `gap-3` were interfering with Embla Carousel's scroll calculations. Embla uses a specific margin/padding system (`-ml-4` on the content container and `pl-4` on items) to handle spacing and scroll positions. Adding `justify-center` shifts items in a way that breaks the scroll math, making it impossible to scroll to the leftmost items. The fix removes these conflicting classes from `CarouselContent`.
25c6d0a7c0 | Reduce SQLITE db locks (#1806)
Common settings to reduce database locks.
> 1. WAL is significantly faster in most scenarios. 2. WAL provides more concurrency as readers do not block writers and a writer does not block readers. Reading and writing can proceed concurrently. 3. Disk I/O operations tends to be more sequential using WAL. 4. WAL uses many fewer fsync() operations and is thus less vulnerable to problems on systems where the fsync() system call is broken.
https://sqlite.org/wal.html
0f3ee560f3 | Speedup pre-release CI runs (#1804)
edef39c3fb | chore: bump version to 0.0.144
3d3f260e1c | bump Opencode to v1.1.3 (#1798)
* bump opencode to v1.1.3
* use correct working workspace directory for normalize_logs
  use the same directory that was used during agent runtime to ensure make_relative_path works as expected
8b8c6e98f5 | feat(github): Add --body-file support for gh pr create (#1707)
* feat(github): add --body-file support for gh pr create
  Use temp file for PR body when content exceeds 1000 chars or contains newlines, avoiding shell escaping issues and CLI length limits.
  Changes:
  - Add PreparedPrCreateArgs struct for testable argument building
  - Extract should_use_body_file() helper with BODY_FILE_THRESHOLD const
  - Refactor create_pr to use prepare_pr_create_args()
  - Add 36 unit tests covering parsers, body-file logic, and arg building
* Done! Here's a summary of what I've done: I've simplified PR #1707's implementation based on your feedback.
**Changes made on top of PR #1707:**
1. **Always use `--body-file`** - Removed the threshold-based conditional logic (`should_use_body_file`) that decided whether to use `--body` or `--body-file`. Now it always uses `--body-file`, which:
   - Eliminates complexity
   - Fixes shell escaping issues for all bodies (not just long/multiline ones)
   - Avoids command-line length limits universally
2. **Simplified struct** - Changed `_temp_file: Option<NamedTempFile>` to `_body_file: NamedTempFile` since it's no longer optional
3. **Removed all tests** - Deleted all 36 tests since the logic is now trivially simple and better tested through integration
**Final diff from main:**
- `+162 lines` adding: body-file support, PR comment features (from original PR #1707)
- `-7 lines` removing: the old `--body` approach and TODO comment
The branch is ready for review. The existing tests in the `services` crate all pass.
Cleanup script changes for workspace 6534b6a9-68c3-4672-a4e8-aaf6939b6499
Done! Here's what I simplified compared to PR #1707:
**Removed:**
- `PreparedPrCreateArgs` struct
- `prepare_pr_create_args` function
- `should_use_body_file` function with threshold logic
- `BODY_FILE_THRESHOLD` constant
- All 36 tests
**The body-file fix is now just ~10 lines added to `create_pr`:**
```rust
// Write body to temp file to avoid shell escaping and length issues
let body = request.body.as_deref().unwrap_or("");
let mut body_file = NamedTempFile::new()
    .map_err(|e| GhCliError::CommandFailed(format!("Failed to create temp file: {e}")))?;
body_file
    .write_all(body.as_bytes())
    .map_err(|e| GhCliError::CommandFailed(format!("Failed to write body: {e}")))?;
// ... then later:
args.push(OsString::from("--body-file"));
args.push(body_file.path().as_os_str().to_os_string());
```
The PR comment types/methods are kept since other code in the branch depends on them (that's a separate feature from the body-file fix).
Cleanup script changes for workspace 6534b6a9-68c3-4672-a4e8-aaf6939b6499
Done. Removed the comment. The temp file cleanup is handled by Rust's RAII - `NamedTempFile` deletes the file when it goes out of scope at the end of `create_pr`.
---------
Co-authored-by: Alex Netsch <alex@bloop.ai>
17afc0cd03 | Add composer-1 as a model for Cursor agent (#1786)
* Add composer-1 as a model for Cursor agent
* fmt
---------
Co-authored-by: Gabriel Gordon-Hall <ggordonhall@gmail.com>
0478ce06fc | Reset workspace cleanup timer at diff view and IDE open (#1654)
* Add workspace.touch to reset cleanup timer at diff view and ide open
* Touch at ensure_container_exists, rm comments
147c4fd53d | Support GitHub Enterprise and SSH aliases via gh CLI (Vibe Kanban) (#1783)
* All changes have been implemented and tested. Here's a summary of what was done:
Updated the PR URL parsing logic to support GitHub Enterprise hostnames by using the `url` crate instead of searching for `"github.com"` as a string token.
1. **`crates/review/Cargo.toml`** - Added `url = "2.5"` dependency
2. **`crates/review/src/github.rs`** - Rewrote `parse_pr_url` function to use `Url::parse()` and `path_segments()` instead of string splitting. Added two new test cases:
- `test_parse_pr_url_github_enterprise` - Tests GitHub Enterprise URLs
- `test_parse_pr_url_any_host` - Tests that any host with the `/pull/` pattern works
3. **`crates/remote/src/routes/github_app.rs`** - Applied the same pattern to the duplicate `parse_pr_url` function
The new implementation:
- Parses the URL with `Url::parse()`
- Extracts path segments directly (e.g., `["owner", "repo", "pull", "123"]`)
- No longer requires `github.com` as a specific hostname
- Works with any hostname including `github.mycompany.com`, `github.enterprise.corp`, etc.
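A sketch of the hostname-agnostic parsing described above, assuming the function returns owner/repo/number (the real signature may differ):
```rust
use url::Url;

fn parse_pr_url(pr_url: &str) -> Option<(String, String, u64)> {
    let parsed = Url::parse(pr_url).ok()?;
    let segments: Vec<String> = parsed.path_segments()?.map(|s| s.to_string()).collect();
    // Expect "/{owner}/{repo}/pull/{number}" on any host, e.g. github.mycompany.com.
    if segments.len() >= 4 && segments[2] == "pull" {
        let number = segments[3].parse().ok()?;
        return Some((segments[0].clone(), segments[1].clone(), number));
    }
    None
}
```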
Cleanup script changes for workspace e22082a4-ca31-4754-9c1c-2dc5b6991338
All done! Here's a summary of what was implemented:
This PR adds support for GitHub Enterprise hostnames and SSH hostname aliases by delegating to the `gh` CLI instead of parsing URLs with hardcoded `github.com` patterns.
**1. `crates/services/src/services/github/cli.rs`**
- Added `run_in_dir()` helper method to run `gh` commands in a specific directory
- Added `get_repo_info(repo_path)` - uses `gh repo view --json owner,name` to get repo info from a directory
- Added `view_pr_by_url(pr_url)` - uses `gh pr view <url>` to get PR info directly from a URL
**2. `crates/services/src/services/github.rs`**
- Exported `GhCli` and `GhCliError` for use by other modules
- Added `update_pr_status_by_url(pr_url)` method to `GitHubService`
- Replaced regex-based `from_remote_url()` with hostname-agnostic URL parsing using the `url` crate (kept for backward compatibility)
**3. `crates/services/src/services/git.rs`**
- Updated `get_github_repo_info()` to use `GhCli::get_repo_info()` instead of parsing URLs
**4. `crates/services/src/services/pr_monitor.rs`**
- Updated `check_pr_status()` to use `update_pr_status_by_url()` instead of parsing the PR URL
**5. Tests** (`crates/services/tests/git_workflow.rs`)
- Updated existing tests for hostname-agnostic parsing
- Added new tests for GitHub Enterprise and SSH alias URLs
**6. `crates/review/src/github.rs`** (previously done)
- Already updated to use hostname-agnostic URL parsing
**7. `crates/remote/src/routes/github_app.rs`** (previously done)
- Already updated to use hostname-agnostic URL parsing
- **SSH hostname aliases** (e.g., `github-xxx` in `~/.ssh/config`) now work
- **GitHub Enterprise** instances (e.g., `github.mycompany.com`) now work
- **No regex** - removed regex dependency from `github.rs`, using simple URL parsing and `gh` CLI
- **Robust** - `gh` CLI handles all the complexity of authentication and hostname resolution
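A minimal sketch of what delegating to the gh CLI looks like for the `get_repo_info` change in section 1 above (assumed wrapper shape and error type; the real `GhCli` runs asynchronously):
```rust
use std::{path::Path, process::Command};

#[derive(serde::Deserialize)]
struct RepoOwner {
    login: String,
}

#[derive(serde::Deserialize)]
struct RepoView {
    name: String,
    owner: RepoOwner,
}

// `gh repo view --json owner,name` resolves SSH aliases and Enterprise hostnames
// itself, so no URL parsing is needed here.
fn get_repo_info(repo_path: &Path) -> anyhow::Result<RepoView> {
    let output = Command::new("gh")
        .args(["repo", "view", "--json", "owner,name"])
        .current_dir(repo_path)
        .output()?;
    anyhow::ensure!(
        output.status.success(),
        "gh repo view failed: {}",
        String::from_utf8_lossy(&output.stderr)
    );
    Ok(serde_json::from_slice(&output.stdout)?)
}
```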
Cleanup script changes for workspace e22082a4-ca31-4754-9c1c-2dc5b6991338
Done! I've simplified the API by:
1. **Removed** the old `update_pr_status(&repo_info, pr_number)` method that took repo info and PR number separately
2. **Renamed** `update_pr_status_by_url(pr_url)` → `update_pr_status(pr_url)` for a cleaner API
Now there's just one method that takes the PR URL directly, and the `gh` CLI handles all the complexity of hostname resolution, SSH aliases, and GitHub Enterprise automatically.
All tests pass. The fix is complete - I renamed the test from `test_parse_pr_url_any_host` to `test_parse_pr_url_custom_github_host` and changed the URL from a misleading GitLab example to a realistic GitHub Enterprise URL (`github.acme.com`).
Done! I've removed `from_remote_url()` entirely:
1. **Removed** `GitHubRepoInfo::from_remote_url()` method from `crates/services/src/services/github.rs`
2. **Removed** the `use url::Url;` import from that file (since it's no longer needed there)
3. **Removed** both tests that exercised the now-deleted function from `crates/services/tests/git_workflow.rs`
4. **Updated** the import to remove `GitHubRepoInfo` and `GitHubServiceError` which are no longer used in tests
All tests pass and the workspace compiles cleanly. The `url` crate is still used elsewhere in the services crate, so I left it in Cargo.toml.
Cleanup script changes for workspace e22082a4-ca31-4754-9c1c-2dc5b6991338
Done! I've made both cleanups:
1. **Consolidated `run()` and `run_in_dir()`** - Now there's a single `run()` method with an optional `dir: Option<&Path>` parameter. All callers pass `None` for global commands and `Some(path)` when running in a specific directory.
2. **Renamed `view_pr_by_url()` to `view_pr()`** - Removed the old `view_pr(owner, repo, pr_number)` method and renamed `view_pr_by_url(pr_url)` to just `view_pr(pr_url)` for a cleaner API.
All tests pass and the workspace compiles cleanly.
Cleanup script changes for workspace e22082a4-ca31-4754-9c1c-2dc5b6991338
Done! I've reverted the test change in `crates/review/src/github.rs` back to the original `test_parse_pr_url_any_host` with the GitLab URL. The `crates/remote/src/routes/github_app.rs` file had no changes from my work to revert.
All tests pass and the workspace compiles. The changes in this PR are now scoped to just the `services` crate:
- `cli.rs` - consolidated `run()`, renamed `view_pr()`
- `github.rs` - removed `from_remote_url()` and `url` import
- `git.rs` - uses `GhCli::get_repo_info()`
- `git_workflow.rs` - removed tests for `from_remote_url()`
Done. I've reset the review and remote crate files to `origin/main`. The changes to those crates are now reverted and can be submitted in a separate PR. All tests pass and the workspace compiles.
This PR now only contains the services crate changes:
- `cli.rs` - consolidated `run()`, renamed `view_pr()`
- `github.rs` - removed `from_remote_url()` and `url` import
- `git.rs` - uses `GhCli::get_repo_info()`
- `git_workflow.rs` - removed tests for `from_remote_url()`
All done. All three review comments addressed:
1. Removed comment "Optionally runs in a specific directory..." from `cli.rs`
2. Removed doc comment from `update_pr_status` in `github.rs`
3. Replaced helper structs with `serde_json::Value` parsing (matches existing pattern in the file)
Cleanup script changes for workspace e22082a4-ca31-4754-9c1c-2dc5b6991338
Done. Reverted to the helper structs.
All done. Summary of changes:
1. **Removed 3 comments** as requested:
- `git.rs:1599` - removed "Uses `gh repo view` which handles SSH aliases, GitHub Enterprise, etc."
- `cli.rs:202` - removed "This handles GitHub Enterprise and custom hostnames automatically via the `gh` CLI."
- `pr_monitor.rs:101` - removed "Use the URL directly - gh CLI handles all hostname/SSH alias complexity"
2. **Clone question in `github.rs:210`** - Added a comment explaining why the clone is needed (spawn_blocking requires owned values, each retry needs fresh copies)
3. **Fixed `--head {owner}:{branch}` bug** - Changed to `--head branch` only, since the `{owner}:{branch}` syntax is not supported per gh docs
4. **Added 2 unit tests** for `get_repo_info()` JSON parsing:
- `test_parse_repo_info_valid` - tests successful parsing
- `test_parse_repo_info_invalid` - tests error handling
* Done. Addressed the review comment by:
1. Removed `GitService.get_github_repo_info()` facade from `git.rs`
2. Updated 3 callers in `pr.rs` to use `GhCli::new().get_repo_info()` directly
3. Added `GhCliError` to `ApiError` in `error.rs` for proper error handling
4. Removed unused import in `git.rs`
* Done. Now the architecture is cleaner:
- `GhCli` is an internal implementation detail of `GitHubService`
- Callers use `GitHubService.get_repo_info(&path)` instead of touching `GhCli` directly
- Later, a `GitLabService` could have the same `get_repo_info` interface
- Removed `GhCliError` from `ApiError` since errors are now converted to `GitHubServiceError`
* Done.
* Done. `get_repo_info` is now async with `spawn_blocking`, matching the pattern used by other methods like `check_token`.
* Cleanup script changes for workspace e22082a4-ca31-4754-9c1c-2dc5b6991338
* Done. `GhCli` and `GhCliError` are now private to the `github` module.
* Cleanup script changes for workspace e22082a4-ca31-4754-9c1c-2dc5b6991338
875b1068a0 | feat(i18n): add zh-Hant UI translations and locale mapping (#1777)
* feat(i18n): add zh-Hant UI translations and locale mapping
* chore(i18n): format config
* fix(i18n): add zh-Hant config enum
* chore(types): regenerate
2cd4c1b181 | Bump Claude Code, Codex, and Gemini CLI versions (Vibe Kanban) (#1778)
* Done! I've updated the Claude Code version from `2.0.75` to `2.0.76` in `mission-control/crates/executors/src/executors/claude.rs:46`.
* Updated Gemini CLI from version 0.21.1 to 0.22.5 in `mission-control/crates/executors/src/executors/gemini.rs:39`.
* Updated Codex from version 0.75.0 to 0.77.0 in `mission-control/crates/executors/src/executors/codex.rs:213`.
84efb885fb | Update opencode-ai version to 1.0.223 (#1739)
Co-authored-by: Warp <agent@warp.dev>
c0dadac9bd | Build succeeded with git2 v0.19.0. The fix is complete. (#1774)
**Summary:**
- Bumped `git2` from `0.18` to `0.19` in 4 Cargo.toml files
- This brings in libgit2 1.8.1 which supports `index.skipHash`
- Build compiles successfully
f99d2f06f2 | chore: bump version to 0.0.143
3c35b92a97 | Use session IDs for scratches and queued messages (Vibe Kanban) (#1613)
* All changes have been completed. Here's a summary of what was done:
## Summary
Changed both scratches and queued messages to use session IDs instead of workspace IDs:
### Files Modified
1. **`frontend/src/components/tasks/TaskFollowUpSection.tsx`**
   - Changed `useScratch(ScratchType.DRAFT_FOLLOW_UP, workspaceId ?? '')` → `useScratch(ScratchType.DRAFT_FOLLOW_UP, sessionId ?? '')`
2. **`crates/server/src/routes/sessions/queue.rs`**
   - Changed `session.workspace_id` → `session.id` in all three queue operations (queue, cancel, get status)
3. **`crates/services/src/services/queued_message.rs`**
   - Renamed `task_attempt_id` → `session_id` in `QueuedMessage` struct
   - Updated all method parameters and comments
4. **`crates/local-deployment/src/container.rs`**
   - Changed `ctx.workspace.id` → `ctx.session.id` for queue lookups and scratch deletion
   - Updated log messages to reference "session" instead of "workspace"
5. **`shared/types.ts`** (auto-generated)
   - `QueuedMessage.task_attempt_id` → `QueuedMessage.session_id`
* Cleanup script changes for task attempt 4df90bcd-f261-41ca-bac3-8f7c2fc576c5
* Build passes. The fix has been applied. Found and fixed one missed instance in `sessions/mod.rs` where `Scratch::delete` was using `workspace.id` instead of `session.id`.
033ae2f1c6 | chore: bump version to 0.0.142
271fd9a19a | Bump Claude Code to 2.0.75 + disallow AskUserQuestion tool (#1626)
* Done. I've bumped `@anthropic-ai/claude-code` from `2.0.54` to `2.0.75` in `vibe-kanban/crates/executors/src/executors/claude.rs:46`.
* Auto deny AskUserQuestion tool
* Revert "Auto deny AskUserQuestion tool"
  This reverts commit 09be71f13ac3ff6883bf98a9107959cc6ce96726.
* Disallow AskUserQuestion
bcbe7aca69 | chore: bump version to 0.0.141
ce089f2aae | Fix session executor storage to only store base executor name (Vibe Kanban) (#1621)
* The code compiles successfully. Let me provide a summary of what was done:
## Summary
I've implemented the fix for the session executor storage format issue:
### Changes Made
1. **`crates/services/src/services/container.rs` (line 912)**
   - Changed `executor_profile_id.to_string()` to `executor_profile_id.executor.to_string()`
   - This now stores only the base executor name (e.g., `"CLAUDE_CODE"`) instead of the full format with variant (e.g., `"CLAUDE_CODE:ROUTER"`)
2. **`crates/db/migrations/20251220134608_fix_session_executor_format.sql`**
   - Added migration to fix existing erroneous entries
   - Strips the variant suffix from any executor values containing a colon
   - Excludes valid hyphenated values like `dev-server`, `setup-script`, etc.
The fix compiles successfully. This will restore the expected behavior for:
- Capability lookups in `UserMessage.tsx`
- Default profile selection in `CreateAttemptDialog.tsx`
* Simplified. The `LIKE '%:%'` condition is sufficient - it only matches values containing a colon, which are exactly the erroneous `"EXECUTOR:VARIANT"` format entries we need to fix.
ca2b578231 | chore: bump version to 0.0.140
b3c8f2af90 | Add indexes to optimize slow queries (#1618)
* Add indexes to optimize slow queries
* Remove redundant index and run PRAGMA optimize
2f5cef2dff | Add agent working directory setting to run coding agents from repo directory (Vibe Kanban) (#1617)
* Done! The migration has been updated to copy `dev_script_working_dir` values to `agent_working_dir` for existing projects. The change:
**`vibe-kanban/crates/db/migrations/20251219000000_add_agent_working_dir_to_projects.sql`**:
```sql
-- Add column with empty default first
ALTER TABLE projects ADD COLUMN agent_working_dir TEXT DEFAULT '';
-- Copy existing dev_script_working_dir values to agent_working_dir
UPDATE projects SET agent_working_dir = dev_script_working_dir
WHERE dev_script_working_dir IS NOT NULL AND dev_script_working_dir != '';
```
This ensures that when the migration runs, existing projects will have their `agent_working_dir` automatically populated from their existing `dev_script_working_dir` setting.
* Done! The migration has been updated to only copy `dev_script_working_dir` to `agent_working_dir` for single-repo projects. Multi-repo projects will keep `agent_working_dir` empty (None). The key change in the migration:
```sql
UPDATE projects SET agent_working_dir = dev_script_working_dir
WHERE dev_script_working_dir IS NOT NULL
  AND dev_script_working_dir != ''
  AND (SELECT COUNT(*) FROM project_repos WHERE project_repos.project_id = projects.id) = 1;
```
The subquery `(SELECT COUNT(*) FROM project_repos WHERE project_repos.project_id = projects.id) = 1` ensures only projects with exactly one repository get the default value copied.
* Done! All tasks completed:
1. **Resolved the rebase conflict** in `task_attempts.rs` - removed the duplicate `follow_up` function since it was moved to `sessions/mod.rs` on main
2. **Updated `sessions/mod.rs`** with the `working_dir` changes - added the code to get `agent_working_dir` from project settings and pass it to both `CodingAgentFollowUpRequest` and `CodingAgentInitialRequest`
3. **Ran `npm run prepare-db`** - successfully prepared the database and updated sqlx cache
4. **Verified compilation** - all crates compile successfully
The migration now only copies `dev_script_working_dir` to `agent_working_dir` for single-repo projects (projects with exactly 1 repo in the `project_repos` table). Multi-repo projects will keep `agent_working_dir` empty.
* Done. Removed all 6 comments as requested.
* Done! The frontend UI for `agent_working_dir` has been added:
1. Added `agent_working_dir: string` to `ProjectFormState` interface
2. Added `agent_working_dir` to `projectToFormState` function
3. Added `agent_working_dir` to the `handleSave` payload
4. Added the input field after the dev server working directory input
* All tasks are complete. The implementation is finished:
**Summary of changes:**
1. **Project model**: Renamed `agent_working_dir` to `default_agent_working_dir` - this is now a default value for new workspaces
2. **Workspace model**: Added `agent_working_dir` field - this captures the project's default at workspace creation time
3. **Workspace creation**: Updated `create_task_attempt` to fetch the project's `default_agent_working_dir` and pass it to the workspace
4. **Execution sites**: Updated all 4 locations to use `workspace.agent_working_dir` instead of `project.default_agent_working_dir`:
   - `container.rs` (start_workspace)
   - `container.rs` (start_queued_follow_up)
   - `sessions/mod.rs` (follow_up)
   - `task_attempts/pr.rs` (trigger_pr_description_follow_up)
5. **Frontend**: Updated `ProjectSettings.tsx` to use `default_agent_working_dir`
6. **Migrations**: Two migrations added - one to rename the project column and one to add the workspace column
* Cleanup script changes for workspace a87cad74-ed7c-402b-b3f3-413a939a13a9
* Done. The migrations have been squashed into a single migration file `20251219000000_add_agent_working_dir_to_projects.sql` which now handles both the project and workspace table changes. The separate workspace migration file has been deleted and `npm run prepare-db` completes successfully.
* Done. The i18n check passes:
  - No new literal strings introduced
  - No duplicate keys found in JSON files
  - Translation keys are consistent across locales (en, es, ja, ko, zh-Hans)
* All checks pass:
  - `npm run check` ✅ (frontend TypeScript and backend Rust)
  - `npm run prepare-db` ✅ (migrations and sqlx query cache)
* `npm run lint` passes now.
* Done. I've added the logic to `crates/services/src/services/project.rs` in the `add_repository()` function. When a second repository is added to a project (repo count becomes exactly 2), the `default_agent_working_dir` field is automatically cleared by setting it to an empty string.
* Cleanup script changes for workspace a87cad74-ed7c-402b-b3f3-413a939a13a9
* Both checks pass. The fix now correctly:
  1. Counts repos **before** adding the new one
  2. Only clears `default_agent_working_dir` if the previous count was exactly 1 (meaning we're going from 1 to 2 repos)
* Done. The fix now:
  1. Added a dedicated `Project::clear_default_agent_working_dir()` method in `crates/db/src/models/project.rs` that only updates the `default_agent_working_dir` field without touching other fields
  2. Updated `add_repository()` in `crates/services/src/services/project.rs` to use this new method when the repo count goes from 1 to 2
||
|
|
165e832830 |
Auto-generate CLAUDE.md and AGENTS.md in workspaces with @import syntax (Vibe Kanban) (#1614)
* I've implemented the feature. Here's a summary of the changes:
## Summary
Added automatic generation of workspace-level `CLAUDE.md` and `AGENTS.md` files that import configuration from each repository using Claude Code's `@import` syntax.
### Changes Made
**File: `vibe-kanban/crates/local-deployment/src/container.rs`**
1. **Added `create_workspace_config_files()` function** (lines 739-798):
   - Checks each repo for `CLAUDE.md` and `AGENTS.md`
   - Creates workspace-level files with `@repo-name/FILENAME.md` import lines
   - Skips if workspace file already exists (idempotent)
   - Skips if no repos have the config file
   - Silently handles errors (warns but doesn't fail workspace creation)
2. **Called in `create()` method** (lines 954-956):
   - Runs after `copy_files_and_images()` during initial workspace creation
3. **Called in `ensure_container_exists()` method** (lines 1018-1019):
   - Runs after `copy_files_and_images()` for cold restart scenarios
### Example Output
For a workspace with repos `frontend` and `backend` where both have `CLAUDE.md`:
```markdown
# Workspace CLAUDE.md
@frontend/CLAUDE.md
@backend/CLAUDE.md
```
* Done. Removed all the unnecessary comments from the code. The build compiles successfully. |
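For illustration, a hedged sketch of the generation logic described above using only `std::fs`; the real `create_workspace_config_files()` works against the codebase's workspace and repo types, so the parameter shapes and error reporting here are assumptions:

```rust
use std::{fs, path::Path};

/// Sketch: `workspace_dir` contains one subdirectory per repo (named after the repo).
fn create_workspace_config_files(workspace_dir: &Path, repo_names: &[String]) {
    for filename in ["CLAUDE.md", "AGENTS.md"] {
        let target = workspace_dir.join(filename);
        // Idempotent: never overwrite a file that already exists at the workspace root.
        if target.exists() {
            continue;
        }
        // Only import from repos that actually ship the config file.
        let imports: Vec<String> = repo_names
            .iter()
            .filter(|repo| workspace_dir.join(repo.as_str()).join(filename).exists())
            .map(|repo| format!("@{repo}/{filename}"))
            .collect();
        if imports.is_empty() {
            continue;
        }
        let contents = format!("# Workspace {filename}\n{}\n", imports.join("\n"));
        // Best-effort: warn on failure, never abort workspace creation.
        if let Err(err) = fs::write(&target, contents) {
            tracing::warn!("failed to write {}: {err}", target.display());
        }
    }
}
```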
||
|
|
b4fe6d63e5 |
All 10 tests pass. The implementation is complete. (#1616)
## Summary
I modified `vibe-kanban/crates/local-deployment/src/copy.rs` to prevent copy files from overwriting existing files.
**Before:** Files were only skipped if they existed with the same size. Files with different sizes would be overwritten.
**After:** Files are skipped if they exist at all, regardless of size or content.
This ensures user files are never overwritten by the copy operation. |
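A minimal sketch of the new skip rule as applied to a single file; the real `copy.rs` covers more than a per-file copy, so this is illustrative only:

```rust
use std::{fs, io, path::Path};

/// Copy `src` to `dst` only if `dst` does not already exist.
fn copy_file_if_absent(src: &Path, dst: &Path) -> io::Result<()> {
    if dst.exists() {
        // Never overwrite user files, regardless of size or content.
        return Ok(());
    }
    if let Some(parent) = dst.parent() {
        fs::create_dir_all(parent)?;
    }
    fs::copy(src, dst)?;
    Ok(())
}
```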
||
|
|
d0bb5cdb75 |
chore: bump codex (#1615)
* bump codex * no codex default |
||
|
|
6a129d0fa8 |
Add session-scoped routes for follow-up and queue operations (Vibe Kanban) (#1606)
* Yup, I'll hold off and do nothing for now. Let me know when you're ready to proceed with the session routes task, and I'll start exploring the codebase and planning the implementation.
All cherry-pick conflicts resolved:
1. **`sessions/queue.rs`** - Fixed analytics to use `session.id` and `session.workspace_id` (2 locations)
2. **`task_attempts.rs`** - Removed the old `follow_up()` function that was already moved to `sessions/mod.rs`
No remaining conflict markers in the routes directory.
All fixes complete:
1. **`sessions/queue.rs`** ✓ - Fixed analytics to use `session.id` and `session.workspace_id`
2. **`task_attempts.rs`** ✓ - Removed old `follow_up` function (moved to sessions)
3. **`sessions/mod.rs`** ✓ - Fixed to use `find_latest_coding_agent_turn_session_id(pool, session.id)` instead of non-existent method
The session-based `follow_up` now correctly scopes the agent_session_id lookup to the current session rather than across all workspace sessions.
* All done:
1. ✓ `container.rs` - Changed to `find_latest_coding_agent_turn_session_id(pool, ctx.session.id)`
2. ✓ `pr.rs` - Changed to `find_latest_coding_agent_turn_session_id(pool, session.id)`
3. ✓ Deleted `find_latest_agent_session_id_by_workspace` from `execution_process.rs`
All workspace-scoped `agent_session_id` lookups have been converted to session-scoped lookups and the unused method has been removed.
Cleanup script changes for task attempt afc116fb-e639-4150-a299-b05c1f14f61e
Done. Removed the unnecessary `Session::create` call and now using `&ctx.session` directly.
Lint, db
* Done. Simplified the validation:
- Removed the extra `Session::find_by_id` DB call
- Now directly checks `process.session_id != session.id`
- Updated error message to "Process does not belong to this session" |
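As a sketch of what a session-scoped lookup like `find_latest_coding_agent_turn_session_id` might do; the table and column names follow the workspaces/sessions refactor further down this log, but the actual query text and signature are assumptions:

```rust
use sqlx::SqlitePool;
use uuid::Uuid;

/// Latest agent session id, taken only from turns whose execution process belongs
/// to the given session (rather than scanning every session in the workspace).
async fn find_latest_coding_agent_turn_session_id(
    pool: &SqlitePool,
    session_id: Uuid,
) -> Result<Option<String>, sqlx::Error> {
    let agent_session_id: Option<Option<String>> = sqlx::query_scalar(
        r#"
        SELECT t.session_id
        FROM coding_agent_turns t
        JOIN execution_processes ep ON ep.id = t.execution_process_id
        WHERE ep.session_id = $1
        ORDER BY t.created_at DESC
        LIMIT 1
        "#,
    )
    .bind(session_id)
    .fetch_optional(pool)
    .await?;
    // coding_agent_turns.session_id is nullable, so flatten Option<Option<String>>.
    Ok(agent_session_id.flatten())
}
```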
||
|
|
84ea4b71ee | chore: bump version to 0.0.139 | ||
|
|
41151bd172 | fix for single file (#1610) | ||
|
|
fbffcdc0d2 | special case single repo projects for open IDE operation (#1609) | ||
|
|
12829c34c6 | Disable optional logs for git status calls (#1608) | ||
|
|
4188adc2a9 |
Refactor task_attempts to workspaces and sessions (Vibe Kanban) (#1569)
* All tasks are complete. Here's a summary of the refactoring that was done:
The `task_attempts` table has been refactored into `workspaces` and `sessions`:
- Created new migration `20251216142123_refactor_task_attempts_to_workspaces_sessions.sql`
- Renamed `task_attempts` table to `workspaces` (removed `executor` column)
- Created new `sessions` table with `(id, workspace_id, executor, created_at, updated_at)`
- Renamed `executor_sessions` to `coding_agent_turns` (dropped redundant `task_attempt_id`)
- Changed `execution_processes.task_attempt_id` to `session_id`
- Changed `merges.task_attempt_id` to `workspace_id`
- Changed `tasks.parent_task_attempt` to `parent_workspace_id`
- Renamed `crates/db/src/models/task_attempt.rs` → `workspace.rs`
- Created new `crates/db/src/models/session.rs`
- Renamed `executor_session.rs` → `coding_agent_turn.rs`
- Updated `execution_process.rs`:
- Changed `task_attempt_id` field to `session_id`
- Added workspace-level query methods (`find_latest_by_workspace_and_run_reason`, `latest_executor_profile_for_workspace`, `find_latest_external_session_id_by_workspace`, `find_running_dev_servers_by_workspace`, `has_running_non_dev_server_processes_for_workspace`)
- Updated `merge.rs`: Changed `task_attempt_id` to `workspace_id`
- Updated `task.rs`: Changed `parent_task_attempt` to `parent_workspace_id`
- Updated `container.rs`: Changed `start_execution` to take 4 parameters (workspace, session, action, reason)
- Updated `events/`: Changed event types from `TaskAttempt` to `Workspace`
- Updated `pr_monitor.rs`: Changed to use `Workspace` instead of `TaskAttempt`
- Updated `workspace_manager.rs`: Added support for `DbWorkspace` alias
- Updated `task_attempts.rs`: All handlers now create/use sessions for execution processes
- Updated `tasks.rs`: Removed `executor` from `CreateWorkspace`, fixed error type
- Updated setup helpers (codex, cursor, gh_cli) to create sessions
- Updated `error.rs`: Added `SessionError` handling
Cleanup script changes for task attempt 2f672d73-cb46-4515-b1c7-3da8f4402858
Done. All 3 `sqlx::query_scalar()` function calls have been converted to `sqlx::query_scalar!()` macros:
1. `has_running_non_dev_server_processes()` - lines 332-341
2. `has_running_non_dev_server_processes_for_workspace()` - lines 350-361
3. `find_prev_after_head_commit()` - lines 619-634
Key changes:
- Converted `?` placeholders to `$1, $2, $3` positional parameters (sqlx macro style)
- Added `as "count!: i64"` type annotation for COUNT queries
- Used `.flatten()` for the nullable `after_head_commit` column to avoid `Option<Option<String>>`
The one in `crates/remote/src/db/mod.rs` uses dynamic SQL for PostgreSQL's `ALTER ROLE` and cannot be converted to a macro.
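For reference, a hedged example of the macro form for one of the COUNT queries; the JOIN shape and filters are illustrative, and sqlx macros are checked against the project's database at compile time, so this is a sketch rather than the code in `execution_process.rs`:

```rust
use sqlx::SqlitePool;
use uuid::Uuid;

// Illustrative conversion to the compile-time-checked macro: positional $1 parameter
// and an `as "count!: i64"` override so COUNT(*) comes back as a non-null i64.
async fn has_running_non_dev_server_processes_for_workspace(
    pool: &SqlitePool,
    workspace_id: Uuid,
) -> Result<bool, sqlx::Error> {
    let count = sqlx::query_scalar!(
        r#"SELECT COUNT(*) AS "count!: i64"
           FROM execution_processes ep
           JOIN sessions s ON s.id = ep.session_id
           WHERE s.workspace_id = $1
             AND ep.status = 'running'
             AND ep.run_reason != 'devserver'"#,
        workspace_id
    )
    .fetch_one(pool)
    .await?;
    // For nullable scalars (e.g. after_head_commit), fetch_optional(..).await?.flatten()
    // collapses Option<Option<String>> into Option<String>, as noted above.
    Ok(count > 0)
}
```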
Done. Removed the 4 unused functions from `execution_process.rs`:
- `find_running_dev_servers_by_session`
- `has_running_non_dev_server_processes` (the per-session version)
- `delete_by_session_id`
- `latest_executor_profile_for_session`
The workspace-level versions (`has_running_non_dev_server_processes_for_workspace`, `find_running_dev_servers_by_workspace`, `latest_executor_profile_for_workspace`) are still available. Everything builds successfully.
Rebase
Revert event rename
* All done. Here's a summary of the changes:
Created a `TaskAttempt` helper struct in the backend that maintains backwards compatibility with the frontend:
1. **`crates/db/src/models/workspace.rs`**
- Added `TaskAttempt` struct that combines `Workspace` fields + `executor` from the latest `Session`
- Added `TaskAttempt::from_workspace()` to convert a `Workspace` by fetching its latest session's executor
- Added `TaskAttempt::fetch_all()` for API responses
2. **`crates/db/src/models/task.rs`**
- Changed `TaskRelationships.current_workspace` → `current_attempt: TaskAttempt`
3. **`crates/server/src/routes/task_attempts.rs`**
- Updated `get_task_attempts` to return `Vec<TaskAttempt>`
- Updated `get_task_attempt` to return `TaskAttempt`
- Updated `create_task_attempt` to return `TaskAttempt`
4. **`crates/server/src/bin/generate_types.rs`**
- Added `TaskAttempt::decl()` to the type generation list
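A compressed sketch of the helper described in point 1, with a trimmed stand-in for `Workspace`; the real field lists, derives, and query live in `crates/db/src/models/workspace.rs`, so everything here is illustrative:

```rust
use serde::Serialize;
use sqlx::SqlitePool;
use uuid::Uuid;

/// Trimmed stand-in for the real Workspace model (fields are illustrative).
#[derive(Serialize)]
struct Workspace {
    id: Uuid,
    branch: String,
}

/// Backwards-compatible view: Workspace fields plus the executor of the latest Session.
#[derive(Serialize)]
struct TaskAttempt {
    #[serde(flatten)]
    workspace: Workspace,
    executor: Option<String>,
}

impl TaskAttempt {
    async fn from_workspace(pool: &SqlitePool, workspace: Workspace) -> Result<Self, sqlx::Error> {
        // Executor of the most recent session, if any session exists yet.
        let executor: Option<String> = sqlx::query_scalar(
            "SELECT executor FROM sessions WHERE workspace_id = $1 ORDER BY created_at DESC LIMIT 1",
        )
        .bind(workspace.id)
        .fetch_optional(pool)
        .await?
        .flatten();
        Ok(Self { workspace, executor })
    }
}
```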
Fixed references from `parent_task_attempt` to `parent_workspace_id` in:
- `TaskCard.tsx`
- `NoServerContent.tsx`
- `TaskPanel.tsx`
- `TaskFormDialog.tsx`
- `CreateAttemptDialog.tsx`
- `useTaskMutations.ts`
- `ProjectTasks.tsx`
- Frontend now gets `TaskAttempt` type with the `executor` field
- The `TaskRelationships.current_attempt` field provides `TaskAttempt` with executor
- Backend still uses `Workspace` + `Session` internally
- All type checks pass
Fixed. The issue was that the frontend was sending `task_attempt_id` as the query parameter, but the backend expected `workspace_id`. Now the execution processes WebSocket stream should work correctly and the "Loading History" should resolve.
* All done. Here's a summary of the changes:
**Backend:**
1. **Added sessions endpoint** (`GET /api/sessions?workspace_id={uuid}`)
- Created `crates/server/src/routes/sessions.rs`
- Registered in `crates/server/src/routes/mod.rs`
- Added `Session::decl()` to `generate_types.rs`
2. **Removed TaskAttempt from backend**
- Removed `TaskAttempt` struct and methods from `crates/db/src/models/workspace.rs`
- Changed `TaskRelationships.current_attempt` back to `current_workspace: Workspace` in `task.rs`
- Updated route handlers to return `Workspace` instead of `TaskAttempt`
**Frontend:**
1. **Created `TaskAttempt` type** in `frontend/src/types/attempt.ts`
- Defines `TaskAttempt = Workspace & { executor: string }`
- Exports `createTaskAttempt()` helper function
2. **Updated `api.ts`**
- Added `sessionsApi` for fetching sessions
- Added `workspaceToTaskAttempt()` helper that fetches sessions and constructs TaskAttempt
- Updated `attemptsApi.getAll()`, `get()`, and `create()` to internally convert Workspace → TaskAttempt
3. **Updated 20+ frontend files** to import `TaskAttempt` from `@/types/attempt` instead of `shared/types`
- Backend returns `Workspace` directly (no executor field)
- Backend has new `/api/sessions` endpoint to fetch sessions by workspace
- Frontend constructs `TaskAttempt` by combining `Workspace` + latest `Session.executor`
- All type checks pass
* The refactoring is complete. Here's a summary of what was done:
- `TaskAttempt` is now just a type alias for `Workspace` (no executor needed for most uses)
- Added `WorkspaceWithSession = Workspace & { executor: string }` for components that need executor
- Added `createWorkspaceWithSession()` helper function
- `attemptsApi.get()`, `getAll()`, `create()` now return `Workspace` directly (as `TaskAttempt`)
- Added `attemptsApi.getWithSession()` - fetches workspace + session to get executor
- Added `sessionsApi.getByWorkspace()` for fetching sessions
- Added `useTaskAttemptWithSession()` - for single attempt with executor
- Added `useTaskAttemptsWithSessions()` - for list of attempts with executor
Files that access `.executor` now use `WorkspaceWithSession`:
- `TaskPanel.tsx` - Uses `useTaskAttemptsWithSessions` and `useTaskAttemptWithSession`
- `NextActionCard.tsx` - Uses `attemptsApi.getWithSession()`
- `UserMessage.tsx` - Receives `WorkspaceWithSession` prop
- `RetryEditorInline.tsx` - Receives `WorkspaceWithSession` prop
- `DisplayConversationEntry.tsx` - Passes `WorkspaceWithSession` to children
- `VirtualizedList.tsx` - Receives `WorkspaceWithSession` prop
- `TaskAttemptPanel.tsx` - Receives `WorkspaceWithSession` prop
- `ProjectTasks.tsx` - Uses `useTaskAttemptWithSession`
- `FullAttemptLogs.tsx` - Uses `useTaskAttemptWithSession`
- `CreateAttemptDialog.tsx` - Uses `useTaskAttemptsWithSessions`
The 32 other files that only use Workspace fields continue to use `TaskAttempt` (which is just `Workspace`) without any changes needed.
Done! I've simplified the migration from 191 lines down to 106 lines by:
1. **Removed workspaces table rebuild** → simple `ALTER TABLE workspaces DROP COLUMN executor`
2. **Removed merges table rebuild** → simple `ALTER TABLE merges RENAME COLUMN task_attempt_id TO workspace_id`
3. **Removed tasks table rebuild** → simple `ALTER TABLE tasks RENAME COLUMN parent_task_attempt TO parent_workspace_id`
4. **Removed attempt_repos rebuild entirely** → FK auto-updates when `task_attempts` is renamed to `workspaces`
The only table rebuilds remaining are for:
- `execution_processes` - because the FK target changes from workspaces to sessions
- `executor_sessions` → `coding_agent_turns` - because we're dropping the `task_attempt_id` column
All done:
- **npm run lint** - passed (fixed 4 unused import errors)
- **cargo build** - passed
- **npm run build** - passed
- **tsc --noEmit** - passed
Prep db
* Done. I've renamed `TaskAttempt` to `Workspace` in all places where it was used as a type annotation. Here's the summary:
- `types/attempt.ts` - Removed `export type TaskAttempt = Workspace;`
**API Layer:**
- `lib/api.ts` - Changed return types from `TaskAttempt` to `Workspace`
**Hooks:**
- `hooks/useTaskAttempts.ts` - `TaskAttempt[]` → `Workspace[]`
- `hooks/useAttemptCreation.ts` - `TaskAttempt` → `Workspace`
- `hooks/useConversationHistory.ts` - `TaskAttempt` → `Workspace`
- `hooks/useAttempt.ts` - `TaskAttempt` → `Workspace`
**Components:**
- `contexts/ClickedElementsProvider.tsx`
- `components/DiffCard.tsx`
- `components/panels/AttemptHeaderActions.tsx`
- `components/panels/DiffsPanel.tsx`
- `components/tasks/Toolbar/GitOperations.tsx`
- `components/dialogs/tasks/GitActionsDialog.tsx`
- `components/dialogs/tasks/ViewRelatedTasksDialog.tsx`
- `components/dialogs/tasks/CreatePRDialog.tsx`
- `components/ui/actions-dropdown.tsx`
**Type Definitions:**
- `types/modal-args.d.ts`
- `types/modals.ts`
**Pages:**
- `pages/ProjectTasks.tsx`
TypeScript check passes with no errors.
Cleanup script changes for task attempt 2f672d73-cb46-4515-b1c7-3da8f4402858
* The Session type now has `executor: string | null`, which is correct. All changes are complete:
1. **Migration**: `executor TEXT` (no `NOT NULL`)
2. **Rust model**: `executor: Option<String>`
3. **sqlx queries**: Using just `executor` without type annotation (sqlx infers nullable correctly)
4. **All usages**: Wrapped string values in `Some()` where executor is set
5. **TypeScript types**: `executor: string | null`
Everything builds and lints successfully.
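Putting those notes together, the Session model presumably looks roughly like the following; the derives, timestamp types, and exact field list are assumptions (the real model lives in `crates/db/src/models/session.rs`):

```rust
use chrono::{DateTime, Utc};
use serde::{Deserialize, Serialize};
use uuid::Uuid;

#[derive(Debug, Serialize, Deserialize, sqlx::FromRow)]
pub struct Session {
    pub id: Uuid,
    pub workspace_id: Uuid,
    /// Nullable in SQLite (`executor TEXT`), hence Option<String> here and
    /// `executor: string | null` in the generated TypeScript type.
    pub executor: Option<String>,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
}
```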
Done. Removed `Session::parent_workspace` and its unused `Workspace` import.
* Done. The migration has been updated with the sqlx workaround:
1. Added `COMMIT TRANSACTION;` after `PRAGMA foreign_keys = OFF;` to end sqlx's auto-transaction
2. Added `BEGIN TRANSACTION;` to start our own controlled transaction
3. Added `PRAGMA foreign_key_check;` before committing to verify FK integrity
4. Added `COMMIT;` to commit our transaction
5. Added final `BEGIN TRANSACTION;` for sqlx to close gracefully
This ensures the `PRAGMA foreign_keys = OFF` actually takes effect, preventing potential cascade deletes of `execution_process_logs` when `DROP TABLE execution_processes` runs on existing databases with data.
Fixed. The migration now has the correct order:
1. `COMMIT TRANSACTION;` - ends sqlx's auto-transaction
2. `PRAGMA foreign_keys = OFF;` - now takes effect (outside transaction)
3. `BEGIN TRANSACTION;` - starts our controlled transaction
This ensures foreign keys are actually disabled before the table operations, preventing potential cascade deletes of `execution_process_logs` when `DROP TABLE execution_processes` runs.
Done. The migration now only disables foreign keys for the section that needs it (steps 7-8, the table rebuilds with DROP TABLE). Steps 1-6 run in sqlx's normal auto-transaction with FK enforcement on.
Fix slow statement (vibe-kanban b695d0ad)
I think we are missing an index on something, seeing these logs:
```
2025-12-17T15:23:25.297852Z WARN sqlx::query: slow statement: execution time exceeded alert threshold
  db.statement:
    SELECT
      t.id AS "id!: Uuid",
      t.project_id AS "project_id!: Uuid",
      t.title,
      t.description,
      t.status AS "status!: TaskStatus",
      t.parent_workspace_id AS "parent_workspace_id: Uuid",
      t.shared_task_id AS "shared_task_id: Uuid",
      t.created_at AS "created_at!: DateTime<Utc>",
      t.updated_at AS "updated_at!: DateTime<Utc>",
      CASE WHEN EXISTS (
        SELECT 1
        FROM workspaces w
        JOIN sessions s ON s.workspace_id = w.id
        JOIN execution_processes ep ON ep.session_id = s.id
        WHERE w.task_id = t.id
          AND ep.status = 'running'
          AND ep.run_reason IN ('setupscript','cleanupscript','codingagent')
        LIMIT 1
      ) THEN 1 ELSE 0 END AS "has_in_progress_attempt!: i64",
      CASE WHEN (
        SELECT ep.status
        FROM workspaces w
        JOIN sessions s ON s.workspace_id = w.id
        JOIN execution_processes ep ON ep.session_id = s.id
        WHERE w.task_id = t.id
          AND ep.run_reason IN ('setupscript','cleanupscript','codingagent')
        ORDER BY ep.created_at DESC
        LIMIT 1
      ) IN ('failed','killed') THEN 1 ELSE 0 END AS "last_attempt_failed!: i64",
      ( SELECT s.executor
        FROM workspaces w
        JOIN sessions s ON s.workspace_id = w.id
        WHERE w.task_id = t.id
        ORDER BY s.created_at DESC
        LIMIT 1
      ) AS "executor!: String"
    FROM tasks t
    WHERE t.project_id = $1
    ORDER BY t.created_at DESC
  rows_affected=0 rows_returned=202 elapsed=1.281210542s slow_threshold=1s

2025-12-17T15:23:25.350788Z WARN sqlx::query: slow statement: execution time exceeded alert threshold
  db.statement: (same SELECT as above)
  rows_affected=0 rows_returned=202 elapsed=1.333812833s slow_threshold=1s

2025-12-17T15:23:25.401326Z WARN sqlx::query: slow statement: execution time exceeded alert threshold
  db.statement:
    INSERT INTO execution_processes (
      id, session_id, run_reason, executor_action,
      status, exit_code, started_at, completed_at, created_at, updated_at
    ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
  rows_affected=1 rows_returned=0 elapsed=1.383690208s slow_threshold=1s
```
* Address feedback (vibe-kanban 81d8dbfa)
A PR opened by your colleague (https://github.com/BloopAI/vibe-kanban/pull/1569)
got some feedback, let's address it.
```gh-comment
{
"id": "2627479232",
"comment_type": "review",
"author": "ggordonhall",
"body": "```suggestion\r\n-- 3. Migrate data: create one session per workspace\r\nINSERT INTO sessions (id, workspace_id, executor, created_at, updated_at)\r\nSELECT gen_random_uuid(), id, executor, created_at, updated_at FROM workspaces;\r\n```\r\n",
"created_at": "2025-12-17T15:17:50Z",
"url": "https://github.com/BloopAI/vibe-kanban/pull/1569#discussion_r2627479232",
"path": "crates/db/migrations/20251216142123_refactor_task_attempts_to_workspaces_sessions.sql",
"line": 26,
"diff_hunk": "@@ -0,0 +1,121 @@\n+-- Refactor task_attempts into workspaces and sessions\n+-- - Rename task_attempts -> workspaces (keeps workspace-related fields)\n+-- - Create sessions table (executor moves here)\n+-- - Update execution_processes.task_attempt_id -> session_id\n+-- - Rename executor_sessions -> coding_agent_turns (drop redundant task_attempt_id)\n+-- - Rename merges.task_attempt_id -> workspace_id\n+-- - Rename tasks.parent_task_attempt -> parent_workspace_id\n+\n+-- 1. Rename task_attempts to workspaces (FK refs auto-update in schema)\n+ALTER TABLE task_attempts RENAME TO workspaces;\n+\n+-- 2. Create sessions table\n+CREATE TABLE sessions (\n+ id BLOB PRIMARY KEY,\n+ workspace_id BLOB NOT NULL,\n+ executor TEXT,\n+ created_at TEXT NOT NULL DEFAULT (datetime('now', 'subsec')),\n+ updated_at TEXT NOT NULL DEFAULT (datetime('now', 'subsec')),\n+ FOREIGN KEY (workspace_id) REFERENCES workspaces(id) ON DELETE CASCADE\n+);\n+\n+CREATE INDEX idx_sessions_workspace_id ON sessions(workspace_id);\n+\n+-- 3. Migrate data: create one session per workspace (using workspace.id as session.id for simplicity)\n+INSERT INTO sessions (id, workspace_id, executor, created_at, updated_at)\n+SELECT id, id, executor, created_at, updated_at FROM workspaces;"
}
```
```gh-comment
{
"id": "2627515578",
"comment_type": "review",
"author": "ggordonhall",
"body": "Why not rename `attempt_repos` to `workspace_repos` here now that `attempt` is a legacy concept?",
"created_at": "2025-12-17T15:27:21Z",
"url": "https://github.com/BloopAI/vibe-kanban/pull/1569#discussion_r2627515578",
"path": "crates/db/migrations/20251216142123_refactor_task_attempts_to_workspaces_sessions.sql",
"line": 118,
"diff_hunk": "@@ -0,0 +1,121 @@\n+-- Refactor task_attempts into workspaces and sessions\n+-- - Rename task_attempts -> workspaces (keeps workspace-related fields)\n+-- - Create sessions table (executor moves here)\n+-- - Update execution_processes.task_attempt_id -> session_id\n+-- - Rename executor_sessions -> coding_agent_turns (drop redundant task_attempt_id)\n+-- - Rename merges.task_attempt_id -> workspace_id\n+-- - Rename tasks.parent_task_attempt -> parent_workspace_id\n+\n+-- 1. Rename task_attempts to workspaces (FK refs auto-update in schema)\n+ALTER TABLE task_attempts RENAME TO workspaces;\n+\n+-- 2. Create sessions table\n+CREATE TABLE sessions (\n+ id BLOB PRIMARY KEY,\n+ workspace_id BLOB NOT NULL,\n+ executor TEXT,\n+ created_at TEXT NOT NULL DEFAULT (datetime('now', 'subsec')),\n+ updated_at TEXT NOT NULL DEFAULT (datetime('now', 'subsec')),\n+ FOREIGN KEY (workspace_id) REFERENCES workspaces(id) ON DELETE CASCADE\n+);\n+\n+CREATE INDEX idx_sessions_workspace_id ON sessions(workspace_id);\n+\n+-- 3. Migrate data: create one session per workspace (using workspace.id as session.id for simplicity)\n+INSERT INTO sessions (id, workspace_id, executor, created_at, updated_at)\n+SELECT id, id, executor, created_at, updated_at FROM workspaces;\n+\n+-- 4. Drop executor column from workspaces\n+ALTER TABLE workspaces DROP COLUMN executor;\n+\n+-- 5. Rename merges.task_attempt_id to workspace_id\n+DROP INDEX idx_merges_task_attempt_id;\n+DROP INDEX idx_merges_open_pr;\n+ALTER TABLE merges RENAME COLUMN task_attempt_id TO workspace_id;\n+CREATE INDEX idx_merges_workspace_id ON merges(workspace_id);\n+CREATE INDEX idx_merges_open_pr ON merges(workspace_id, pr_status)\n+WHERE merge_type = 'pr' AND pr_status = 'open';\n+\n+-- 6. Rename tasks.parent_task_attempt to parent_workspace_id\n+DROP INDEX IF EXISTS idx_tasks_parent_task_attempt;\n+ALTER TABLE tasks RENAME COLUMN parent_task_attempt TO parent_workspace_id;\n+CREATE INDEX idx_tasks_parent_workspace_id ON tasks(parent_workspace_id);\n+\n+-- Steps 7-8 need FK disabled to avoid cascade deletes during DROP TABLE\n+-- sqlx workaround: end auto-transaction to allow PRAGMA to take effect\n+-- https://github.com/launchbadge/sqlx/issues/2085#issuecomment-1499859906\n+COMMIT;\n+\n+PRAGMA foreign_keys = OFF;\n+\n+BEGIN TRANSACTION;\n+\n+-- 7. 
Update execution_processes to reference session_id instead of task_attempt_id\n+-- (needs rebuild because FK target changes from workspaces to sessions)\n+DROP INDEX IF EXISTS idx_execution_processes_task_attempt_created_at;\n+DROP INDEX IF EXISTS idx_execution_processes_task_attempt_type_created;\n+\n+CREATE TABLE execution_processes_new (\n+ id BLOB PRIMARY KEY,\n+ session_id BLOB NOT NULL,\n+ run_reason TEXT NOT NULL DEFAULT 'setupscript'\n+ CHECK (run_reason IN ('setupscript','codingagent','devserver','cleanupscript')),\n+ executor_action TEXT NOT NULL DEFAULT '{}',\n+ status TEXT NOT NULL DEFAULT 'running'\n+ CHECK (status IN ('running','completed','failed','killed')),\n+ exit_code INTEGER,\n+ dropped INTEGER NOT NULL DEFAULT 0,\n+ started_at TEXT NOT NULL DEFAULT (datetime('now', 'subsec')),\n+ completed_at TEXT,\n+ created_at TEXT NOT NULL DEFAULT (datetime('now', 'subsec')),\n+ updated_at TEXT NOT NULL DEFAULT (datetime('now', 'subsec')),\n+ FOREIGN KEY (session_id) REFERENCES sessions(id) ON DELETE CASCADE\n+);\n+\n+-- Since we used workspace.id as session.id, the task_attempt_id values map directly\n+INSERT INTO execution_processes_new (id, session_id, run_reason, executor_action, status, exit_code, dropped, started_at, completed_at, created_at, updated_at)\n+SELECT id, task_attempt_id, run_reason, executor_action, status, exit_code, dropped, started_at, completed_at, created_at, updated_at\n+FROM execution_processes;\n+\n+DROP TABLE execution_processes;\n+ALTER TABLE execution_processes_new RENAME TO execution_processes;\n+\n+-- Recreate execution_processes indexes\n+CREATE INDEX idx_execution_processes_session_id ON execution_processes(session_id);\n+CREATE INDEX idx_execution_processes_status ON execution_processes(status);\n+CREATE INDEX idx_execution_processes_run_reason ON execution_processes(run_reason);\n+\n+-- 8. Rename executor_sessions to coding_agent_turns and drop task_attempt_id\n+-- (needs rebuild to drop the redundant task_attempt_id column)\n+CREATE TABLE coding_agent_turns (\n+ id BLOB PRIMARY KEY,\n+ execution_process_id BLOB NOT NULL,\n+ session_id TEXT,\n+ prompt TEXT,\n+ summary TEXT,\n+ created_at TEXT NOT NULL DEFAULT (datetime('now', 'subsec')),\n+ updated_at TEXT NOT NULL DEFAULT (datetime('now', 'subsec')),\n+ FOREIGN KEY (execution_process_id) REFERENCES execution_processes(id) ON DELETE CASCADE\n+);\n+\n+INSERT INTO coding_agent_turns (id, execution_process_id, session_id, prompt, summary, created_at, updated_at)\n+SELECT id, execution_process_id, session_id, prompt, summary, created_at, updated_at\n+FROM executor_sessions;\n+\n+DROP TABLE executor_sessions;\n+\n+-- Recreate coding_agent_turns indexes\n+CREATE INDEX idx_coding_agent_turns_execution_process_id ON coding_agent_turns(execution_process_id);\n+CREATE INDEX idx_coding_agent_turns_session_id ON coding_agent_turns(session_id);\n+\n+-- 9. attempt_repos: no changes needed - FK auto-updated when task_attempts renamed to workspaces"
}
```
```gh-comment
{
"id": "2627694792",
"comment_type": "review",
"author": "ggordonhall",
"body": "Maybe there's a better name than `external_session_id` here? `agent_session_id`? ",
"created_at": "2025-12-17T16:16:24Z",
"url": "https://github.com/BloopAI/vibe-kanban/pull/1569#discussion_r2627694792",
"path": "crates/db/src/models/execution_process.rs",
"line": 685,
"diff_hunk": "@@ -618,4 +680,34 @@ impl ExecutionProcess {\n )),\n }\n }\n+\n+ /// Find latest coding_agent_turn session_id by workspace (across all sessions)\n+ pub async fn find_latest_external_session_id_by_workspace("
}
```
```gh-comment
{
"id": "2627707446",
"comment_type": "review",
"author": "ggordonhall",
"body": "```suggestion\r\n pub async fn cleanup_workspace(db: &DBService, workspace: &Workspace) {\r\n```",
"created_at": "2025-12-17T16:19:31Z",
"url": "https://github.com/BloopAI/vibe-kanban/pull/1569#discussion_r2627707446",
"path": "crates/local-deployment/src/container.rs",
"line": 146,
"diff_hunk": "@@ -142,20 +143,20 @@ impl LocalContainerService {\n map.remove(id)\n }\n \n- pub async fn cleanup_attempt_workspace(db: &DBService, attempt: &TaskAttempt) {\n- let Some(container_ref) = &attempt.container_ref else {\n+ pub async fn cleanup_workspace_container(db: &DBService, workspace: &Workspace) {"
}
```
```gh-comment
{
"id": "2627756192",
"comment_type": "review",
"author": "ggordonhall",
"body": "Update `mcp` nomenclature",
"created_at": "2025-12-17T16:31:49Z",
"url": "https://github.com/BloopAI/vibe-kanban/pull/1569#discussion_r2627756192",
"path": "crates/server/src/mcp/task_server.rs",
"line": 352,
"diff_hunk": "@@ -350,10 +349,9 @@ impl TaskServer {\n project_id: ctx.project.id,\n task_id: ctx.task.id,\n task_title: ctx.task.title,\n- attempt_id: ctx.task_attempt.id,\n- attempt_branch: ctx.task_attempt.branch,\n+ attempt_id: ctx.workspace.id,"
}
```
```gh-comment
{
"id": "2628161769",
"comment_type": "review",
"author": "ggordonhall",
"body": "update, and similar in other events",
"created_at": "2025-12-17T18:27:47Z",
"url": "https://github.com/BloopAI/vibe-kanban/pull/1569#discussion_r2628161769",
"path": "crates/server/src/routes/task_attempts.rs",
"line": 1335,
"diff_hunk": "@@ -1295,7 +1332,7 @@ pub async fn start_dev_server(\n serde_json::json!({\n \"task_id\": task.id.to_string(),\n \"project_id\": project.id.to_string(),\n- \"attempt_id\": task_attempt.id.to_string(),\n+ \"attempt_id\": workspace.id.to_string(),"
}
```
```gh-comment
{
"id": "2628194289",
"comment_type": "review",
"author": "ggordonhall",
"body": "Ugly, but we should rename this struct to avoid confusion with the more general concept of a workspace. Ideas...\r\n\r\n- `WorktreeContainer`\r\n...\r\n...\r\n\r\nChatGPT?",
"created_at": "2025-12-17T18:36:30Z",
"url": "https://github.com/BloopAI/vibe-kanban/pull/1569#discussion_r2628194289",
"path": "crates/services/src/services/workspace_manager.rs",
"line": 3,
"diff_hunk": "@@ -1,6 +1,6 @@\n use std::path::{Path, PathBuf};\n \n-use db::models::{repo::Repo, task_attempt::TaskAttempt};\n+use db::models::{repo::Repo, workspace::Workspace as DbWorkspace};"
}
```
```gh-comment
{
"id": "2628198036",
"comment_type": "review",
"author": "ggordonhall",
"body": "We could add a BE route for this, and similar hooks where we're aggregating this information on the fly",
"created_at": "2025-12-17T18:37:46Z",
"url": "https://github.com/BloopAI/vibe-kanban/pull/1569#discussion_r2628198036",
"path": "frontend/src/hooks/useTaskAttempts.ts",
"line": 43,
"diff_hunk": "@@ -16,10 +20,36 @@ export function useTaskAttempts(taskId?: string, opts?: Options) {\n const enabled = (opts?.enabled ?? true) && !!taskId;\n const refetchInterval = opts?.refetchInterval ?? 5000;\n \n- return useQuery<TaskAttempt[]>({\n+ return useQuery<Workspace[]>({\n queryKey: taskAttemptKeys.byTask(taskId),\n queryFn: () => attemptsApi.getAll(taskId!),\n enabled,\n refetchInterval,\n });\n }\n+\n+/**\n+ * Hook for components that need executor field for all attempts.\n+ * Fetches all attempts and their sessions in parallel.\n+ */\n+export function useTaskAttemptsWithSessions(taskId?: string, opts?: Options) {\n+ const enabled = (opts?.enabled ?? true) && !!taskId;\n+ const refetchInterval = opts?.refetchInterval ?? 5000;\n+\n+ return useQuery<WorkspaceWithSession[]>({\n+ queryKey: taskAttemptKeys.byTaskWithSessions(taskId),\n+ queryFn: async () => {\n+ const attempts = await attemptsApi.getAll(taskId!);\n+ // Fetch sessions for all attempts in parallel"
}
```
|
||
|
|
8a689ae4cb |
feat: trigger code review via !reviewfast PR comment (Vibe Kanban) (#1604)
* All tasks are complete. Here's a summary of the implementation:
## Implementation Complete
I've implemented the `!reviewfast` comment trigger feature. Here's what was changed:
### Files Modified
1. **`crates/remote/src/db/reviews.rs`** (lines 253-279)
   - Added `has_pending_review_for_pr()` method to check if a review is already in progress for a given PR
2. **`crates/remote/src/routes/github_app.rs`**
   - Added import for `ReviewRepository` (line 21)
   - Added `TriggerReviewContext` and `PrMetadata` structs (lines 800-816)
   - Added `try_trigger_pr_review()` shared helper function (lines 818-928)
   - Refactored `handle_pull_request_event` to use the shared helper (lines 930-982)
   - Added `handle_issue_comment_event` handler (lines 984-1039)
   - Wired up `"issue_comment"` event in `handle_webhook` match (line 657)
3. **`crates/remote/.sqlx/query-7373b3a43a7dd6c5d77c13b5094bb01a63e2902a89dec683659644dd80eb6990.json`**
   - Added sqlx query cache for the new `has_pending_review_for_pr` query
### How It Works
When a user comments exactly `!reviewfast` on a PR:
1. The `issue_comment` webhook is received
2. The handler validates:
   - It's a new comment (action = "created")
   - It's on a PR (not an issue)
   - The body is exactly `!reviewfast`
   - The commenter is not a bot
3. The shared `try_trigger_pr_review()` function then:
   - Checks the installation exists and isn't suspended
   - Checks the repository has reviews enabled
   - Checks no review is already pending for this PR
   - Checks R2 and review worker are configured
   - Fetches PR details from GitHub API
   - Spawns an async task to process the review
### Deployment Note
The GitHub App must be configured to receive `issue_comment` webhook events in the GitHub App settings.
* Cleanup script changes for task attempt fc1ea8e0-fa6d-435d-b569-a6bbbff3c4d2
* update review comment
* Yes, you need to update your GitHub App to receive `issue_comment` webhook events.
**Steps:**
1. Go to your GitHub App settings: `https://github.com/settings/apps/YOUR-APP-NAME`
2. Navigate to **Permissions & events** → **Subscribe to events**
3. Check the **Issue comments** checkbox
4. Save changes
Without this, GitHub won't send the `issue_comment` webhook when someone comments `!reviewfast` on a PR.
* Fixed. The tip is now always shown when the GitHub App is installed, regardless of whether repositories have finished loading.
* Update copy |
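A rough sketch of the comment-validation step (step 2 above), with trimmed payload structs standing in for the real webhook types; only the checks listed above are shown, and the field shapes follow GitHub's `issue_comment` payload rather than this codebase's types:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct IssueCommentEvent {
    action: String,
    issue: Issue,
    comment: Comment,
}

#[derive(Deserialize)]
struct Issue {
    // Present only when the "issue" is actually a pull request.
    pull_request: Option<serde_json::Value>,
    number: u64,
}

#[derive(Deserialize)]
struct Comment {
    body: String,
    user: User,
}

#[derive(Deserialize)]
struct User {
    #[serde(rename = "type")]
    user_type: String,
}

/// Returns the PR number to review when the comment should trigger a review.
fn review_trigger(event: &IssueCommentEvent) -> Option<u64> {
    if event.action != "created" {
        return None; // only brand-new comments
    }
    if event.issue.pull_request.is_none() {
        return None; // plain issues never trigger reviews
    }
    if event.comment.body != "!reviewfast" {
        return None; // body must be exactly the trigger phrase
    }
    if event.comment.user.user_type == "Bot" {
        return None; // ignore bot comments to avoid loops
    }
    Some(event.issue.number)
}
```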
||
|
|
adf759fbbe |
feat: add configurable auto-collapse for large diffs (#1587)
* perf: remove unnecessary useEffects
* feat: add settings for default diff type collapsing
* feat: add setting for max line default collapse
* refactor: use object for default collapse config storage
* fix: use diff changes for max line count
* refactor: use consistent ids as fallback for id-less diffs
* chore: ran formatter
* chore: revert configurability and use fe defaults for auto-collapse
* Debug None additions/deletions for diff (vibe-kanban 72a2a541)
crates/utils/src/diff.rs /api/task-attempts/.../diff/ws returns diffs with "additions": null, "deletions": null
* use backend additions/deletions
---------
Co-authored-by: Louis Knight-Webb <louis@bloop.ai> |
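As a hedged illustration of "use backend additions/deletions" (not the actual `crates/utils/src/diff.rs` code), a simple counter over unified-diff text that always yields concrete numbers instead of nulls:

```rust
/// Count added and deleted lines in a unified diff, skipping the file headers.
fn count_changes(unified_diff: &str) -> (usize, usize) {
    let mut additions = 0;
    let mut deletions = 0;
    for line in unified_diff.lines() {
        // "+++"/"---" are file headers, not content changes.
        if line.starts_with("+++") || line.starts_with("---") {
            continue;
        }
        if line.starts_with('+') {
            additions += 1;
        } else if line.starts_with('-') {
            deletions += 1;
        }
    }
    (additions, deletions)
}
```

The frontend's auto-collapse default can then compare `additions + deletions` against a line-count threshold instead of guessing from possibly-null fields.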
||
|
|
a282bbdae4 |
Real-time sync for Projects (#1512)
* Real-time sync for Projects
* Do not create project in a transaction
Update hooks trigger before the transaction is committed, which causes insert events to be dismissed because the row isn't found
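A minimal sketch of the contrast described above, assuming a sqlx `SqlitePool` and an illustrative `projects` column list: with no explicit transaction, the INSERT is committed as soon as the statement finishes, so a hook consumer that re-reads the row afterwards will find it.

```rust
use sqlx::SqlitePool;
use uuid::Uuid;

/// Illustrative only (not the real project-creation code): execute directly on the
/// pool instead of inside a transaction, so the row is visible to other connections
/// by the time the insert event is processed.
async fn create_project(pool: &SqlitePool, id: Uuid, name: &str) -> Result<(), sqlx::Error> {
    sqlx::query("INSERT INTO projects (id, name) VALUES ($1, $2)")
        .bind(id)
        .bind(name)
        .execute(pool)
        .await?;
    Ok(())
}
```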