fd210419cdcc149d6499c8ed508ef54d32c53ef1
230 Commits
| Author | SHA1 | Message | Date |
|---|---|---|---|
| | fd210419cd | chore: bump version to 0.0.138 | |
| | 37af711712 | chore: bump version to 0.0.137 | |
| | 22ff27d8fd | feat: multi repo projects (#1516) | |

Full squashed commit message for `22ff27d8fd`, feat: multi repo projects (#1516):
* configure multiple repositories per project
* Move repo selection to frontend for create PR (#1436)
* Inline ensure worktree path (#1440)
  * Inline ensure_worktree_path in task_attempts
  * Inline ensure_worktree_path in images and setups, remove ensure_worktree_path
* chore: use `find_repos_for_attempt` where possible (#1442)
  * use find_repos_for_attempt rather than find_repos_for_project where possible
  * remove Repo::find_by_attempt_id
* change target branch backend (#1454)
* feat: multi repo branch rename (#1456)
  * Rename branches
  * Update crates/services/src/services/git.rs
  Co-authored-by: Alex Netsch <alex@bloop.ai>
* Fix project display name being used as path (#1464)
* Multi repo merge (#1470)
  * cleanup create PR
  * Merge for multi repo projects
* Multi repo support for rebase (vibe-kanban) (#1472)
  All tasks completed successfully. Here's a summary of the changes made:
  **Summary**: added multi-repo support for rebase, following the same pattern as merge (commit 483060e92).
  **Files Modified**
  **Backend:**
  1. `crates/server/src/routes/task_attempts.rs`:
     - Added a `repo_id: Uuid` field to the `RebaseTaskAttemptRequest` struct (line 92)
     - Updated the `rebase_task_attempt` function to:
       - Fetch the specific `AttemptRepo` using `find_by_attempt_and_repo_id()`
       - Fetch the `Repo` using `Repo::find_by_id()`
       - Build `worktree_path` as `workspace_path.join(&repo.name)`
       - Use the specific repo's target branch
       - Update only that repo's target branch via `AttemptRepo::update_target_branch()` instead of `update_all_target_branches()`
       - Pass `worktree_path` to `rebase_branch()` instead of `workspace_path`
  **Frontend:**
  2. `frontend/src/hooks/useRebase.ts`:
     - Added `repoId: string` to the `RebaseMutationArgs` type
     - Pass `repo_id` in the request data
  3. `frontend/src/components/tasks/Toolbar/GitOperations.tsx`:
     - Updated `handleRebaseWithNewBranchAndUpstream` to get `repoId` via `getSelectedRepoId()` and pass it to the rebase action
  **Auto-generated:**
  4. `shared/types.ts`: regenerated with `repo_id: string` in `RebaseTaskAttemptRequest`
* All done. Both review comments have been addressed:
  1. **Inlined `current_target_branch`**: replaced the separate variable with `unwrap_or_else(|| attempt_repo.target_branch.clone())` calls directly
  2. **Removed `update_all_target_branches`**: deleted the now-unused function from `crates/db/src/models/attempt_repo.rs` since it's no longer needed after the multi-repo changes
  All checks pass.
* Fix worktree name (#1483)
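To make the new request shape concrete, here is a minimal TypeScript sketch of the per-repo rebase call described above. Only the `repo_id` field is confirmed by the summary; the endpoint path and the plain-`fetch` transport are illustrative assumptions (the real client lives in `frontend/src/lib/api.ts`, with types generated into `shared/types.ts`).

```ts
// Hypothetical client call for the multi-repo rebase described above.
// Only repo_id is documented; any other request fields are elided.
interface RebaseTaskAttemptRequest {
  repo_id: string; // UUID of the repo whose worktree should be rebased
}

async function rebaseAttempt(attemptId: string, repoId: string): Promise<void> {
  const body: RebaseTaskAttemptRequest = { repo_id: repoId };
  // The route below is an assumption, shown only for illustration.
  const res = await fetch(`/api/task-attempts/${attemptId}/rebase`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Rebase failed: ${res.status}`);
}
```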
* Add multi-repo support for rebase conflict handling (Vibe Kanban) (#1487)
  * All checks pass. I've added multi-repo support for the conflict handling (auto fix rebase) functionality. Here's what was changed:
    1. Added an `AbortConflictsRequest` struct with a `repo_id: Uuid` field
    2. Updated the `abort_conflicts_task_attempt()` handler to:
       - Accept `repo_id` in the request body
       - Look up the specific repo by ID
       - Build the correct worktree path: `workspace_path.join(&repo.name)`
       - Pass the repo-specific worktree path to `abort_conflicts()`
       - Added `AbortConflictsRequest` to the list of exported types
    On the frontend:
    1. **`frontend/src/lib/api.ts`**:
       - Updated `abortConflicts()` to accept a `repoId` parameter and send it in the request body
       - Fixed the `getBranchStatus()` return type from `BranchStatus[]` to `RepoBranchStatus[]`
       - Added the `RepoBranchStatus` import, removed the unused `BranchStatus` import
    2. **`frontend/src/hooks/useAttemptConflicts.ts`**:
       - Added a `repoId` parameter to the hook
       - Pass `repoId` to the API call
    3. **`frontend/src/lib/conflicts.ts`**:
       - Added an optional `repoName` parameter to `buildResolveConflictsInstructions()`
       - Added a `repoName` parameter to `formatConflictHeader()`
       - Conflict resolution prompts now include repo context (e.g., "Rebase conflicts while rebasing 'branch' onto 'main' in repository 'my-repo'.")
    4. **`frontend/src/components/tasks/follow-up/FollowUpConflictSection.tsx`**:
       - Updated the `branchStatus` prop type from `BranchStatus[]` to `RepoBranchStatus[]`
       - Extract `repo_id` from the first repo status and pass it to `useAttemptConflicts()`
    5. **`frontend/src/components/tasks/TaskFollowUpSection.tsx`**:
       - Pass `repo_name` to `buildResolveConflictsInstructions()` for multi-repo context in AI prompts
    Cleanup script changes for task attempt b833fb22-7a04-4c56-b82d-1afaa1074e78
  * The type check passes. Now the `abortConflicts` API uses the generated `AbortConflictsRequest` type, which will catch any type mismatches if the backend type changes.
  * Done. Both components now find the first repo that actually has conflicts instead of just using the first repo in the array. The type check passes.
  * Done. Removed both comments as requested.
* Multi-repo support for restoring commits (Vibe Kanban) (#1490)
  * I've implemented multi-repo support for restoring commits. Here's a summary of the changes made:
    1. **`crates/server/src/routes/execution_processes.rs`**
       - Added an import for `ExecutionProcessRepoState`
       - Added a new endpoint, `get_execution_process_repo_states`, that returns the per-repo commit states for an execution process
       - Added the `/repo-states` route to the router
    2. **`crates/server/src/bin/generate_types.rs`**: added `ExecutionProcessRepoState::decl()` to export the type to TypeScript
    3. **`frontend/src/lib/api.ts`**: added an import for `ExecutionProcessRepoState` and a `getRepoStates` method to `executionProcessesApi`
    4. **`frontend/src/components/dialogs/tasks/RestoreLogsDialog.tsx`**
       - Updated the `useEffect` hook to fetch repo states via the new API instead of trying to access `before_head_commit` directly from the execution process
       - Uses the first repo's `before_head_commit` for display (consistent with how merge handles multi-repo)
    5. **`shared/types.ts`**: auto-generated to include the `ExecutionProcessRepoState` type
    The implementation follows the same pattern used for merge in commit 483060e92, where the first repo is selected for operations that need a single repo (like displaying commit information in the restore dialog).
    Cleanup script changes for task attempt fefd6bd6-25e3-4775-b6af-c11ad3c06715
  * I've implemented multi-repo support for the restore dialog. Here are the changes:
    1. **`frontend/src/lib/api.ts`**
       - Changed the `getBranchStatus` return type from `BranchStatus[]` to `RepoBranchStatus[]` (which includes `repo_id` and `repo_name` fields)
       - Removed the unused `BranchStatus` import
    2. **`frontend/src/hooks/useRetryProcess.ts`**
       - Changed the `branchStatus` type in `RetryProcessParams` from `BranchStatus | undefined` to `RepoBranchStatus[] | undefined`
       - Updated the caller to no longer pass `attemptId` (no longer needed)
    3. **`frontend/src/components/NormalizedConversation/RetryEditorInline.tsx`**: changed `branchStatus: branchStatus?.[0]` to `branchStatus: branchStatus` to pass the full array instead of just the first element
    4. **`frontend/src/components/dialogs/tasks/RestoreLogsDialog.tsx`**
       - Changed the `branchStatus` prop type from `BranchStatus | undefined` to `RepoBranchStatus[] | undefined`
       - Removed `attemptId` from props (no longer used)
       - Changed state from a single `targetSha` to an array, `repoStates: ExecutionProcessRepoState[]`
       - Added a `repoInfo` useMemo that joins repo states with branch status to get repo names
       - Added aggregated values: `anyDirty`, `totalUncommitted`, `totalUntracked`, `anyNeedsReset`, `repoCount`
       - Updated the UI to show "(X repos)" in the title when there are multiple repos, list each repo with its name and target commit SHA, and aggregate uncommitted/untracked counts across all repos
       - Single-repo projects work exactly as before and show one commit SHA
       - Multi-repo projects show all repos being restored with their respective target commits, e.g.:

         ```
         Reset worktree (2 repos)
         Your worktrees will be restored to these commits.
         🔀 repo-a: abc1234
         🔀 repo-b: def5678
         ```

       - Uncommitted changes warnings aggregate across all repos
    Revert "Let me provide a summary of the changes made:" (this reverts commit 14eea8717b422bdd913c9ef54def44e6297d2efe)
  * I've completed all the changes. The rebase conflict is resolved and the multi-repo support is implemented. Here's a summary:
    - Changed the `getBranchStatus` return type from `BranchStatus[]` to `RepoBranchStatus[]`
    - Changed the import from `BranchStatus` to `RepoBranchStatus`
    - Changed `branchStatus` in `RetryProcessParams` from `BranchStatus | undefined` to `RepoBranchStatus[] | undefined`
    - Removed `attemptId` from the dialog call
    - Changed `branchStatus: branchStatus?.[0]` to `branchStatus: branchStatus` to pass the full array
    - Changed the `branchStatus` prop from `BranchStatus | undefined` to `RepoBranchStatus[] | undefined`
    - Removed `attemptId` from props (no longer needed)
    - Changed state from a single `targetSha` to an array, `repoStates: ExecutionProcessRepoState[]`
    - Added a `repoInfo` useMemo to join repo states with branch status by `repo_id`
    - Added aggregated values: `anyDirty`, `totalUncommitted`, `totalUntracked`, `repoCount`
    - Updated the UI to show all repos with their target commits when `repoCount > 1`
    The TypeScript check couldn't run in this worktree due to the pnpm workspace setup, but the implementation follows the same pattern that was working before. The main change is that the dialog now shows all repos being restored instead of just the first one.
  * The frontend type check passes. All rebase conflicts have been resolved:
    1. **`frontend/src/lib/api.ts`**: kept the `AbortConflictsRequest` import and typed the `abortConflicts` function; removed the unused `BranchStatus` import
    2. **`frontend/src/hooks/useAttemptConflicts.ts`**: uses an `{ repo_id: repoId }` object parameter
    3. **`frontend/src/components/tasks/TaskFollowUpSection.tsx`**: removed the comment conflict marker, kept the HEAD code
    4. **`frontend/src/components/tasks/follow-up/FollowUpConflictSection.tsx`**: removed the comment conflict marker, kept the HEAD code
    The rebase completed successfully and the frontend type check passes.
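For illustration, a minimal sketch of the repo-aware conflict header added in #1487 above. The function name `formatConflictHeader` and the optional `repoName` parameter come from the summary; the exact signature is an assumption.

```ts
// Sketch of the repo-aware conflict header in frontend/src/lib/conflicts.ts.
// The real signature may differ; the output matches the example given above.
function formatConflictHeader(branch: string, target: string, repoName?: string): string {
  const where = repoName ? ` in repository '${repoName}'` : '';
  return `Rebase conflicts while rebasing '${branch}' onto '${target}'${where}.`;
}

// formatConflictHeader('feature-x', 'main', 'my-repo')
// => "Rebase conflicts while rebasing 'feature-x' onto 'main' in repository 'my-repo'."
```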
* All done! Here's a summary of the changes made to add multi-repo support for push and force push: (#1491)
  1. **`crates/server/src/routes/task_attempts.rs`**:
     - Added a `PushTaskAttemptRequest` struct with a `repo_id: Uuid` field
     - Updated `push_task_attempt_branch` to accept `Json<PushTaskAttemptRequest>` and use the repo's worktree path
     - Updated `force_push_task_attempt_branch` to accept `Json<PushTaskAttemptRequest>` and use the repo's worktree path
  2. **`crates/server/src/bin/generate_types.rs`**: added `PushTaskAttemptRequest::decl()` to export the type to TypeScript
  3. **`frontend/src/lib/api.ts`**: updated `push()` and `forcePush()` to accept a `PushTaskAttemptRequest` with `repo_id`
  4. **`frontend/src/hooks/usePush.ts`**:
     - Added a `PushParams` type with `repoId: string`
     - Updated the mutation to accept params and pass `repo_id` to the API
     - Added params to the error callback for use in `ForcePushDialog`
  5. **`frontend/src/hooks/useForcePush.ts`**:
     - Added a `ForcePushParams` type with `repoId: string`
     - Updated the mutation to accept params and pass `repo_id` to the API
  6. **`frontend/src/components/dialogs/git/ForcePushDialog.tsx`**: added `repoId` to props and passes it to `forcePush.mutateAsync()`
  7. **`frontend/src/hooks/useGitOperations.ts`**: updated the push error handler to pass `repoId` to `ForcePushDialog.show()`
  8. **`frontend/src/components/tasks/Toolbar/GitOperations.tsx`**: updated `handlePushClick` to get `repoId` and pass it to `git.actions.push()`
  Cleanup script changes for task attempt 7fea7193-f072-4b7d-8650-e85ef88f0e9f
* Add multi-repo support for PR comments (Vibe Kanban) (#1500)
  * All tasks are complete. Here's a summary of the changes made:
    **`crates/server/src/routes/task_attempts/pr.rs`:**
    - Added an import for `Query` from axum
    - Added a new type, `GetPrCommentsQuery { repo_id: Uuid }`
    - Modified the `get_pr_comments` function to accept a `Query(query): Query<GetPrCommentsQuery>` parameter, use `AttemptRepo::find_by_attempt_and_repo_id()` to look up the specific repo, use `Repo::find_by_id()` to get the repo path, and use `repo.path` instead of `get_first_repo_path()`
    **`crates/server/src/bin/generate_types.rs`:** added `GetPrCommentsQuery::decl()` to export the type to TypeScript
    **`frontend/src/lib/api.ts`:** updated `getPrComments` to accept a `repoId: string` parameter and added the query string `?repo_id=${encodeURIComponent(repoId)}`
    **`frontend/src/hooks/usePrComments.ts`:** added a `repoId` parameter to the hook and the query key; updated the enabled condition to require both `attemptId` and `repoId`
    **`frontend/src/components/dialogs/tasks/GitHubCommentsDialog.tsx`:** added `repoId: string` to `GitHubCommentsDialogProps` and passed `repoId` to the `usePrComments` hook
    **`frontend/src/components/tasks/TaskFollowUpSection.tsx`:** extract `repoId` from `branchStatus?.[0]?.repo_id`, pass `repoId` to `GitHubCommentsDialog.show()`, and added `branchStatus` to the useCallback dependencies
    Cleanup script changes for task attempt 1fc865dd-4297-4eb6-9265-09f102b92252
* Extend merges to handle repo ID (vibe-kanban 3dd2880f)
  In crates/db/src/models/merge.rs, make sure the merge row and related structs have a `repo_id` field to support multi repo projects. This is needed to support e.g. fetching PR comments correctly; see the last few commits.
* The backend compiles successfully. The fix is complete. Fixed the review comment by changing `get_pr_comments` to use `Merge::find_by_task_attempt_and_repo_id()` instead of `Merge::find_latest_by_task_attempt_id()`. This ensures the PR comments are fetched for the specific repo, not just the latest merge across all repos.
* All done. Here's a summary of the changes:
  1. **`crates/server/src/routes/task_attempts.rs`**: updated `rename_branch` to check ALL repos for open PRs using `find_by_task_attempt_id` instead of just the latest merge
  2. **`crates/db/src/models/merge.rs`**: removed the `find_latest_by_task_attempt_id` function entirely
* All done. The frontend now uses the `useAttemptRepo` hook with `repos[0]?.id` for repo selection, matching the pattern used by merge, rebase, push, and other git operations in `GitOperations.tsx`.
* Done. The changes:
  1. Removed `firstRepoStatus = branchStatus?.[0]` (line 82)
  2. Updated the merged PR check to iterate over ALL repos using `branchStatus?.some()`
  3. Updated the useMemo dependency from `firstRepoStatus?.merges` to `branchStatus`
* Done. Added the `getSelectedRepoId` helper function to `TaskFollowUpSection.tsx`, matching the pattern used in `GitOperations.tsx`. This makes it easier to add repo selection in the future: when that feature is added, only the helper function needs to be updated.
* Done. The lint errors are fixed. Both the frontend and backend linting pass now. The `getSelectedRepoId` helper function is now wrapped in `useCallback` with `[repos]` as its dependency, matching the pattern used in `GitOperations.tsx`. A sketch of the pattern follows.
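A minimal sketch of the `getSelectedRepoId` pattern just described: the helper currently returns the first repo and is memoized on `[repos]` so the `react-hooks` lint is satisfied. The `Repo` shape is a stand-in for the generated type in `shared/types.ts`.

```ts
import { useCallback } from 'react';

type Repo = { id: string; name: string }; // stand-in for the generated type

function useSelectedRepoId(repos: Repo[] | undefined) {
  // Single point to extend once real repo selection lands; for now it
  // mirrors the repos[0]?.id default used by the git operations.
  return useCallback(() => repos?.[0]?.id, [repos]);
}
```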
* feat: override `ProjectRepos` (#1475)
  * override repos at a task_attempt level
  * simplify mcp
  * refer to repoIds on frontend; separate api call to register a repo obj
  * consolidate ProjectFormDialog
* Move repo path migration to rust backend (#1507)
* Add repo selection to git toolbar and fix project create loop (Vibe Kanban) (#1505)
  * Conflicts are resolved in both files. I'm going to `git add` them and continue the rebase using a non-interactive editor (`GIT_EDITOR=true`) so it won't hang.
  * **Rebase Complete**
    - Resolved conflicts in `frontend/src/hooks/useAttemptRepo.ts` by keeping the new repo-selection API (`repos`, `selectedRepoId`, `setSelectedRepoId`) and typing the query as `RepoWithTargetBranch[]`.
    - Resolved conflicts in `frontend/src/lib/api.ts` by keeping both the `RepoWithTargetBranch` and `RepositoryBranches` imports (no conflict markers left).
    - Continued the rebase non-interactively with `GIT_EDITOR=true`; the rebase finished successfully.
    - Ran `pnpm -C frontend run check`; fixed a post-rebase TS error by re-adding the missing `GitBranch` import used by `repoApi.getBranches` in `frontend/src/lib/api.ts`.
    - Typecheck now passes and `git status` is clean. If you want, I can run `pnpm -C frontend run lint` or a workspace test sweep next.
  * Done. I made three small UI tweaks to improve readability.
    **What Changed**
    - Removed the dropdown caret/arrow in the repo selector button (`frontend/src/components/tasks/RepoSelector.tsx`): dropped the `ArrowDown` icon and its import, so the selector is just the folder icon + repo name.
    - Removed the arrow separator that came right after the repo chip in the toolbar (`frontend/src/components/tasks/Toolbar/GitOperations.tsx`): deleted the first `ArrowRight` that sat between the repo chip and the task-branch chip.
    - Replaced the gear icon with a clearer "switch" icon and added spacing before branches (`frontend/src/components/tasks/Toolbar/GitOperations.tsx`): swapped `Settings` → `ArrowLeftRight` for both the repo-change and target-branch-change buttons, and added `ml-2` to the task branch chip so there's visible separation from the repo selector area.
    **Checks**
    - `pnpm -C frontend run check` passes.
    If you want a different icon (e.g., `ChevronsUpDown`, `Pencil`, `Repeat`), tell me which lucide icon you prefer and I'll swap it.
  * Cleanup script changes for task attempt 24ce3817-354c-48f7-a66c-3fb9e6fd019a
  * That behavior lines up with a create-loop in the project creation dialog. I found a likely culprit and patched it.
    **Root Cause**
    - `frontend/src/components/dialogs/projects/ProjectFormDialog.tsx` auto-runs `handlePickRepo()` in a `useEffect` whenever `modal.visible` is true.
    - After the multi-repo changes, the `createProject` mutation object was causing `handlePickRepo` to be re-created on rerenders while the modal stayed visible.
    - The effect dependency `[modal.visible, handlePickRepo]` then retriggered repeatedly, spamming `createProject.mutate(...)` and creating tons of projects.
    **Fix** (sketched below)
    - Added a `hasStartedCreateRef` guard so we only call `handlePickRepo()` once per dialog open.
    - Reset the guard when the modal closes.
    - Narrowed the `handlePickRepo` deps to `createProject.mutate` (stable) instead of the whole mutation object.
    The patch is in `frontend/src/components/dialogs/projects/ProjectFormDialog.tsx`.
    **Status**
    - `pnpm -C frontend run check` passes.
    **What you should do now**
    - Try creating a project again; you should see exactly one POST.
    - If you want to clean up the 1500 accidental projects, tell me whether they're local or remote and I can help script a safe bulk delete (with a dry-run list first).
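A minimal sketch of the once-per-open guard described in the fix above, with hypothetical names; the real change lives in `ProjectFormDialog.tsx` and guards `handlePickRepo()` rather than a generic callback.

```ts
import { useEffect, useRef } from 'react';

// Run `run` exactly once each time `visible` flips to true, and re-arm
// the guard when the dialog closes, so rerenders can't retrigger it.
function useRunOncePerOpen(visible: boolean, run: () => void) {
  const hasStartedRef = useRef(false);
  useEffect(() => {
    if (visible && !hasStartedRef.current) {
      hasStartedRef.current = true;
      run();
    }
    if (!visible) {
      hasStartedRef.current = false;
    }
  }, [visible, run]);
}
```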
  * Adjusted to match what you described.
    **UI Tweaks**
    - Kept the right-side icon (target branch change) as the original gear: in `frontend/src/components/tasks/Toolbar/GitOperations.tsx`, the target-branch change button now uses `Settings` again.
    - Changed only the icon after the project/repo chip to something more "selectable": swapped that button to `ChevronsUpDown`, which reads more like a picker/change control.
    - Increased spacing between the project/repo area and the task/branch chips: bumped the task branch chip margin from `ml-2` → `ml-3`.
    **Checks**
    - `pnpm -C frontend run check` passes.
    If the spacing still feels tight on your screen size, tell me roughly how much more (e.g., "double it" / "a bit more") and I'll nudge to `ml-4` or add a `mr-*` on the repo group.
  * Cleanup script changes for task attempt 24ce3817-354c-48f7-a66c-3fb9e6fd019a
  * Ran lint and fixed the warning.
    **Lint Fix**
    - `pnpm -C frontend run lint` failed with a `react-hooks/exhaustive-deps` warning in `frontend/src/components/dialogs/projects/ProjectFormDialog.tsx:58`.
    - Cause: the `useCallback` referenced `createProject.mutate`, but ESLint wanted the full `createProject` object as a dep.
    - Fix: pulled out a stable `createProjectMutate` variable and used that inside the callback/deps.
    The patch is in `frontend/src/components/dialogs/projects/ProjectFormDialog.tsx`.
    **Status**
    - `pnpm -C frontend run lint` now passes with zero warnings.
  * Done — the task/branch group is now right-aligned in horizontal layout.
    **Change**
    - In `frontend/src/components/tasks/Toolbar/GitOperations.tsx`, I split the left row into:
      - Left: repo chip + change button.
      - Right (horizontal only): task branch chip → arrow → target branch chip + gear.
    - Implemented by wrapping the branch chips in a container that uses `ml-auto` when `layout !== 'vertical'`. Vertical layout keeps the old left-flow behavior.
    **Result**
    - Maximizes spacing between the project/repo and task/branch selectors without changing their contents.
    - Icons remain: repo change uses `ChevronsUpDown`, target branch change keeps `Settings`.
    **Checks**
    - `pnpm -C frontend run check` passes.
    - `pnpm -C frontend run lint` passes.
    If you want the right group to wrap sooner or never wrap, I can tune the flex/whitespace classes.
  * Cleanup script changes for task attempt 24ce3817-354c-48f7-a66c-3fb9e6fd019a
  * Fixed — right-alignment should now actually take effect.
    **What I changed**
    - In `frontend/src/components/tasks/Toolbar/GitOperations.tsx`, updated the horizontal grid template to give the left column flexible width: `grid-cols-[auto_minmax(0,1fr)_auto]` → `grid-cols-[minmax(0,1fr)_auto_auto]`.
    - With that, the left "branch flow" container stretches, so the inner branch group's `ml-auto` can push it to the right.
    **Checks**
    - `pnpm -C frontend run check` passes.
    - `pnpm -C frontend run lint` passes.
    Take another look in the UI; you should see the repo chip on the left and the branch chips snapped to the right with maximum space between. If it's still tight at narrow widths, I can add a wrap breakpoint so branches drop to a second line sooner.
  * Got it. I'll make the branch group right-align only in horizontal layout *and* when `repos.length > 1`. For a single repo, it'll stay left-flow with a small left margin. I'll patch `frontend/src/components/tasks/Toolbar/GitOperations.tsx` accordingly and re-run check/lint.
  * Done.
    **Behavior now**
    - Horizontal layout:
      - If `repos.length > 1`: the branch group uses `ml-auto` → right-aligned.
      - If `repos.length <= 1`: the branch group uses `ml-3` → stays left-flow with a bit of spacing.
    - Vertical layout unchanged.
    The change is in `frontend/src/components/tasks/Toolbar/GitOperations.tsx`.
    **Checks**
    - `pnpm -C frontend run check` passes.
    - `pnpm -C frontend run lint` passes.
  * The `gh pr edit` call succeeded, but my body quoting was unsafe in bash (backticks triggered command substitution). I'm going to re-apply the title/body using a `--body-file` heredoc so it's exact. Proposed final content:
    **Title**
    Add repo selection to git toolbar and fix project create loop (Vibe Kanban)
    **Description**
    This PR completes the frontend side of multi-repo support by letting users choose which repository an attempt operates on, and by ensuring the branch/status UI uses that selection. It also fixes a regression where the create-project dialog could fire repeatedly.
    - Added repo selection UX for git operations:
      - New `RepoSelectionContext` to share the selected repo across git-related components.
      - New `RepoSelector` dropdown and `ChangeRepoDialog` modal for choosing a repo when an attempt has multiple repositories.
      - Updated `GitOperations`, `GitActionsDialog`, `TaskFollowUpSection`, and `ProjectTasks` to read/write the selected repo via the context and `useAttemptRepo`.
      - Disabled/enabled git buttons based on the selected repo's branch status instead of aggregated status.
    - Made project branches multi-repo aware:
      - `projectsApi.getBranches` now returns `RepositoryBranches[]` (per-repo branches) instead of flattening into a single list.
      - `useProjectBranches` and `ProjectTasks` now select the branch list for the currently selected repo.
    - UI polish for clarity:
      - Simplified the repo picker chip, removed the extra caret, and replaced the repo-change icon with a clearer selector icon.
      - Tuned spacing, with conditional right-alignment of branch chips only when multiple repos exist.
    - Fixed the create-project spam regression:
      - Guarded the auto-launching repo picker in `ProjectFormDialog` so it runs once per modal open, preventing repeated `createProject` mutations.
      - Adjusted hook deps to satisfy `eslint-plugin-react-hooks`.
    Multi-repo attempts require users to choose which repo to rebase/merge/push against, and the UI must reflect branch/status information for that specific repo. Without a shared selection, components defaulted to the first repo or mixed aggregated data, causing incorrect button states and confusing branch display. The project-creation guard addresses a rerender-triggered loop introduced during this work.
    - Selection defaults to the first repo, but persists across git dialogs through the context.
    - The toolbar only shows repo selection controls when `repos.length > 1`.
    - Branch/status lookups use `repo_id` matching to keep all operations scoped correctly.
    This PR was written using [Vibe Kanban](https://vibekanban.com). I'll update the PR now with a safe heredoc.
  * I switched the repo/project picker in the Git toolbar to an inline dropdown (no modal).
    **What I changed**
    - `frontend/src/components/tasks/Toolbar/GitOperations.tsx`
      - Removed the `ChangeRepoDialog` flow and its handler.
      - Rendered `RepoSelector` directly in the toolbar, so clicking the repo chip opens a dropdown.
      - Kept it visible for any attempt with repos, but the dropdown only matters when multiple repos exist; branch chips still right-align only when `repos.length > 1`.
      - Disabled the selector while an attempt is running, to match the old "can't change mid-run" behavior.
    - `frontend/src/components/tasks/RepoSelector.tsx`: added an optional `disabled` prop and wired it to the trigger button.
    - Cleaned up the unused dialog: removed the export in `frontend/src/components/dialogs/index.ts` and deleted `frontend/src/components/dialogs/tasks/ChangeRepoDialog.tsx`.
    **Checks**
    - `pnpm -C frontend run check` passes.
    - `pnpm -C frontend run lint` passes.
    **Note**
    You currently have uncommitted changes from this tweak (`git status` shows 4 modified/deleted files). I didn't commit because you haven't asked me to; once you're happy, you can `git add`/`git commit`/`git push` and the PR will update. If you want me to also refactor `useAttemptRepo` into a shared selection (so picking a repo in the toolbar updates other panels), say so and I'll tackle that next.
  * Done — both tweaks are in.
    **Repo selector icon**
    - `frontend/src/components/tasks/RepoSelector.tsx`: added the `ChevronsUpDown` indicator back. It only renders when `repos.length > 1`, so single-repo attempts just show the repo chip without a dropdown cue.
    **Centered branch + status layout**
    - `frontend/src/components/tasks/Toolbar/GitOperations.tsx`: reworked the horizontal layout to match your sketch:
      - Left: repo selector.
      - Center (flex-1, `justify-center`): `[task branch] → [target branch]`, followed immediately by the git status chips.
      - Right: git action buttons.
    - Vertical layout is unchanged.
    - Extracted `branchChips` and `statusChips` into reusable nodes so the markup stays consistent.
    **Checks**
    - `pnpm -C frontend run check` passes.
    - `pnpm -C frontend run lint` passes.
    Give it a quick look in the UI; spacing/centering should now line up with `[repo] [task branch] [target branch] [git status] [git actions]`.
  * Cleanup script changes for task attempt 24ce3817-354c-48f7-a66c-3fb9e6fd019a
* Fix branch selector repo state (vibe-kanban 0f20535c)
  When selecting a repo, the branch selector needs to update accordingly and only show branches available on the selected repo. Check the latest commits for relevant files and changes.
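A minimal sketch of the per-repo branch lookup this commit calls for, assuming the `RepositoryBranches[]` shape mentioned in the PR description above; field names beyond `repo_id` are illustrative.

```ts
import { useMemo } from 'react';

type RepositoryBranches = { repo_id: string; branches: string[] }; // illustrative shape

// Only the branches of the currently selected repo should feed the selector.
function useBranchesForRepo(
  all: RepositoryBranches[] | undefined,
  selectedRepoId: string | undefined
): string[] {
  return useMemo(
    () => all?.find((r) => r.repo_id === selectedRepoId)?.branches ?? [],
    [all, selectedRepoId]
  );
}
```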
* multi repo target selector (#1513)
* feat: per repo git state (#1514)
  * multi repo target selector
  * per-repo pr state
  * query per iter
* Move per-repo scripts from Project to ProjectRepo (Vibe Kanban) (#1515)
  * The implementation is complete. Here's a summary of all the changes made to move the script fields from `Project` to `ProjectRepo`:
    - Created `crates/db/migrations/20251212111429_move_scripts_to_project_repos.sql`: migrates the script columns from the `projects` table to `project_repos` and removes them from `projects`
    - **`crates/db/src/models/project_repo.rs`**: added the script fields (`setup_script`, `dev_script`, `cleanup_script`, `copy_files`, `parallel_setup_script`), an `UpdateProjectRepo` struct, and an `update()` method
    - **`crates/db/src/models/project.rs`**: removed all script fields from `Project`, `CreateProject`, and `UpdateProject`
    - **`crates/services/src/services/container.rs`**: updated `start_attempt()` to get scripts from project_repos and run them in parallel
    - **`crates/local-deployment/src/container.rs`**: updated copy_files execution to be per-repo
    - **`crates/server/src/routes/projects.rs`**: added `GET/PUT/DELETE` handlers for `/projects/{id}/repositories/{repo_id}`
    - **`crates/server/src/routes/task_attempts.rs`**: updated the setup/cleanup/dev script handlers to use per-repo scripts
    - **`crates/server/src/bin/generate_types.rs`**: added `ProjectRepo` and `UpdateProjectRepo` type exports
    - **`frontend/src/lib/api.ts`**: added `getRepository()` and `updateRepository()` methods
    - **`frontend/src/pages/settings/ProjectSettings.tsx`**: updated to separate project name editing from scripts editing, add a repo dropdown in the Scripts card, and save scripts per-repository with a dedicated save button
    - **`frontend/src/components/tasks/TaskDetails/preview/NoServerContent.tsx`**: updated to save dev scripts to the first repository
    - **`frontend/src/components/NormalizedConversation/NextActionCard.tsx`**: updated to always show the dev server button
    - **`frontend/src/components/panels/PreviewPanel.tsx`**: updated to always show the dev server button
    - **`frontend/src/components/tasks/TaskFollowUpSection.tsx`**: updated to always allow running scripts
    - **`frontend/src/components/dialogs/projects/ProjectFormDialog.tsx`**: removed the script fields from create project
  * Cleanup script changes for task attempt ab6892e9-754c-4033-99e0-1bf13ea64df7
  * Done! The fix is complete. The `cleanup_actions_for_repos` method now chains each cleanup script as a separate `ExecutorAction` using the `append_action()` method, instead of combining them into one bash script with `join("\n\n")`. This ensures each script runs as its own distinct action in the execution chain.
  * Cleanup script changes for task attempt ab6892e9-754c-4033-99e0-1bf13ea64df7
  * Done! The migrations have been squashed:
    1. Deleted `20251212111429_move_scripts_to_project_repos.sql`
    2. Modified `20251209000000_add_project_repositories.sql` to add the script columns (`setup_script`, `dev_script`, `cleanup_script`, `copy_files`, `parallel_setup_script`) to the `project_repos` table creation, copy scripts from `projects` to `project_repos` during the migration INSERT, and remove the script columns from the `projects_new` table creation
    3. Regenerated the SQLx cache
    4. Verified compilation passes
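To illustrate the per-repo scripts API, a sketch of a `PUT` against the `/projects/{id}/repositories/{repo_id}` route added above. The field list follows the summary (note that `dev_script` is later moved back to the project level); the exact generated `UpdateProjectRepo` type lives in `shared/types.ts`, and the real client call is `projectsApi.updateRepository()`.

```ts
// Hypothetical direct call, shown for the request shape only.
interface UpdateProjectRepo {
  setup_script: string | null;
  cleanup_script: string | null;
  copy_files: string | null;
  parallel_setup_script: boolean;
}

async function saveRepoScripts(projectId: string, repoId: string, update: UpdateProjectRepo) {
  const res = await fetch(`/api/projects/${projectId}/repositories/${repoId}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(update),
  });
  if (!res.ok) throw new Error(`Failed to update repo scripts: ${res.status}`);
}
```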
  * Done! The `dev_script` has been successfully moved from `ProjectRepo` back to `Project`. All compilation passes. Here's a summary of the changes:
    **Migration** (`20251209000000_add_project_repositories.sql`): `dev_script` stays on the `projects` table (not moved to `project_repos`)
    **Backend changes:**
    - `Project` struct: added a `dev_script: Option<String>` field
    - `UpdateProject`: made fields optional with `#[ts(optional)]` for TypeScript
    - `ProjectRepo`/`ProjectRepoWithName`/`UpdateProjectRepo`: removed the `dev_script` field
    - `start_dev_server`: now uses `project.dev_script` instead of `first_project_repo.dev_script`
    **Frontend changes:**
    - `ProjectSettings.tsx`: the dev script is now in the project-level form, not the per-repo scripts
    - `NoServerContent.tsx`: saves the dev script to the project via `projectsApi.update()`
  * Done! Both review comments have been addressed:
    1. **Migration file**: removed the note comment `-- Note: dev_script stays on projects table (project-level, not per-repo)`
    2. **container.rs**: removed the confusing/incorrect comment. The old comment said "For simplicity, we still run these in parallel but they complete before next_action", which was misleading. Changed it to simply say "Sequential setup script - starts before coding agent", which accurately describes what happens (the script starts before the coding agent is started, but doesn't block it).
  * Done! The fix is complete. Sequential setup scripts now properly use `next_action` chaining.
    **How it works now:**
    1. **Parallel setup scripts** (`parallel_setup_script = true`): started immediately with `next_action: None`; they run independently alongside the coding agent
    2. **Sequential setup scripts** (`parallel_setup_script = false`): collected and chained via `next_action`:
       - Chain: `setup1 → setup2 → ... → coding_agent → cleanup`
       - Only one `start_execution` call, with the first setup as the root
       - Each step triggers the next via `try_start_next_action()` when it completes
  * Done! The refactoring is complete. Here's a summary of the changes:
    1. **`cleanup_actions_for_repos`**: updated the signature to take `&[ProjectRepoWithName]` instead of `&[(String, String)]`; it filters repos with cleanup scripts internally
    2. **`setup_action_for_repo`**: new helper method that builds a setup script action for a single repo (used for parallel execution)
    3. **`build_sequential_setup_chain`**: new helper method that chains sequential setup scripts → coding agent via `next_action`
    4. **`start_attempt`**: refactored to use the new helpers; if ALL repos have `parallel_setup_script = true`, each setup is started independently and then the coding agent starts, while if ANY repo has `parallel_setup_script = false`, ALL setups are chained → coding agent via `next_action`
    5. **Updated call sites** in `local-deployment/container.rs` and `server/task_attempts.rs` to pass `&project_repos` directly instead of extracting tuples
  * Done! The refactoring is complete. Here's a summary of the changes:
    1. **`cleanup_actions_for_repos`**: changed the return type from `Option<Box<ExecutorAction>>` to `Option<ExecutorAction>`; callers now box when needed
    2. **`setup_actions_for_repos`**: new helper method that mirrors `cleanup_actions_for_repos` but for setup scripts with `ScriptContext::SetupScript`
    3. **Updated all call sites** to use `.map(Box::new)` when the action is used as `next_action`: `start_attempt` in `container.rs`, `run_with_message` in `local-deployment/container.rs`, and `run_with_message` in `task_attempts.rs`
    4. **Refactored handlers** to use the new helpers instead of manually building action chains: the `run_setup_script` handler now uses `setup_actions_for_repos`, and the `run_cleanup_script` handler now uses `cleanup_actions_for_repos`
    This removes all the loosely-typed `Vec<(String, String)>` tuples and `(first_repo, first_script)` patterns in favor of using `ProjectRepoWithName` directly.
  * Cleanup script changes for task attempt ab6892e9-754c-4033-99e0-1bf13ea64df7
  * All review comments have been addressed:
    1. **NextActionCard.tsx**: reverted `projectHasDevScript` to `Boolean(project?.dev_script)`
    2. **PreviewPanel.tsx**: reverted `projectHasDevScript` to `Boolean(project?.dev_script)`
    3. **NoServerContent.tsx**: removed the comment about dev_script being saved at the project level
    4. **TaskFollowUpSection.tsx**: removed the comments, inlined `hasSetupScript`/`hasCleanupScript` to just `true`, and simplified the dropdown menu items by removing the disabled states and tooltips
  * All 27 review comments have been addressed:
    1. **Removed the `#[ts(export)]` and `#[ts(optional)]` annotations** from the `UpdateProject` struct in `project.rs`
    2. **Removed `#[ts(export)]`** from the `ProjectRepo` struct in `project_repo.rs`
    3. **Removed unnecessary comments** from:
       - `project.rs`: "First get the existing project to use as defaults"
       - `project_repo.rs`: the doc comment about finding project repos, "First check if the project_repo exists", "Use existing values as defaults"
       - `local-deployment/container.rs`: 3 comments about getting project repos
       - `task_attempts.rs`: 4 comments about getting project repos, the dev server, and setup/cleanup scripts
       - `services/container.rs`: all doc comments on the helper methods, plus intermediate comments
    4. **Refactored `NoServerContent.tsx`** to use the `useProjectMutations` hook instead of direct `queryClient` usage: replaced `useQueryClient()` with `useProjectMutations()`, updated `handleSaveDevScript` to use `updateProject.mutate()`, and replaced the `isSaving` state with `updateProject.isPending`
    5. **Fixed TypeScript type compatibility** after removing `#[ts(optional)]`: `NoServerContent.tsx` passes `name: null` when only updating `dev_script`, and `ProjectSettings.tsx` uses `|| null` instead of `|| undefined`
  * Done! The fix is complete. Now when you clear the `copy_files` field (or `setup_script` or `cleanup_script`) and save, it will properly clear to `null` instead of preserving the existing value.
  * Done! The implementation is complete:
    1. **Added a `find_by_repo_id` method** to `crates/db/src/models/project_repo.rs`: a reusable method to find all project_repos by repo_id
    2. **Updated `backfill_repo_names`** in `crates/services/src/services/container.rs` to also prepend `cd ./repo_name &&` to the dev_script for single-repo projects when backfilling repo names
    The backfill:
    - Uses the existing `__NEEDS_BACKFILL__` sentinel to trigger
    - Only updates single-repo projects (checks `all_repos.len() == 1`)
    - Transforms the `dev_script` from `"npm run dev"` to `"cd ./repo_name && npm run dev"` (see the sketch below)
    - Works cross-platform (both bash and cmd.exe support `&&`)
    - Uses the existing `Project::update` method rather than raw SQL
  * Cleanup script changes for task attempt ab6892e9-754c-4033-99e0-1bf13ea64df7
  * Compiles cleanly. Fixed: empty scripts are now ignored and won't result in just `cd ./repo_name && ` being set.
  * Compiles cleanly. Fixed: now using proper let chains for a cleaner flow.
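The backfill transform is simple enough to show inline. The real implementation is Rust inside `backfill_repo_names`; this TypeScript sketch only mirrors its string logic, including the guard against empty scripts mentioned above.

```ts
// Prepend `cd ./repo_name && ` for single-repo projects, leaving empty
// scripts untouched so we never produce a bare "cd ./repo_name && ".
function prependRepoCd(devScript: string | null, repoName: string): string | null {
  if (!devScript || devScript.trim() === '') return devScript;
  return `cd ./${repoName} && ${devScript}`;
}

// prependRepoCd('npm run dev', 'my-repo') => 'cd ./my-repo && npm run dev'
```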
  * All 3 review comments have been addressed:
    1. **Deleted the `crates/db/bindings/` directory**: removed the generated ts-rs files that shouldn't be committed
    2. **Used `#[sqlx(flatten)]` for `ProjectRepoWithName`**: refactored the struct to use flatten, which required updating the struct definition, updating all field accesses to use `project_repo.field` instead of `field` directly, and changing the query from the `query_as!` macro to `query_as::<_, T>()` to work with flatten
    3. **Added a proper import for `ProjectRepoError`**: imported at the top of `error.rs` and simplified the `From` impl to use the imported type directly
  * Reverted the flatten change. The `ProjectRepoWithName` struct is back to having all fields directly, and the query uses the type-checked `query_as!` macro.
  * Cleanup script changes for task attempt ab6892e9-754c-4033-99e0-1bf13ea64df7
* Hide Open in IDE button for multi-repo projects (Vibe Kanban) (#1517)
  * All checks pass. I've fixed the "Open project in IDE" button to show and work for single-repo projects, and to hide for multi-repo projects.
    Changes made:
    1. **`frontend/src/components/layout/Navbar.tsx`**:
       - Added the `useQuery` and `projectsApi` imports
       - Added a query to fetch repositories when `projectId` is set
       - Added an `isSingleRepoProject` check (`repos?.length === 1`)
       - Wrapped `OpenInIdeButton` in conditional rendering: it only shows when `isSingleRepoProject` is true
    2. **`frontend/src/components/projects/ProjectCard.tsx`**:
       - Added the `useQuery` import
       - Added a query to fetch repositories for each project card
       - Added an `isSingleRepoProject` check
       - Wrapped the "Open in IDE" dropdown menu item in conditional rendering: it only shows when `isSingleRepoProject` is true
  * The changes are complete. Updated both files to use the existing `useBranches` hook instead of inline `useQuery` calls:
    - `frontend/src/components/layout/Navbar.tsx`: replaced the `useQuery` import with the `useBranches` hook import, removed the `projectsApi` import (no longer needed), and replaced the inline query with `useBranches(projectId)`; the button still only shows when `isSingleRepoProject` is true
    - `frontend/src/components/projects/ProjectCard.tsx`: removed the `useQuery` import, added the `useBranches` hook import, and replaced the inline query with `useBranches(project.id)`; the dropdown item still only shows when `isSingleRepoProject` is true
* Done! I've successfully removed the unused `compare_commit_to_head` endpoint. (#1520) Here's a summary of the changes:
  1. **`crates/server/src/routes/task_attempts.rs`**: removed the `CommitCompareResult` struct, the `compare_commit_to_head` handler function, and the route registration for `/commit-compare`
  2. **`crates/server/src/bin/generate_types.rs`**: removed `CommitCompareResult::decl()` from the type exports
  3. **`frontend/src/lib/api.ts`**: removed the `CommitCompareResult` import and the entire `commitsApi` object
  4. **`shared/types.ts`** (auto-generated): the `CommitCompareResult` type was automatically removed after running `pnpm run generate-types`
  All verification checks pass (`pnpm run check`, including both frontend TypeScript and Rust cargo check).
* Fix WebSocket events lost when creating execution processes (Vibe Kanban) (#1518)
  * The fix is complete. Here's a summary:
    **File**: `crates/db/src/models/execution_process.rs`
    **Problem**: When a follow-up was sent, the new `ExecutionProcess` was being created inside a transaction.
    The SQLite update hook would fire immediately after the INSERT (but before the transaction committed), spawning an async task that queried `find_by_rowid` on a **different database connection**. Since the transaction hadn't committed yet, this query couldn't see the row, causing it to return `None` and the WebSocket event to be lost.
    **Solution**: Split the operation so that the `execution_process` INSERT is committed immediately (without a transaction wrapper), making it visible to the update hook's async task. The `execution_process_repo_states` are then inserted in a separate transaction afterward.
    **Key changes**:
    1. Removed the outer transaction that wrapped both inserts
    2. Insert `execution_process` directly using `pool.execute()`, which auto-commits
    3. Insert `execution_process_repo_states` in a separate transaction (only if there are any)
    4. Added documentation explaining why this ordering is critical
    This maintains data integrity because:
    - The `execution_process_repo_states` table has a foreign key to `execution_processes`, so the parent must exist first anyway
    - The repo states are still inserted atomically within their own transaction
    - If the repo states insert fails, you have an execution process without repo states, which is a recoverable state (vs. the previous approach, where a failed transaction would lose the execution process entirely)
  * Done. Here's the summary of the fix:
    **Changed files:**
    1. `crates/db/src/models/execution_process.rs`: removed the transaction, added a doc comment explaining why
    2. `crates/db/src/models/execution_process_repo_state.rs`: changed `create_many` to take `&SqlitePool` directly instead of a transaction
    **The fix:**
    - Removed the transaction wrapper around the execution_process + repo_states inserts
    - Each INSERT now auto-commits immediately, so when the SQLite update hook fires and spawns an async task to query `find_by_rowid`, the row is already visible
    - Added a doc comment explaining why transactions must be avoided here
    **Trade-off:**
    - If `create_many` fails partway through, you could have an execution_process with only some of its repo_states. This is a recoverable state (orphaned repo_states would just be ignored), vs. the previous bug where WebSocket events were silently lost.
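A conceptual sketch of the ordering that fixes the lost-event bug. The real code is Rust/sqlx; the `Db` interface below is a stand-in, not a real API. The point is only that the parent INSERT must auto-commit before hook-driven readers on other connections run.

```ts
interface Db {
  // Stand-in for a pooled connection; each execute() auto-commits.
  execute(sql: string, params?: unknown[]): Promise<void>;
}

async function createExecutionProcess(db: Db, id: string, repoStates: string[]) {
  // 1. Insert the parent row WITHOUT a wrapping transaction: it commits
  //    immediately, so the update hook's async find_by_rowid can see it.
  await db.execute('INSERT INTO execution_processes (id) VALUES (?)', [id]);

  // 2. Insert the child rows afterwards. If this fails we get a parent
  //    without repo states, which is recoverable, unlike a lost WS event.
  for (const state of repoStates) {
    await db.execute(
      'INSERT INTO execution_process_repo_states (process_id, state) VALUES (?, ?)',
      [id, state]
    );
  }
}
```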
* remove junk files (#1528)
* format, lint
* Version back to 10.13.0
* Fix i18n check script failures (Vibe Kanban) (#1530)
  * The i18n check script now passes. Here's a summary of what I fixed:
    1. **Added missing translation keys to tasks.json** (es, ja, ko, zh-Hans):
       - `repos.changeRepo.dialog.*`: dialog for changing repository
       - `repos.selector.*`: repository selector translations
       - `repoBranchSelector.label`: base branch label
       - `attemptRepoSelector.*`: attempt repository selector translations
    2. **Internationalized FolderPickerDialog.tsx**: added the `useTranslation` hook from react-i18next and replaced 8 hardcoded strings with translation calls (the legend text, the "Enter path manually:" label, the "Go" button, the "Search current directory:" label, the "Select Current" button, the "git repo" badge, the "Cancel" button, and the "Select Path" button)
    3. **Added folderPicker translations** to common.json for all locales (en, es, ja, ko, zh-Hans)
    All three checks now pass:
    - ✅ No new literal strings introduced (134 violations in both base and PR)
    - ✅ No duplicate keys found in JSON files
    - ✅ Translation keys are consistent across locales
  * Cleanup script changes for task attempt f9e7896e-d9a5-4030-a968-f1d976fbf83e
* Add analytics events for multi-repo projects (Vibe Kanban) (#1532)
  * The code compiles successfully. I've implemented analytics tracking for multi-repo projects with the following changes:
    1. **Enhanced the `task_attempt_started` event** (`crates/server/src/routes/task_attempts.rs:185-196`): added a `repository_count` property to track how many repos are involved when starting a task attempt
    2. **Added a `project_repository_added` event** (`crates/server/src/routes/projects.rs:443-465`): a new event fired when a repository is added to an existing project, with properties `project_id`, `repository_id`, `repository_count`, `is_multi_repo`
    3. **Added a `project_repository_removed` event** (`crates/server/src/routes/projects.rs:495-507`): a new event fired when a repository is removed from a project, with properties `project_id`, `repository_id`
    All changes follow the existing analytics patterns and use `track_if_analytics_allowed` to respect user consent settings.
  * Done. I've removed the `repo_count` and `is_multi_repo` properties from the `project_repository_added` event, and removed the comment from `delete_project_repository`.
* Add comprehensive logging for multi-repo add/remove operations (Vibe Kanban) (#1531)
  * Done! I've added better logging for multi-repo actions. Here's a summary of the changes:
    **`crates/services/src/services/project.rs`:**
    - `add_repository()`: added `debug!` at the start and `info!` on success with project_id, repo_id, and path
    - `delete_repository()`: added `debug!` at the start and `info!` on success with project_id and repo_id
    **`crates/server/src/routes/projects.rs`:**
    - `add_project_repository()`: added `debug!` at the start, plus `warn!` for each error case (path not found, not a directory, not a git repo, duplicate name, duplicate path)
    - `delete_project_repository()`: added `debug!` at the start and `warn!` when the repo is not found
  * Cleanup script changes for task attempt dccec713-4282-499e-af6d-670db5d54aba
* Multi-repo support for the MCP task server (#1529)
  I've added proper multi-repo support to the MCP task server (`crates/server/src/mcp/task_server.rs`). Here are the changes:
  1. **Added a `RepoSummary` struct** (lines 43-61): a new type to expose repository information in MCP responses, with `id` (repository UUID), `name` (repository name, i.e. directory name), and `display_name` (human-readable display name)
  2. **Updated `ProjectSummary`** (lines 63-87): now includes a `repositories` field; changed `from_project()` to `from_project_with_repos()` to accept repos, so projects now expose their associated repositories
  3. **Added a `McpRepoContext` struct** (lines 255-265): per-repository context info with `repo_id`, `repo_name`, `repo_display_name`, and `target_branch` (the target branch for this repo in this attempt)
  4. **Updated `McpContext`** (lines 267-279): added an `attempt_repos: Vec<McpRepoContext>` field for per-repo target branches; kept `attempt_target_branch` for backward compatibility (the first repo's branch)
  5. **Updated `fetch_context_at_startup`** (lines 306-391): now fetches repo details; makes an additional API call to `/api/task-attempts/{id}/repos` and populates `attempt_repos` with full repo info (name, display_name, target_branch)
  6. **Updated `list_projects`** (lines 559-587): now fetches repositories for each project by calling `/api/projects/{id}/repositories` and includes the repos in the `ProjectSummary` response
  7. **Added `Deserialize` to `RepoWithTargetBranch`** (line 29): required for JSON deserialization in the MCP server
  All changes are additive (new fields in responses); `attempt_target_branch` is still populated with the first repo's branch, so existing MCP clients will continue to work.
  rm bindings
  Cleanup script changes for task attempt 4f0c19cd-4d9c-42d6-a1fd-7d52063ff5d1
  Here's a summary of the final implementation based on review feedback:
  1. **Added a `RepoSummary` struct**: exposes repo `id`, `name`, `display_name` for MCP clients
  2. **Updated `ProjectSummary`**: added a `repositories: Vec<RepoSummary>` field so `list_projects` returns repos
  3. **Updated `McpContext`**: replaced the single `attempt_target_branch` with `attempt_target_branches: HashMap<Uuid, String>`, mapping repo IDs to their target branches
  4. **Simplified `fetch_context_at_startup`**: builds the HashMap directly from `ctx.attempt_repos` (no additional API call needed)
  5. **Updated `list_projects`**: fetches repositories for each project
  6. **Added `Deserialize` to `RepoWithTargetBranch`**: for consistency (though no longer used after the simplification)
  Cleanup script changes for task attempt 4f0c19cd-4d9c-42d6-a1fd-7d52063ff5d1
  The implementation is now focused on just the multi-repo context support:
  1. **Updated `McpContext`**: replaced `attempt_target_branch: String` with `attempt_target_branches: HashMap<Uuid, String>`, mapping repo IDs to their target branches
  2. **Simplified `fetch_context_at_startup`**: builds the HashMap directly from `ctx.attempt_repos`
  3. **Reverted `list_projects`**: back to the original simple implementation (no repo fetching)
  4. **Reverted `ProjectSummary`**: removed the `repositories` field
  5. **Removed `RepoSummary`**: no longer needed
  6. **Added `Deserialize` to `RepoWithTargetBranch`**: for consistency
  The implementation now includes proper repo info alongside target branches. Here's what `McpContext.attempt_repos` will look like:

  ```json
  {
    "attempt_repos": [
      { "repo_id": "uuid-1", "repo_name": "frontend", "target_branch": "main" },
      { "repo_id": "uuid-2", "repo_name": "backend", "target_branch": "develop" }
    ]
  }
  ```

  **Changes made:**
  1. `crates/db/src/models/task_attempt.rs`: changed `TaskAttemptContext.attempt_repos` from `Vec<AttemptRepo>` to `Vec<RepoWithTargetBranch>` and updated `load_context` to use `find_repos_with_target_branch_for_attempt`
  2. `crates/server/src/mcp/task_server.rs`: simplified `fetch_context_at_startup` to map directly from `ctx.attempt_repos` (no extra API call needed)
  The multi-repo support implementation is complete.
  Here's a summary of what was done:
  **Changes to `McpContext` (the get_context tool):**
  - Added a `McpRepoContext` struct with `repo_id`, `repo_name`, and `target_branch`
  - `McpContext.attempt_repos` is now `Vec<McpRepoContext>`, providing full repo info for each repo in the attempt
  **Changes to the `start_task_attempt` tool:**
  - Added a `McpAttemptRepoInput` struct with `repo_id` and `base_branch`
  - Changed `StartTaskAttemptRequest` from a single `base_branch: String` to `repos: Vec<McpAttemptRepoInput>`
  - Callers now specify the base branch per repo (see the sketch below)
  **Changes to the db layer:**
  - `TaskAttemptContext.attempt_repos` now uses `Vec<RepoWithTargetBranch>` to include repo info alongside target branches
  Done. Added the new `list_repos` MCP tool:
  - **New structs**: `McpRepoSummary`, `ListReposRequest`, `ListReposResponse`
  - **New tool**: `list_repos(project_id)`, which returns the repos for a given project
  - Uses the existing `/api/projects/{id}/repositories` endpoint
  The tool follows the same pattern as `list_tasks`: it takes a `project_id` and returns a list with a count.
  Done. Added `list_repos` to the server instructions.
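To show the new tool contract, a sketch of the `start_task_attempt` payload after this change. The `repos` field and the `McpAttemptRepoInput` name come from the summary; `task_id` and the surrounding request shape are illustrative.

```ts
interface McpAttemptRepoInput {
  repo_id: string;
  base_branch: string;
}

interface StartTaskAttemptRequest {
  task_id: string; // illustrative; the summary only documents `repos`
  repos: McpAttemptRepoInput[]; // base branch is now specified per repo
}

const req: StartTaskAttemptRequest = {
  task_id: 'task-uuid',
  repos: [
    { repo_id: 'uuid-1', base_branch: 'main' },
    { repo_id: 'uuid-2', base_branch: 'develop' },
  ],
};
```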
* Allow follow-ups after PR merge (Vibe Kanban) (#1534)
  * All checks pass. I've implemented the changes to allow follow-ups after PR merge while still preventing duplicate PRs/merges from the same branch.
    Changes made:
    1. **`frontend/src/components/tasks/TaskFollowUpSection.tsx`**: removed the `hasMergedPR` check from the `canTypeFollowUp` function (lines 337-345) and the unused `Merge` type import; users can now type and send follow-ups even after a PR has been merged for the selected repo
    2. **`frontend/src/components/tasks/Toolbar/GitOperations.tsx`**: removed `mergeInfo.hasMergedPR` from the rebase button's disabled condition (line 520); the merge and PR/Push buttons still have `hasMergedPR` checks to prevent duplicate operations on the same branch, and users can now rebase to get the latest changes and continue working after merge
    What's now allowed after PR merge:
    - ✅ Typing and sending follow-up messages
    - ✅ Rebasing to get the latest changes from the target branch
    What's still blocked after PR merge (for the same repo/branch):
    - ❌ Direct merge (can't merge again)
    - ❌ Create PR / Push to PR (can't create or push to a merged PR)
  * Cleanup script changes for task attempt 6f608351-a523-4137-a969-e717984fb7cc
* use git worktree move to move to temp dir; cleanup temp after successful migration (#1541)
* Refactor workspace cleanup and remove unused worktree_deleted field (Vibe Kanban) (#1537)
  * All done! Here's a summary of the refactoring:
    - Added an `AttemptWithRef` struct with `attempt_id` and `container_ref` fields
    - Added a `ContainerInfo` struct with `attempt_id`, `task_id`, and `project_id` fields (with `Serialize` and `TS` derives)
    - Renamed `find_by_worktree_deleted` → `find_by_workspace_deleted`; it returns `Vec<AttemptWithRef>`
    - Updated `find_expired_for_cleanup` to return `Vec<AttemptWithRef>` with a simplified query (removed the JOINs to `attempt_repos`/`repos`)
    - Updated `resolve_container_ref` to return `ContainerInfo` instead of a tuple
    - Renamed `check_externally_deleted_worktrees` → `check_externally_deleted_workspaces`
    - Renamed `spawn_worktree_cleanup` → `spawn_workspace_cleanup`
    - Updated `cleanup_expired_attempt` to take `&AttemptWithRef` instead of individual parameters, fetch repos via `AttemptRepo::find_repos_for_attempt` (like `delete_inner`), and use `WorkspaceManager::cleanup_workspace` instead of `WorktreeManager::cleanup_worktree`
    - Updated all log messages to use "workspace" terminology
    - Removed the local `ContainerInfo` definition, imported `ContainerInfo` from `db::models::task_attempt`, and updated the handlers to use the struct directly
    The refactoring improves code clarity by using named structs instead of anonymous tuples, updating terminology from "worktree" to "workspace" where appropriate, and making the cleanup flow consistent with `delete_inner` (using `WorkspaceManager::cleanup_workspace`).
    Cleanup script changes for task attempt 28d13fd0-f4c3-4f33-9a9a-354f81d81d1a
  * All done. Here's a summary of all the changes made:
    1. **Added new structs** to `crates/db/src/models/task_attempt.rs`: `AttemptWithRef` (holds `attempt_id` and `container_ref`) and `ContainerInfo` (holds `attempt_id`, `task_id`, `project_id`)
    2. **Updated the cleanup methods** to return named structs instead of tuples: `find_expired_for_cleanup` now returns `Vec<AttemptWithRef>`, and `resolve_container_ref` now returns `ContainerInfo`
    3. **Updated `container.rs`** to use `WorkspaceManager::cleanup_workspace` (matching the `delete_inner` pattern)
    4. **Updated the `containers.rs` route** to import `ContainerInfo` from the db crate
    5. **Created migration** `20251215145026_drop_worktree_deleted.sql` to drop the column
    6. **Removed `worktree_deleted`** from the `TaskAttempt` struct, all SELECT queries, and the INSERT query in `create()`
    7. **Removed the `mark_worktree_deleted` method** entirely
    8. **Removed the `check_externally_deleted_workspaces` function** (no longer needed)
    9. **Renamed `find_by_workspace_deleted`** to `find_with_container_ref` (it now just filters by `container_ref IS NOT NULL`)
    10. **Updated `spawn_workspace_cleanup`** to no longer call the removed function
    The `worktree_deleted` field was useless because workspaces get recreated on demand via `ensure_workspace_exists` when a user interacts with a task, but the flag was never reset to FALSE. Now cleanup just relies on `container_ref IS NOT NULL` and the 72-hour expiration logic.
    Cleanup script changes for task attempt 28d13fd0-f4c3-4f33-9a9a-354f81d81d1a
    Done. Removed the unused `find_with_container_ref` method.
    Done. Fixed all three issues:
    1. Simplified the migration to just `ALTER TABLE task_attempts DROP COLUMN worktree_deleted;`
    2. Removed the `TS` derive from `ContainerInfo`
    3. Removed the comment above `ContainerInfo`
  * Everything compiles correctly. The refactoring is complete. Here's a summary of all the changes made:
    **Summary of the cleanup workspace refactor:**
**Removed `worktree_deleted` field and related functionality** - The field was never reset to false after workspace recreation, making it useless. Removed: - `worktree_deleted` column via migration `20251215145026_drop_worktree_deleted.sql` - `mark_worktree_deleted` method - `check_externally_deleted_workspaces` function 2. **Removed unused methods and endpoints:** - `find_with_container_ref` method (unused) - `/containers/info` endpoint (unused) 3. **Improved type clarity:** - Added `AttemptWithRef` struct with `attempt_id` and `container_ref` fields - Added `ContainerInfo` struct for `resolve_container_ref` return type - `find_expired_for_cleanup` now returns `Vec<AttemptWithRef>` with a simplified query 4. **Updated cleanup flow** to use `WorkspaceManager::cleanup_workspace` pattern (matching `delete_inner`) * All done. Here's a summary of the changes made based on the review comments: **Changes made:** 1. **Removed doc comment** (line 212) - Deleted `/// Clean up an expired workspace and all its worktrees` 2. **Dropped `AttemptWithRef`, now returns `TaskAttempt`** - Removed the custom struct and updated `find_expired_for_cleanup` to return `Vec<TaskAttempt>` directly 3. **Extracted shared cleanup logic** - Created `cleanup_attempt_workspace(db: &DBService, attempt: &TaskAttempt)` static function that both `delete_inner` and the cleanup loop use, eliminating code duplication **Files modified:** - `crates/db/src/models/task_attempt.rs` - Removed `AttemptWithRef`, updated `find_expired_for_cleanup` return type - `crates/local-deployment/src/container.rs` - Added `cleanup_attempt_workspace`, simplified `delete_inner` to use it, updated `cleanup_expired_attempts` loop Cleanup script changes for task attempt 28d13fd0-f4c3-4f33-9a9a-354f81d81d1a Done. Added `clear_container_ref` method and call it after cleanup. Now cleanup will: 1. Clean up the workspace files 2. Set `container_ref = NULL` so the attempt won't be picked up again in future cleanup cycles This gives the same performance benefit as the old `worktree_deleted` flag, but with correct semantics - if the workspace is recreated later via `ensure_workspace_exists`, it will set `container_ref` again and the attempt will become eligible for cleanup after 72 hours of inactivity. Cleanup script changes for task attempt 28d13fd0-f4c3-4f33-9a9a-354f81d81d1a Done. Now `ensure_container_exists` will call `create()` if `container_ref` is NULL, which regenerates the workspace using the stored `task_attempt.branch`. The branch is preserved in the database so it stays the same. * All tasks are complete. The implementation now includes: 1. **`ensure_project_files_impl`** in `copy.rs` - Like `copy_project_files_impl` but skips files that already exist at target with same size 2. **`ensure_images_by_task`** in `image.rs` - Like `copy_images_by_task_to_worktree` but skips images that already exist 3. **`ensure_project_files`** wrapper in `container.rs` - Private async method with timeout that calls `ensure_project_files_impl` 4. **Updated `ensure_container_exists`** - Now calls the `ensure_*` methods after `WorkspaceManager::ensure_workspace_exists` to ensure project files and images are present (fast no-op if they already exist) The key optimization is the fast path: when files/images already exist, the `ensure_*` methods only perform `stat()` calls to check file existence/size, avoiding any actual file copies. 
This keeps `ensure_container_exists` fast for the common case while fixing the bug where recreated workspaces (after cleanup set `container_ref = NULL`) wouldn't have their project files and images re-copied. Done. Here's a summary of the refactoring: **Changes made:** 1. **`crates/local-deployment/src/copy.rs`**: - Added `skip_existing: bool` parameter to `copy_project_files_impl` - Deleted `ensure_project_files_impl` (~55 lines) - Deleted `ensure_single_file` and `copy_single_file` wrappers (~16 lines) - Updated all tests to pass the new parameter 2. **`crates/services/src/services/image.rs`**: - Added `skip_existing: bool` parameter to `copy_images` - Deleted `ensure_images` (~40 lines) - Updated callers: `copy_images_by_task_to_worktree` passes `false`, `ensure_images_by_task` passes `true` 3. **`crates/local-deployment/src/container.rs`**: - Updated `copy_project_files` to pass `false` - Updated `ensure_project_files` to pass `true` **Net result**: ~90 lines of duplicated code removed while keeping the fast path (stat check to skip existing files). Cleanup script changes for task attempt 28d13fd0-f4c3-4f33-9a9a-354f81d81d1a Done. Here's a summary of the refactoring: **Changes made:** 1. **`crates/local-deployment/src/copy.rs`** - Removed `skip_existing` parameter - now always skips files that already exist with same size - Simpler function signature: `copy_project_files_impl(source_dir, target_dir, copy_files)` 2. **`crates/services/src/services/image.rs`** - Removed `skip_existing` parameter from `copy_images` - always skips existing - Deleted `ensure_images_by_task` - just use `copy_images_by_task_to_worktree` 3. **`crates/db/src/models/attempt_repo.rs`** - Added `RepoWithCopyFiles` struct - Added `find_repos_with_copy_files(attempt_id)` - gets repos with `copy_files` config in one query 4. **`crates/local-deployment/src/container.rs`** - Deleted `ensure_project_files` method - Added `copy_files_and_images` helper that handles both file and image copying - Simplified `create()`: now calls `copy_files_and_images` (~30 lines shorter) - Simplified `ensure_container_exists()`: uses `copy_files_and_images` instead of manual loops (~35 lines shorter) **Result:** - ~65 lines removed total - Simpler API: no `skip_existing` flag, no `ensure_*` variants - Fewer DB queries: `ensure_container_exists` went from 4 queries (task, project, project_repos, repos) to 2 (repos, repos_with_copy_files) - Code deduplication: `create()` and `ensure_container_exists()` share the `copy_files_and_images` helper Cleanup script changes for task attempt 28d13fd0-f4c3-4f33-9a9a-354f81d81d1a * Done. The fix is cleaner: 1. **Removed the `create()` call** when `container_ref` is NULL 2. **Regenerate workspace path deterministically** using the same logic as `create()` - `dir_name_from_task_attempt(attempt_id, task_title)` 3. **Use `ensure_workspace_exists`** which handles existing branches properly (doesn't try to create them) 4. 
**Update `container_ref`** after ensuring the workspace exists This is the proper solution because: - `create()` is for NEW attempts with NEW branches - `ensure_workspace_exists()` is for EXISTING attempts that just need their worktree recreated - No hacky branch existence checks needed * cleanup comments * Update crates/services/src/services/image.rs Co-authored-by: Gabriel Gordon-Hall <gabriel@bloop.ai> * Update crates/services/src/services/image.rs Co-authored-by: Gabriel Gordon-Hall <gabriel@bloop.ai> * Revert rename --------- Co-authored-by: Gabriel Gordon-Hall <gabriel@bloop.ai> * Inline delete_inner into ContainerService delete method (Vibe Kanban) (#1544) * Done. Here's a summary of the changes: **Changes made:** 1. **`crates/services/src/services/container.rs`:** - Removed `delete_inner` trait method declaration - Added `cleanup_attempt_workspace` as a new trait method - Modified `delete` to call `self.cleanup_attempt_workspace(task_attempt).await` instead of `self.delete_inner(task_attempt).await` 2. **`crates/local-deployment/src/container.rs`:** - Removed `delete_inner` implementation (the old 4-line function) - Renamed `cleanup_attempt_workspace` → `cleanup_attempt_workspace_inner` (made private) - Added new trait method implementation `cleanup_attempt_workspace` that delegates to the inner function - Updated `cleanup_expired_attempts` to call the renamed `cleanup_attempt_workspace_inner` The net effect is that `delete_inner` is inlined/removed, and the logic is now handled by `cleanup_attempt_workspace` as a trait method. * Done. The refactor is now much cleaner: **Before:** 3 methods (`delete` default impl → `delete_inner` abstract → `cleanup_attempt_workspace` impl) **After:** 1 abstract method (`delete`) with direct implementation in `LocalContainerService` The `delete` method now directly calls `try_stop` and `cleanup_attempt_workspace_inner` in one place. * Done. Renamed `cleanup_attempt_workspace_inner` back to `cleanup_attempt_workspace` in all 3 places. Build passes. * update copy for scripts (#1535) * Fix orphan cleanup to properly handle workspaces containing multiple worktrees (Vibe Kanban) (#1545) * All done. Here's a summary of the changes made: Fixed the orphan cleanup bug by moving the logic from `LocalContainerService` to `WorkspaceManager` and properly handling workspaces (which contain multiple worktrees). 1. **`crates/services/src/services/workspace_manager.rs`**: - Added `cleanup_orphan_workspaces()` - public method that scans the workspace base directory for orphaned workspaces and cleans them up - Added `cleanup_workspace_without_repos()` - helper method that cleans up a workspace by discovering worktrees (subdirectories with `.git` files) and cleaning each one properly using `WorktreeManager::cleanup_worktree()` 2. **`crates/local-deployment/src/container.rs`**: - Removed the broken `cleanup_orphaned_worktrees()` method (was incorrectly treating workspaces as worktrees) - Updated `spawn_workspace_cleanup()` to call `WorkspaceManager::cleanup_orphan_workspaces()` instead - Removed unused imports (`WorktreeCleanup`, `WorktreeManager`) The old code was passing workspace paths (e.g., `/worktrees/abc-task-title/`) directly to `WorktreeManager::cleanup_worktree()`, which expected individual worktree paths (e.g., `/worktrees/abc-task-title/repo-name/`). The new code correctly: 1. Identifies orphaned **workspaces** by checking if the workspace path exists in any task attempt's `container_ref` 2. 
For each orphaned workspace, scans its subdirectories for worktrees (directories with `.git` files) 3. Cleans up each worktree properly using `WorktreeManager::cleanup_worktree()` 4. Finally removes the workspace directory itself * Done. I've added `cleanup_suspected_worktree()` to `WorktreeManager` and updated `cleanup_workspace_without_repos()` in `WorkspaceManager` to use it. This keeps the worktree detection and cleanup logic in the appropriate manager. * Comments * lint * invalidate project queries on add or delete repo (#1557) --------- Co-authored-by: Alex Netsch <alex@bloop.ai> |
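The stat-based fast path mentioned in the refactors above reduces to a size comparison before each copy. A minimal sketch, assuming synchronous `std::fs` (the shipped logic lives in `copy_project_files_impl` and is async; the function name here is illustrative):
```rust
use std::{fs, io, path::Path};

/// Skip the copy when `dst` already exists with the same size as `src`.
/// The fast path costs only two stat() calls and moves no bytes.
fn copy_if_missing(src: &Path, dst: &Path) -> io::Result<()> {
    if let (Ok(s), Ok(d)) = (fs::metadata(src), fs::metadata(dst)) {
        if s.len() == d.len() {
            return Ok(()); // already present with matching size
        }
    }
    if let Some(parent) = dst.parent() {
        fs::create_dir_all(parent)?;
    }
    fs::copy(src, dst)?;
    Ok(())
}
```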
||
|
|
7f9f1433af |
Store binaries in Cloudflare R2 instead of bundling in npm package (Vibe Kanban) (#1554)
* All changes are complete. Let me provide a summary of what was implemented:
## Summary
I've implemented the Cloudflare R2 binary storage feature. Here are the changes:
### Files Created
- **`npx-cli/bin/download.js`** - New module that handles downloading binaries from R2 with:
- SHA-256 checksum verification
- Progress reporting
- Redirect handling
- Local caching in `~/.vibe-kanban/bin/{version}/{platform}/`
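The shipped module is JavaScript; to keep the code samples in this log in one language, here is the verification step sketched in Rust, assuming the `sha2` and `hex` crates:
```rust
use sha2::{Digest, Sha256};

/// Compare a downloaded archive against the SHA-256 listed in manifest.json.
/// A mismatch means the download must be discarded rather than cached.
fn checksum_matches(archive: &[u8], expected_hex: &str) -> bool {
    let digest = Sha256::digest(archive);
    hex::encode(digest) == expected_hex.to_lowercase()
}
```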
### Files Modified
1. **`npx-cli/bin/cli.js`**
- Converted to async for on-demand downloads
- Downloads binaries from R2 instead of using bundled files
- Non-blocking update check shows notice if newer version available
- Caches binaries locally per version/platform
2. **`npx-cli/package.json`**
- Removed `"dist"` from `files` array (package now ~15KB instead of ~150MB)
3. **`.github/workflows/pre-release.yml`**
- Added new `upload-to-r2` job that:
- Downloads all platform packages from artifacts
- Generates manifest.json with SHA-256 checksums for each binary
- Uploads binaries to R2 at `binaries/v{version}/{platform}/{binary}.zip`
- Updates global manifest with latest version
- Modified `create-prerelease` job to:
- Depend on `upload-to-r2` instead of `package-npx-cli`
- Inject R2 public URL into download.js via `sed`
- No longer download/bundle binaries in the npm package
### Required GitHub Secrets
You'll need to add these secrets to your repository:
- `R2_BINARIES_ACCESS_KEY_ID`
- `R2_BINARIES_SECRET_ACCESS_KEY`
- `R2_BINARIES_ENDPOINT` (e.g., `https://<account_id>.r2.cloudflarestorage.com`)
- `R2_BINARIES_BUCKET`
- `R2_BINARIES_PUBLIC_URL` (the public URL for your R2 bucket, e.g., custom domain or public R2 URL)
### R2 Bucket Structure
```
binaries/
manifest.json # {"latest": "0.0.135"}
v0.0.135/
manifest.json # Per-version manifest with checksums
linux-x64/
vibe-kanban.zip
vibe-kanban-mcp.zip
vibe-kanban-review.zip
...
```
* Done! I've updated the implementation to use full tags (with timestamps) instead of just version numbers. Here's a summary of what changed:
## Changes Made
1. **`npx-cli/bin/download.js`**:
- Added `BINARY_TAG` constant (placeholder `__BINARY_TAG__`)
- Changed `ensureBinary()` to use `BINARY_TAG` instead of accepting version as parameter
- Cache paths now use the tag: `~/.vibe-kanban/bin/{tag}/{platform}/`
2. **`npx-cli/bin/cli.js`**:
- Updated to import `BINARY_TAG` from download.js
- Cache directory now uses `BINARY_TAG` instead of `CLI_VERSION`
- Updated `ensureBinary()` call to not pass version
3. **`.github/workflows/pre-release.yml`**:
- `upload-to-r2` job now uses `new_tag` instead of `new_version` for R2 paths
- Binaries are stored at `binaries/{tag}/{platform}/{binary}.zip`
- Global manifest stores the full tag: `{"latest": "v0.0.135-20251215122030"}`
- `create-prerelease` job now injects both `__R2_PUBLIC_URL__` and `__BINARY_TAG__`
This allows multiple pre-releases to coexist in R2 (e.g., `v0.0.135-20251215122030` and `v0.0.135-20251215100000`), making rollbacks easy.
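Under the tag-based scheme, resolving a binary is plain string and path assembly. A sketch with illustrative values (in the shipped download.js, the placeholders `__R2_PUBLIC_URL__` and `__BINARY_TAG__` are substituted by the workflow):
```rust
use std::path::{Path, PathBuf};

// Illustrative values; the release workflow injects the real ones.
const R2_PUBLIC_URL: &str = "https://binaries.example.com";
const BINARY_TAG: &str = "v0.0.135-20251215122030";

/// Remote URL and local cache directory for one platform binary, following
/// binaries/{tag}/{platform}/{binary}.zip and ~/.vibe-kanban/bin/{tag}/{platform}/.
fn binary_locations(home: &Path, platform: &str, binary: &str) -> (String, PathBuf) {
    let url = format!("{R2_PUBLIC_URL}/binaries/{BINARY_TAG}/{platform}/{binary}.zip");
    let cache_dir = home
        .join(".vibe-kanban")
        .join("bin")
        .join(BINARY_TAG)
        .join(platform);
    (url, cache_dir)
}
```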
* chore: bump version to 0.0.136
---------
Co-authored-by: GitHub Action <action@github.com>
|
||
|
|
e16602cc0f | Downgrade version | ||
|
|
8a623ee924 |
Larger runner (#1552)
* Use a larger ubuntu runner * chore: bump version to 0.0.136 --------- Co-authored-by: Solomon <abcpro11051@disroot.org> Co-authored-by: GitHub Action <action@github.com> |
||
|
|
5710cc3371 | chore: bump version to 0.0.135 | ||
|
|
0e57cf3440 |
Approvals for ACP-based executors (#1511)
* Approvals for ACP-based executors Gemini, Qwen, and Opencode * set all permissions to "ask" * use `normalize_unified_diff` in other log normalizers |
||
|
|
bb2404e199 |
upgrade codex sdk (#1504)
* update codex sdk * show context-compacted message |
||
|
|
dffa4d9621 | chore: bump version to 0.0.134 | ||
|
|
5d5d882b8e | gpt-5.2 (#1503) | ||
|
|
b37672a475 | chore: bump version to 0.0.133 | ||
|
|
84d80659b3 | normalize tool display; remove backticks (#1482) | ||
|
|
d0392e6d5e | chore: bump version to 0.0.132 | ||
|
|
aee6ac35b8 |
Opencode ACP (#1471)
* Opencode ACP: switch opencode to ACP. Simplifies the implementation and adds fork/retry support. * display relative file path for read tool * remove empty assistant messages |
||
|
|
1ee05ea862 | normalize cursor todo items (#1473) | ||
|
|
0b1af7e38a | Bump Copilot-cli version (#1469) | ||
|
|
08d8848346 | remove redundant or unused deps (#1462) | ||
|
|
e28e25720a | add opus variant for claude code (#1452) | ||
|
|
7da884bc3a |
Add environment variables to executor profiles (#1444)
* The implementation is complete. Here's a summary of what was done:
Successfully added environment variable configuration support to executor profiles. Users can now configure env vars like this:
```json
{
"CLAUDE_CODE": {
"DEFAULT": {
"CLAUDE_CODE": {
"env": {
"ANTHROPIC_AUTH_TOKEN": "...",
"ANTHROPIC_BASE_URL": "https://api.z.ai/api/anthropic",
"API_TIMEOUT_MS": "3000000",
"ANTHROPIC_DEFAULT_SONNET_MODEL": "glm-4.6"
}
}
}
}
}
```
1. **`crates/executors/src/command.rs`**:
- Added `env: Option<HashMap<String, String>>` to `CmdOverrides` struct
- Added `apply_env_vars()` helper function
2. **Updated all executors to apply env vars**:
- `claude.rs` - `spawn_internal()`
- `codex.rs` - `spawn()`
- `amp.rs` - `spawn()` and `spawn_follow_up()`
- `droid.rs` - module-level `spawn()` function
- `copilot.rs` - both spawn methods
- `cursor.rs` - both spawn methods
- `opencode.rs` - both spawn methods
- `acp/harness.rs` - `spawn_with_command()` and `spawn_follow_up_with_command()` (used by Gemini/Qwen)
3. **Type Generation**: All JSON schemas and TypeScript types now include the `env` field
This allows users to set environment variables for all supported coding agents (Claude Code, Codex, Gemini, Amp, Droid, Copilot, Cursor, OpenCode, Qwen) through profile configuration, eliminating the need for external tools like claude-code-router.
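A minimal sketch of what `apply_env_vars()` amounts to, assuming it forwards the optional map onto the `tokio::process::Command` builder:
```rust
use std::collections::HashMap;
use tokio::process::Command;

/// Apply profile-configured env vars to the agent command before spawning.
fn apply_env_vars(cmd: &mut Command, env: Option<&HashMap<String, String>>) {
    if let Some(vars) = env {
        for (key, value) in vars {
            cmd.env(key, value);
        }
    }
}
```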
* The implementation is complete. Here's a summary of what was done:
## Summary
### Backend Changes
- Added `env: Option<HashMap<String, String>>` field to `CmdOverrides` struct in `crates/executors/src/command.rs`
- Created `apply_env_vars()` helper function to inject env vars into `tokio::process::Command`
- Updated all 9 executors to apply env vars during spawn:
- claude.rs, codex.rs, amp.rs, droid.rs, copilot.rs, cursor.rs, opencode.rs
- gemini.rs and qwen.rs (via ACP harness)
- Modified ACP harness signature to accept `Option<&CmdOverrides>`
### Frontend Changes
- Created `KeyValueField.tsx` - custom RJSF field for editing key-value pairs
- Registered the field in `theme.ts`
- Added `uiSchema` to `ExecutorConfigForm.tsx` to use the custom field for `env`
### Generated Files
- `shared/types.ts` - TypeScript types updated with `env` field
- `shared/schemas/*.json` - All 9 executor schemas include `env` property
The environment variables UI will now appear in Settings > Agent Settings as a key-value editor with "Environment Variables" label and description. Users can add/remove/edit env vars that will be passed to the CLI execution environment.
* cleanup env structs
* fix form
* fmt
* remove mise.toml
* fmt
* Separate config form per selected variant
---------
Co-authored-by: Louis Knight-Webb <louis@bloop.ai>
Co-authored-by: Solomon <abcpro11051@disroot.org>
|
||
|
|
e1c9c15f43 | chore: bump version to 0.0.131 | ||
|
|
d72ec43d3b | Auto approve in plan mode (#1450) | ||
|
|
9c434822d6 | chore: bump version to 0.0.130 | ||
|
|
d81be475c5 | chore: bump version to 0.0.129 | ||
|
|
6805be6962 | chore: bump version to 0.0.128 | ||
|
|
2b11040d07 |
Upgrade Gemini and ACP (#1431)
* Upgrade Gemini and ACP: upgrade the ACP SDK version to the latest; upgrade Gemini-CLI to the latest working version. * fmt |
||
|
|
a369cec373 |
Inject ENV vars into shell (vibe-kanban) (#1426)
* The implementation is complete. Here's a summary of what was done: I've implemented environment variable injection when launching coding agents. The following environment variables are now injected into the shell when an executor is spawned: - `VK_PROJECT_NAME` - Name of the project - `VK_TASK_ID` - UUID of the task - `VK_ATTEMPT_ID` - UUID of the task attempt - `VK_ATTEMPT_BRANCH` - Branch name for the attempt (e.g., `vk/branch-name`) 1. **Created `crates/executors/src/env.rs`** - New module containing `ExecutionEnv` struct with: - `HashMap<String, String>` to store env vars - `insert()` method to add env vars - `apply_to_command()` method to apply env vars to a `tokio::process::Command` 2. **Updated `crates/executors/src/actions/mod.rs`** - Modified `Executable` trait to accept `Option<&ExecutionEnv>` parameter 3. **Updated `crates/executors/src/executors/mod.rs`** - Modified `StandardCodingAgentExecutor` trait's `spawn()` and `spawn_follow_up()` to accept `Option<&ExecutionEnv>` parameter 4. **Updated all action implementations**: - `coding_agent_initial.rs` - forwards env to agent spawn - `coding_agent_follow_up.rs` - forwards env to agent spawn - `script.rs` - applies env vars to script commands 5. **Updated all executor implementations** to apply env vars: - `claude.rs`, `amp.rs`, `gemini.rs`, `codex.rs`, `qwen.rs`, `droid.rs`, `cursor.rs`, `copilot.rs`, `opencode.rs` - Also updated `acp/harness.rs` which is used by gemini and qwen 6. **Updated `crates/local-deployment/src/container.rs`** - Modified `start_execution_inner()` to: - Load task and project context from database - Create `ExecutionEnv` with the four `VK_*` variables - Pass the env to `executor_action.spawn()` * Cleanup script changes for task attempt 4dc9b26c-c637-465a-ae71-644c828ea8f7 * The refactoring is complete. I've successfully updated the `env` parameter from `Option<&ExecutionEnv>` to `&ExecutionEnv` across all relevant files: **Summary of changes:** 1. **Trait updates:** - `Executable` trait in `crates/executors/src/actions/mod.rs` - `StandardCodingAgentExecutor` trait in `crates/executors/src/executors/mod.rs` 2. **Action implementations:** - `coding_agent_initial.rs` - `coding_agent_follow_up.rs` - `script.rs` 3. **Executor implementations (all 10 executors):** - `claude.rs` - spawn, spawn_follow_up, spawn_internal - `amp.rs` - spawn, spawn_follow_up - `gemini.rs` - spawn, spawn_follow_up - `codex.rs` - spawn, spawn_follow_up, spawn_inner - `qwen.rs` - spawn, spawn_follow_up - `droid.rs` - spawn, spawn_follow_up, spawn_droid helper - `cursor.rs` - spawn, spawn_follow_up - `copilot.rs` - spawn, spawn_follow_up - `opencode.rs` - spawn, spawn_follow_up - `acp/harness.rs` - spawn_with_command, spawn_follow_up_with_command 4. **Call site:** - `container.rs` in `start_execution_inner` - changed `Some(&env)` to `&env` All tests pass (137 tests) and clippy reports no warnings. * All tests pass. The change is complete. **Summary**: Added `VK_PROJECT_ID` to the environment variables injected when launching coding agents. The environment variables now include: - `VK_PROJECT_NAME` - project name - `VK_PROJECT_ID` - project UUID (newly added) - `VK_TASK_ID` - task UUID - `VK_ATTEMPT_ID` - attempt UUID - `VK_ATTEMPT_BRANCH` - git branch name for the attempt * undo ts-rs bindings * All 147 tests pass. The rebase conflict has been resolved successfully. 
**Summary:** - Resolved the rebase conflict in `crates/local-deployment/src/container.rs` - The conflict was in the imports section where: - `origin/main` added `InterruptSender` to the executors import - Our branch added `env::ExecutionEnv` import - Resolution: Combined both changes to keep `ExecutionEnv` import AND `InterruptSender` - All tests pass (147 tests) The branch `vk/4dc9-inject-env-vars` is now rebased onto `origin/main` with the complete environment variable injection feature, including: - `VK_PROJECT_NAME` - `VK_PROJECT_ID` - `VK_TASK_ID` - `VK_ATTEMPT_ID` - `VK_ATTEMPT_BRANCH` * remove bindings (again) |
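A sketch of the `ExecutionEnv` shape described in this commit; only the names come from the summary, the bodies are assumptions:
```rust
use std::collections::HashMap;
use tokio::process::Command;

#[derive(Default)]
pub struct ExecutionEnv {
    vars: HashMap<String, String>,
}

impl ExecutionEnv {
    /// Add one variable, e.g. insert("VK_TASK_ID", task_id.to_string()).
    pub fn insert(&mut self, key: impl Into<String>, value: impl Into<String>) {
        self.vars.insert(key.into(), value.into());
    }

    /// Apply every stored variable to a command before it is spawned.
    pub fn apply_to_command(&self, cmd: &mut Command) {
        for (key, value) in &self.vars {
            cmd.env(key, value);
        }
    }
}
```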
||
|
|
9f4fabc285 |
Add agent interrupts (#1408)
* Add interrupt sender to gracefully stop claude code * Remove debug logs * Lint * interrupt agent in read loop * rm comments * Revert claude client arch change |
||
|
|
7989168e7a |
bump amp (vibe-kanban) (#1422)
* I have bumped the `amp` version to `0.0.1764705684-g95eb77` in [crates/executors/src/executors/amp.rs](file:///private/var/folders/fr/0c4ky3392mb4yz5knw_wjdd00000gn/T/vibe-kanban/worktrees/5070-bump-amp/crates/executors/src/executors/amp.rs). I verified the changes by running `cargo check --workspace`, which passed successfully. * I have updated the `amp` version to `0.0.1764777697-g907e30` in [crates/executors/src/executors/amp.rs](file:///private/var/folders/fr/0c4ky3392mb4yz5knw_wjdd00000gn/T/vibe-kanban/worktrees/5070-bump-amp/crates/executors/src/executors/amp.rs). I verified the changes by running `cargo check --workspace` (after resolving a transient `libsqlite3-sys` build issue). |
||
|
|
c92b769fe1 | chore: bump version to 0.0.127 | ||
|
|
60caf9955f |
fix: No conversation found with session ID issue (#1400)
* fix: "No conversation found with session ID" issue. Load the session id after the session is initialized. * fmt * claude: ignore session from message fragments * fix tests |
||
|
|
72f2ab1320 | chore: bump version to 0.0.126 | ||
|
|
41300de309 | chore: bump version to 0.0.125 | ||
|
|
1c380c7085 | Fix custom codex providers (#1393) | ||
|
|
c06c3a90f5 | Up codex version to 0.63.0, up codex protocol (#1382) | ||
|
|
14fe26f72d |
feat: Add setting to use Claude subscription when API key is detected (#1229)
* feat: Add setting to use Claude subscription when API key is detected This commit adds a new optional setting `use_claude_subscription` to the ClaudeCode executor configuration. When enabled, this setting removes the ANTHROPIC_API_KEY environment variable before spawning the Claude Code agent, ensuring that users with Claude Pro/Team subscriptions can opt to use their subscription instead of being charged API fees. ## Changes - Added `use_claude_subscription` optional field to the `ClaudeCode` struct - Implemented logic in `spawn_internal` to conditionally remove `ANTHROPIC_API_KEY` from the environment when the setting is enabled - Added tracing log when API key is removed for better debugging ## Implementation Details - The field is optional (`Option<bool>`) and defaults to `false` when not set, maintaining backward compatibility - Uses `#[serde(skip_serializing_if = "Option::is_none")]` to keep JSON clean - The setting is automatically exposed in the frontend via the JSON Schema auto-generation from Rust structs - TypeScript bindings are auto-generated via the `#[derive(TS)]` macro ## Benefits - Prevents unexpected API charges for users with Claude subscriptions - Gives users explicit control over authentication method - Backward compatible - existing configurations continue to work unchanged - No frontend changes needed - the setting appears automatically in the ExecutorConfigForm ## Related - Addresses feature request in discussion #1228 - Design document: https://github.com/potable-anarchy/vibe-kanban-launcher/blob/main/DESIGN_PROPOSAL.md * cleanups rename the config parameter to `disable_api_key`. regenerate type bindings. * suggest using the setting in the api-key warning --------- Co-authored-by: Solomon <abcpro11051@disroot.org> |
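The opt-out described above reduces to removing one variable from the child environment. A hedged sketch (helper name and signature are illustrative, not the verbatim implementation):
```rust
use tokio::process::Command;

/// When `disable_api_key` is set, drop ANTHROPIC_API_KEY so Claude Code
/// falls back to subscription authentication instead of API billing.
fn maybe_disable_api_key(cmd: &mut Command, disable_api_key: Option<bool>) {
    if disable_api_key.unwrap_or(false) {
        // Ensure the spawned agent never sees the inherited API key.
        cmd.env_remove("ANTHROPIC_API_KEY");
    }
}
```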
||
|
|
9dabff0752 | chore: bump version to 0.0.124 | ||
|
|
e6a55a9b6e | Bump claude code to 2.0.54 (#1377) | ||
|
|
b93cf5dacf | Parse tool_use_id in canusetool control request (#1370) | ||
|
|
17e0acc906 |
Bump amp (vibe-kanban) (#1372)
* Done. Updated Amp version to `0.0.1764081384-g1961a83` in [crates/executors/src/executors/amp.rs](file:///private/var/folders/m1/9q_ct1913z10v6wbnv54j25r0000gn/T/vibe-kanban/worktrees/ba61-bump-amp/crates/executors/src/executors/amp.rs#L36). * Updated to `0.0.1764081384-g1961a8`. |
||
|
|
43bfe63931 | chore: bump version to 0.0.123 | ||
|
|
581a4df645 |
chore: bump @anthropic-ai/claude-code to 2.0.53 (#1364)
🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-authored-by: Claude <noreply@anthropic.com> |
||
|
|
b50f9ddce3 |
Smooth codex login (#1155)
* Add codex setup helper. Pass exit result. Move codex setup to routes. Fix fmt. Fix finalize. * Rename scriptcontext (vibe-kanban 79207902): rename the gh CLI script context to something more general and use it for installs in crates/server/src/routes/task_attempts/cursor_setup.rs. Fmt. * Fix missing overrides for codex |
||
|
|
fd5ef916b0 |
Display agent availability during onboarding (vibe-kanban) (#1352)
* Perfect! All the implementation is complete. Let me summarize what was done:
I've successfully implemented agent availability checking during onboarding, mirroring the existing IDE availability functionality. Here's what was added:
1. **New API endpoint** in `crates/server/src/routes/config.rs`:
- Added `CheckAgentAvailabilityQuery` struct with `executor: BaseCodingAgent`
- Added `CheckAgentAvailabilityResponse` struct with `available: bool`
- Added `check_agent_availability()` handler that uses the existing `check_availability()` method from the executors crate
- Registered route: `/api/agents/check-availability`
2. **Type generation** in `crates/server/src/bin/generate_types.rs`:
- Added the new types to the TypeScript generation list
3. **API client** in `frontend/src/lib/api.ts`:
- Added `checkAgentAvailability()` method to `configApi`
- Imported necessary types
4. **Custom hook** in `frontend/src/hooks/useAgentAvailability.ts`:
- Created hook that checks agent availability asynchronously
- Returns states: `'checking' | 'available' | 'unavailable' | null`
5. **UI component** in `frontend/src/components/AgentAvailabilityIndicator.tsx`:
- Visual indicator showing spinner (checking), checkmark (available), or warning icon (unavailable)
- Matches the style of `EditorAvailabilityIndicator`
6. **Onboarding dialog** in `frontend/src/components/dialogs/global/OnboardingDialog.tsx`:
- Added agent availability check using the hook
- Displays the availability indicator below the agent selection dropdown
- The backend leverages existing `check_availability()` method that checks for MCP config files
- Frontend pattern exactly mirrors the IDE availability checking for consistency
- All TypeScript types are auto-generated from Rust
- Both backend (Rust) and frontend (TypeScript) checks pass successfully
The implementation is complete and ready for use!
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
Great! TypeScript compilation passes. The semantic color classes have been applied successfully. The component now uses:
- `text-success` for available agents (instead of `text-green-500`/`text-green-600`)
- `text-warning` for unavailable agents (instead of `text-orange-500`/`text-orange-600`)
This matches the semantic color conventions used throughout the codebase and will properly adapt to theme changes.
Excellent! I've implemented file modification time checking for Claude Code's credential file. Here's what the enhanced availability check does:
The `check_availability()` method for Claude Code now validates:
1. **File exists**: `~/.claude.json` must exist
2. **File is fresh**: Credentials file must have been modified within the last **60 days**
- If the file hasn't been touched in 60 days, it's considered stale and the agent is marked as unavailable
- This handles the case where credentials might have expired but the file still exists
- **OAuth refresh updates the file**: When credentials are refreshed (e.g., access token renewed), the file gets rewritten with new tokens, updating the modification time
- **Simple and fast**: Just a filesystem metadata check, no JSON parsing or API calls needed
- **Conservative threshold**: 60 days is reasonable - most OAuth refresh tokens are valid for 30-90 days, so if nothing has touched the file in 60 days, it's likely expired
- **False negatives possible**: If someone logged in 61 days ago and never used Claude Code since, it might still work (refresh token could be valid for 90 days), but we'll show it as unavailable
- **False positives possible**: File could be recently modified but tokens could be manually revoked server-side
- **Not foolproof**: The only way to be 100% certain is to make an authenticated API call
But as a heuristic for showing availability during onboarding, this is a solid approach that balances accuracy with performance.
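The heuristic is a single metadata read; a sketch built from the `std::fs::metadata()` / `modified().elapsed()` calls named above:
```rust
use std::{fs, path::Path, time::Duration};

const STALE_AFTER: Duration = Duration::from_secs(60 * 60 * 24 * 60); // 60 days

/// True if the credential file exists and was modified within 60 days.
fn credentials_look_fresh(path: &Path) -> bool {
    fs::metadata(path)
        .and_then(|meta| meta.modified())
        .ok()
        .and_then(|mtime| mtime.elapsed().ok())
        .map(|age| age < STALE_AFTER)
        .unwrap_or(false)
}
```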
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
Perfect! All checks pass. Let me summarize what was implemented:
- Enhanced `check_availability()` to check `~/.claude.json` modification time
- Marks agent as unavailable if credential file is older than 60 days
- Uses `std::fs::metadata()` and `modified().elapsed()` to compute age
- `CheckAgentAvailabilityResponse` now includes `credential_last_modified: Option<i64>`
- Calls new trait method `get_credential_last_modified()` to get Unix timestamp
- Returns `None` if file doesn't exist or metadata can't be read
- Added `get_credential_last_modified()` to `StandardCodingAgentExecutor` trait
- Gets file metadata and converts modification time to Unix timestamp
- Works for all agents automatically (uses their `default_mcp_config_path()`)
- On mount, checks availability of ALL agents in parallel
- Filters to only available agents with credentials
- Sorts by most recent `credential_last_modified` timestamp
- Auto-selects the agent with freshest credentials
- Only runs if user hasn't already configured an agent
1. User opens onboarding dialog for the first time
2. Frontend checks all agents in parallel (CLAUDE_CODE, CODEX, CURSOR_AGENT, etc.)
3. Each returns `{ available: bool, credential_last_modified: timestamp }`
4. Frontend finds agent with most recent credential file modification
5. Auto-selects that agent (likely the one they used most recently)
6. User can still manually change if they want
This provides a smart default while still allowing user choice!
Perfect! All checks pass. Let me show you the final summary:
I've successfully implemented comprehensive agent availability checking with automatic selection of the most recently used agent. Here's what was added:
**`AvailabilityInfo` struct** (`crates/executors/src/executors/mod.rs:167-206`):
- Structured type with fields: `mcp_config_found`, `auth_config_found`, `auth_last_edited`
- `is_available()` method that determines availability from the signals
- Logic: Available if MCP config exists OR auth config exists and is < 60 days old
**Trait method** (`crates/executors/src/executors/mod.rs:229-241`):
- Added `get_availability_info() -> AvailabilityInfo` to `StandardCodingAgentExecutor`
- Default implementation checks only MCP config
- Updated `check_availability()` to use `get_availability_info().is_available()`
**Claude Code** (`crates/executors/src/executors/claude.rs:200-224`):
- Auth credentials: `~/.claude.json`
- Checks file existence and modification time
- Returns AvailabilityInfo with auth signals
**Codex** (`crates/executors/src/executors/codex.rs:168-195`):
- MCP config: `~/.codex/config.toml`
- Auth credentials: `~/.codex/auth.json`
- Checks both MCP and auth configs
**Gemini** (`crates/executors/src/executors/gemini.rs:96-123`):
- MCP config: `~/.gemini/settings.json`
- Auth credentials: `~/.gemini/oauth_creds.json`
- Checks both MCP and auth configs
**Backend API** (`crates/server/src/routes/config.rs:479-498`):
- Returns `credential_last_modified` timestamp from AvailabilityInfo
- Uses `info.is_available()` for the available field
**Frontend auto-selection** (`frontend/src/components/dialogs/global/OnboardingDialog.tsx:64-112`):
- On mount, checks all agents in parallel
- Filters to available agents with credentials
- **Sorts by most recent `credential_last_modified`**
- Auto-selects the agent used most recently
This provides intelligent defaults while respecting user choice!
Perfect! All checks pass. Here's what I implemented:
**Enhanced API Response** (`crates/server/src/routes/config.rs:472-481`):
```rust
pub struct CheckAgentAvailabilityResponse {
available: bool,
mcp_config_found: bool,
auth_config_found: bool,
credential_last_modified: Option<i64>,
}
```
Now returns all the availability signals from `AvailabilityInfo`.
**OnboardingDialog** (`frontend/src/components/dialogs/global/OnboardingDialog.tsx`):
1. **State for availability map** (lines 57-59):
- Stores full availability info for all agents
- Used for intelligent sorting
2. **Priority-based sorting** (lines 187-218):
- **Tier 1**: Agents with auth config - sorted by most recent first
- **Tier 2**: Agents with MCP config only - alphabetical within tier
- **Tier 3**: Other agents - alphabetical within tier
- **Fallback**: Alphabetical if no availability info loaded yet
```typescript
// Tier 1: Both have auth - sort by recency
if (infoA.auth_config_found && infoB.auth_config_found) {
return timeB - timeA; // Most recent first
}
// Tier 2: Auth trumps MCP-only
if (infoA.auth_config_found) return -1;
if (infoB.auth_config_found) return 1;
// Tier 3: Both have MCP only - alphabetical
if (infoA.mcp_config_found && infoB.mcp_config_found) {
return a.localeCompare(b);
}
```
The dropdown will now show agents in this order:
1. **Most recently authenticated agents first** (e.g., Claude Code used yesterday)
2. **Other authenticated agents** (e.g., Gemini used last week)
3. **Agents with only MCP config** (alphabetical)
4. **Agents with no config** (alphabetical)
This ensures users see their most relevant agents at the top of the list!
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
Perfect! All checks pass. Here's the final implementation summary:
**Removed `auth_config_found` field** - simplified to just use `auth_last_edited.is_some()`:
**AvailabilityInfo** (`crates/executors/src/executors/mod.rs:167-201`):
```rust
pub struct AvailabilityInfo {
pub mcp_config_found: bool,
pub auth_last_edited: Option<i64>, // If Some(_), auth exists
}
```
**Implementations:**
- **Claude Code** (`claude.rs:200-212`): `~/.claude.json` → `auth_last_edited`
- **Codex** (`codex.rs:168-185`): MCP at `~/.codex/config.toml`, auth at `~/.codex/auth.json`
- **Gemini** (`gemini.rs:96-113`): MCP at `~/.gemini/settings.json`, auth at `~/.gemini/oauth_creds.json`
**CheckAgentAvailabilityResponse** (`config.rs:472-480`):
```rust
pub struct CheckAgentAvailabilityResponse {
available: bool,
mcp_config_found: bool,
credential_last_modified: Option<i64>, // If Some, auth file exists
}
```
**Priority-based dropdown ordering** (`OnboardingDialog.tsx:191-230`):
1. **Tier 1**: Agents with auth (`credential_last_modified !== null`) - sorted by most recent
2. **Tier 2**: Agents with MCP only - alphabetical
3. **Tier 3**: Others - alphabetical
Users will see:
- **Top of list**: Their most recently authenticated agents (e.g., "logged into Claude Code yesterday")
- **Middle**: Agents with MCP config but no recent auth
- **Bottom**: Remaining agents alphabetically
The implementation is clean, simple, and provides intelligent defaults!
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
Perfect! All checks pass. The fix is complete:
**The Problem:**
- Line 76 had: `if (!profiles || config?.executor_profile) return;`
- This meant availability was **never fetched** for users with existing config
- `agentAvailabilityMap` stayed empty, sorting defaulted to alphabetical
- Result: AMP appeared first
**The Solution:**
- Removed `config?.executor_profile` from the skip condition
- Now availability is **always fetched** to enable sorting
- Moved the `config?.executor_profile` check **inside** the function (line 116)
- Auto-selection only happens for new users, but sorting works for everyone
**Now the dropdown will show:**
1. **Agents with recent auth** - sorted by most recent first (e.g., Claude Code used yesterday)
2. **Agents with MCP config only** - alphabetical (e.g., Cursor with MCP setup)
3. **Other agents** - alphabetical
The priority-based sorting is now fully functional!
Perfect! Fixed. Both indicators now use the same semantic colors:
- `text-success` for available (instead of `text-green-500`/`text-green-600`)
- `text-warning` for unavailable (instead of `text-orange-500`/`text-orange-600`)
The colors will now match perfectly between IDE and coding agent availability indicators.
Great! TypeScript compiles successfully. The changes are complete:
**AgentAvailabilityIndicator** now has:
1. **Updated text:**
- "Available" → "Login Detected"
- "Not Found" remains "Not Found"
2. **Tooltips on hover:**
- **Login Detected**: "Recent authentication credentials found for this agent"
- **Not Found**: "No credentials detected. The agent may still work if configured differently."
3. **Visual improvements:**
- Added `cursor-help` class to show the help cursor on hover
- Wrapped with shadcn/ui `Tooltip` components
This provides clearer messaging and helpful context for users!
Perfect! The tooltips now have `className="z-[100]"` which should render them above the modal. The fix ensures the hover text is readable when shown within the onboarding dialog.
Perfect! The tooltips now use `z-[10000]` which is higher than the modal's `z-[9999]`, so they should appear on top and be readable.
Perfect! TypeScript compiles successfully. The agent availability indicator is now added to the AgentSettings page:
**AgentSettings.tsx** now has:
1. **Availability check** (line 65):
```typescript
const agentAvailability = useAgentAvailability(selectedExecutorType);
```
2. **Indicator display** (line 444):
```typescript
<AgentAvailabilityIndicator availability={agentAvailability} />
```
The indicator appears below the agent dropdown selector and shows:
- "Login Detected" with tooltip for agents with credentials
- "Not Found" with tooltip for agents without credentials
- Same styling and behavior as the onboarding dialog
This provides consistent availability feedback throughout the app!
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
Excellent! TypeScript compiles successfully. The changes are complete:
**Reverted AgentSettings.tsx:**
- Removed the availability indicator from the Agent Settings page (which is for advanced profile configuration)
**Added to GeneralSettings.tsx:**
- **Line 43-44**: Imported `useAgentAvailability` hook and `AgentAvailabilityIndicator` component
- **Line 81**: Added hook call: `const agentAvailability = useAgentAvailability(draft?.executor_profile?.executor);`
- **Line 420**: Added indicator display: `<AgentAvailabilityIndicator availability={agentAvailability} />`
The agent availability indicator now appears in the **General Settings** page under the "Task Execution" section, right below the agent dropdown selector - matching the pattern of how editor availability is shown in the same settings page. This is the correct location since it's where users configure their default agent, not the advanced agent configuration page.
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
Perfect! All checks pass. Comments removed and types regenerated successfully. The implementation is complete and clean.
Fix agent availability i18n (vibe-kanban dcd1fc06)
Run @scripts/check-i18n.sh until it passes. Make sure to check the script and set GITHUB_BASE_REF to main
Fix agent availability i18n (vibe-kanban 77795175)
Run @scripts/check-i18n.sh until it passes. Make sure to check the script and set GITHUB_BASE_REF to main
Currently `agents.availability.loginDetected` shows up raw in the frontend, so it's not using i18n correctly; it was added in the last commit, though.
Perfect! TypeScript compiles. Now the availability indicator shows three distinct states:
1. **"Login Detected"** - Has auth credentials (recent login)
2. **"Installation Found"** - Has MCP config but no auth (agent is installed)
3. **"Not Found"** - Neither auth nor MCP config found
This gives users much clearer information about what was actually detected!
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
Perfect! Now the indicator shows three distinct states:
1. **"Login Detected"** (green check) - Has recent auth credentials
- Tooltip: "Recent authentication credentials found for this agent"
2. **"Installation Found"** (green check) - Has MCP config but no auth
- Tooltip: "Agent configuration found. You may need to log in to use it."
3. **"Not Found"** (orange warning) - Neither found
- Tooltip: "No credentials detected. The agent may still work if configured differently."
This gives users accurate, specific information about what was actually detected!
Fix i18n agent availability (vibe-kanban 9eab4ea1)
Run @scripts/check-i18n.sh until it passes. Make sure to check the script and set GITHUB_BASE_REF to main
Perfect! I've fixed all the i18n keys by adding the `settings.` prefix. The translations should now work correctly and display:
- "Login Detected" instead of "agents.availability.loginDetected"
- "Installation Found" instead of "agents.availability.installationFound"
- "Not Found" instead of "agents.availability.notFound"
All the keys now match the pattern used throughout the codebase where `useTranslation('settings')` requires the full path starting with `settings.`.
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
* Perfect! All checks pass. The refactoring to use `AvailabilityInfo` directly is complete:
**Removed redundancy:**
- Deleted `CheckAgentAvailabilityResponse` struct entirely
- API now returns `AvailabilityInfo` directly
- Removed `available` field (was computed/redundant)
**Backend now returns:**
```rust
pub struct AvailabilityInfo {
mcp_config_found: bool,
auth_last_edited: Option<i64>,
}
```
**Frontend interprets it as:**
- `auth_last_edited !== null` → **"Login Detected"** (has recent auth credentials)
- `auth_last_edited === null && mcp_config_found === true` → **"Installation Found"** (has MCP config only)
- `auth_last_edited === null && mcp_config_found === false` → **"Not Found"** (nothing detected)
**Benefits:**
- Single source of truth for availability data structure
- No duplication between `AvailabilityInfo` and response type
- Frontend gets raw signals and can interpret them appropriately
- Cleaner, simpler codebase
The implementation now correctly distinguishes between login credentials and installation/configuration files!
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
Perfect! All checks pass. The `check_availability` method has been successfully removed:
**Removed `check_availability()` method:**
- Updated `profile.rs:419` to use `get_availability_info().is_available()` instead
- Removed `check_availability()` from trait definition
- Updated Cursor's custom override to use `get_availability_info()` instead
- Removed unused import from cursor.rs
**Now there's only one method:**
- `get_availability_info()` returns the raw signals
- Callers use `.is_available()` if they need a boolean
Cleaner API surface with no redundant methods!
* Perfect! The simplification is complete. Let me show you what was removed:
## Simplified Implementation
**Removed from OnboardingDialog (~90 lines):**
- ❌ `agentAvailabilityMap` state (9 lines)
- ❌ `useEffect` with parallel API calls (66 lines)
- ❌ Complex 3-tier sorting logic (41 lines)
- ❌ Auto-selection based on recent auth
- ❌ Unused imports (`useEffect`, `configApi`)
**What remains:**
- ✅ Simple alphabetical sort: `.sort()`
- ✅ Backend auto-selects first available agent (via `get_recommended_executor_profile`)
- ✅ Live indicator when user selects agent (via `useAgentAvailability` hook)
- ✅ Indicator in GeneralSettings
**The feature now:**
1. Dropdown shows agents alphabetically (AMP, CLAUDE_CODE, CODEX, ...)
2. Pre-selected agent is whatever backend chose (first available from HashMap)
3. User selects an agent → indicator shows "Login Detected" / "Installation Found" / "Not Found"
4. Same in GeneralSettings when changing default agent
**Net reduction: ~90 lines removed from OnboardingDialog, much simpler!**
* Perfect! All checks pass. The implementation is complete:
## Summary of Final Simplified Architecture
**Backend (profile.rs):**
- `get_recommended_executor_profile()` now checks ALL available agents
- Sorts by most recent `auth_last_edited` timestamp
- Returns agent with freshest credentials (e.g., logged in yesterday)
- Fallback to agents with MCP config if no auth found
**Frontend (OnboardingDialog):**
- ✅ Shows alphabetically sorted dropdown
- ✅ Pre-selects whatever backend recommended (agent with most recent credentials)
- ✅ Shows live indicator when selected: "Login Detected" / "Installation Found" / "Not Found"
- ❌ No parallel API calls on mount (removed ~90 lines)
- ❌ No complex sorting logic (removed)
- ❌ No auto-selection in frontend (backend handles it)
**Net result:**
- Smart auto-selection happens in backend (right place)
- Frontend is much simpler (just shows indicator on selection)
- Achieves the same UX goal (picks most recently used agent)
- ~90 lines removed from OnboardingDialog
Much cleaner architecture!
* Perfect! All checks pass. The 60-day cutoff has been removed:
## Summary of Changes
**Simplified `is_available()` logic** (`crates/executors/src/executors/mod.rs:175-177`):
```rust
pub fn is_available(&self) -> bool {
self.mcp_config_found || self.auth_last_edited.is_some()
}
```
**Before:**
- Auth file must exist AND be less than 60 days old
- Rejected old auth files even if tokens still valid
**After:**
- Auth file exists → available ✅
- MCP config exists → available ✅
- Simple and straightforward
**Benefits:**
- ✅ Removed 13 lines of arbitrary staleness checking
- ✅ No false negatives (61-day-old credentials shown as "Login Detected")
- ✅ Backend sorting still prioritizes recent auth (for smart auto-selection)
- ✅ If tokens are actually expired, user finds out when they try to use the agent (which is fine)
The implementation is now much cleaner - file existence is all we check, and recency is only used for **prioritization** in the backend auto-selection, not for determining availability.
* Review agent availability (vibe-kanban a75cb3d4)
Review the last 4 commits
* Move agent install info out of tooltip (vibe-kanban 481d6ed6)
When agents are not installed, users can currently only get more info by hovering and reading the tooltip; move this out of the tooltip and always display it.
* Refactor auth file path (vibe-kanban e691ad16)
In crates/executors/src/executors/mod.rs add a method similar to mcp config for auth config that executors can check in the availability check
* Refactor auth file path (vibe-kanban e691ad16)
In crates/executors/src/executors/mod.rs add a method similar to mcp config for auth config that executors can check in the availability check
* Refactor availability info (vibe-kanban 8e24bec4)
Check crates/executors/src/executors/mod.rs and refactor availability info. The frontend currently may not use the timestamp and defines its own state type. We should return the state from the backend instead
* Perfect! Now all locale files are updated with the user-centric language:
## Summary of i18n Updates
**Changed labels to user-centric language:**
- "Login Detected" → **"Recent Usage Detected"**
- "Installation Found" → **"Previous Usage Detected"**
**Updated notFound tooltip:**
- "No credentials detected. The agent may still work if configured differently."
- → **"No previous usage detected. Agent may require installation and/or login."**
**Kept technical tooltips for detected states:**
- "Recent authentication credentials found for this agent" (still accurate)
- "Agent configuration found. You may need to log in to use it." (still helpful)
This is much clearer for users - focuses on what they care about (did I use this before?) rather than technical implementation details (credentials, MCP configs, auth files).
* lint
* Perfect! All the imports are cleaned up. Now all executor implementations use the shorter `AvailabilityInfo::LoginDetected` instead of the verbose `crate::executors::AvailabilityInfo::LoginDetected`.
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
Perfect! All the imports are now cleaned up. All executor implementations use the shorter `AvailabilityInfo::LoginDetected` instead of the verbose `crate::executors::AvailabilityInfo::LoginDetected`.
Cleanup script changes for task attempt 60b68945-b59c-46e1-895d-540a910f2904
|
||
|
|
c2a10aaf72 |
fix: Copilot session resume (#1355)
GitHub Copilot CLI changed the name of session files, which broke session resume. |
||
|
|
37f8f3c74f |
Gemini fixes (#1349)
* Fix gemini yolo * limit diff unchanged lines context to 3 lines * fix shell command parsing * remove the GeminiModel struct |
||
|
|
f691fbd9cb | chore: bump version to 0.0.122 | ||
|
|
40252b6ea7 |
Bump codex (#1345)
* Done. Bumped codex from 0.58.0 to 0.60.1 in [codex.rs](file:///private/var/folders/m1/9q_ct1913z10v6wbnv54j25r0000gn/T/vibe-kanban/worktrees/8d60-bump-codex/crates/executors/src/executors/codex.rs#L171). * Added `gpt-5.1-codex-max` model variant as a new `MAX` profile in [default_profiles.json](file:///private/var/folders/m1/9q_ct1913z10v6wbnv54j25r0000gn/T/vibe-kanban/worktrees/8d60-bump-codex/crates/executors/default_profiles.json#L62-L68). |
||
|
|
83602590e9 |
Droid agent (#1318)
* droid research (vibe-kanban 054135e9)
<droid-docs>
# Overview
> Non-interactive execution mode for CI/CD pipelines and automation scripts.
# Droid Exec (Headless CLI)
Droid Exec is Factory's headless execution mode designed for automation workflows. Unlike the interactive CLI, `droid exec` runs as a one-shot command that completes a task and exits, making it ideal for CI/CD pipelines, shell scripts, and batch processing.
## Summary and goals
Droid Exec is a one-shot task runner designed to:
* Produce readable logs, and structured artifacts when requested
* Enforce opt-in for mutations/command execution (secure-by-default)
* Fail fast on permission violations with clear errors
* Support simple composition for batch and parallel work
* **Non-Interactive**: single-run execution that writes to stdout/stderr for CI/CD integration
* **Secure by Default**: read-only by default with explicit opt-in for mutations via autonomy levels
* **Composable**: designed for shell scripting, parallel execution, and pipeline integration
* **Clean Output**: structured output formats and artifacts for automated processing
## Execution model
* Non-interactive single run that writes to stdout/stderr.
* Default is spec-mode: the agent is only allowed to execute read-only operations.
* Add `--auto` to enable edits and commands; risk tiers gate what can run.
CLI help (excerpt):
```
Usage: droid exec [options] [prompt]
Execute a single command (non-interactive mode)
Arguments:
prompt The prompt to execute
Options:
-o, --output-format <format> Output format (default: "text")
-f, --file <path> Read prompt from file
--auto <level> Autonomy level: low|medium|high
--skip-permissions-unsafe Skip ALL permission checks (unsafe)
-s, --session-id <id> Existing session to continue (requires a prompt)
-m, --model <id> Model ID to use
-r, --reasoning-effort <level> Reasoning effort: off|low|medium|high
--cwd <path> Working directory path
-h, --help display help for command
```
Supported models (examples):
* gpt-5-codex (default)
* gpt-5-2025-08-07
* claude-sonnet-4-20250514
* claude-opus-4-1-20250805
## Installation
1. **Install Droid CLI**

   macOS/Linux:
   ```bash
   curl -fsSL https://app.factory.ai/cli | sh
   ```
   Windows (PowerShell):
   ```powershell
   irm https://app.factory.ai/cli/windows | iex
   ```
2. **Get a Factory API key** from the [Factory Settings Page](https://app.factory.ai/settings/api-keys)
3. **Set the environment variable**:
   ```bash
   export FACTORY_API_KEY=fk-...
   ```
## Quickstart
* Direct prompt:
* `droid exec "analyze code quality"`
* `droid exec "fix the bug in src/main.js" --auto low`
* From file:
* `droid exec -f prompt.md`
* Pipe:
* `echo "summarize repo structure" | droid exec`
* Session continuation:
* `droid exec --session-id <session-id> "continue with next steps"`
## Autonomy Levels
Droid exec uses a tiered autonomy system to control what operations the agent can perform. By default, it runs in read-only mode, requiring explicit flags to enable modifications.
### DEFAULT (no flags) - Read-only Mode
The safest mode for reviewing planned changes without execution:
* ✅ Reading files or logs: cat, less, head, tail, systemctl status
* ✅ Display commands: echo, pwd
* ✅ Information gathering: whoami, date, uname, ps, top
* ✅ Git read operations: git status, git log, git diff
* ✅ Directory listing: ls, find (without -delete or -exec)
* ❌ No modifications to files or system
* **Use case:** Safe for reviewing what changes would be made
```bash
# Analyze and plan refactoring without making changes
droid exec "Analyze the authentication system and create a detailed plan for migrating from session-based auth to OAuth2. List all files that would need changes and describe the modifications required."
# Review code quality and generate report
droid exec "Review the codebase for security vulnerabilities, performance issues, and code smells. Generate a prioritized list of improvements needed."
# Understand project structure
droid exec "Analyze the project architecture and create a dependency graph showing how modules interact with each other."
```
### `--auto low` - Low-risk Operations
Enables basic file operations while blocking system changes:
* ✅ File creation/editing in project directories
* ❌ No system modifications or package installations
* **Use case:** Documentation updates, code formatting, adding comments
```bash
# Safe file operations
droid exec --auto low "add JSDoc comments to all functions"
droid exec --auto low "fix typos in README.md"
```
### `--auto medium` - Development Operations
Permits operations that may have significant side effects, though these are typically harmless and straightforward to recover from.
Adds common development tasks on top of the low-risk operations:
* ✅ Installing packages from trusted sources: npm install, pip install (without sudo)
* ✅ Network requests to trusted endpoints: curl, wget to known APIs
* ✅ Git operations that modify local repositories: git commit, git checkout, git pull (but not git push)
* ✅ Building code with tools like make, npm run build, mvn compile
* ❌ No git push, sudo commands, or production changes
* **Use case:** Local development, testing, dependency management
```bash theme={null}
# Development tasks
droid exec --auto medium "install deps, run tests, fix issues"
droid exec --auto medium "update packages and resolve conflicts"
```
### `--auto high` - Production Operations
Enables commands that may have security implications, such as data transfers involving untrusted sources or execution of unknown code, or major side effects such as irreversible data loss or changes to production systems and deployments:
* ✅ Running arbitrary/untrusted code: curl | bash, eval, executing downloaded scripts
* ✅ Exposing ports or modifying firewall rules that could allow external access
* ✅ Git push operations that modify remote repositories: git push, git push --force
* ✅ Irreversible actions against production deployments, database migrations, and other sensitive operations
* ✅ Commands that access or modify sensitive information such as passwords or keys
* ❌ Still blocks: sudo rm -rf /, system-wide changes
* **Use case:** CI/CD pipelines, automated deployments
```bash theme={null}
# Full workflow automation
droid exec --auto high "fix bug, test, commit, and push to main"
droid exec --auto high "deploy to staging after running tests"
```
### `--skip-permissions-unsafe` - Bypass All Checks
<Warning>
DANGEROUS: This mode allows ALL operations without confirmation. Only use in completely isolated environments like Docker containers or throwaway VMs.
</Warning>
* ⚠️ Allows ALL operations without confirmation
* ⚠️ Can execute irreversible operations
* Cannot be combined with --auto flags
* **Use case:** Isolated environments
```bash theme={null}
# In a disposable Docker container for CI testing
docker run --rm -v $(pwd):/workspace alpine:latest sh -c "
apk add curl bash &&
curl -fsSL https://app.factory.ai/cli | sh &&
droid exec --skip-permissions-unsafe 'Install all system dependencies, modify system configs, run integration tests that require root access, and clean up test databases'
"
# In ephemeral GitHub Actions runner for rapid iteration
# where the runner is destroyed after each job
droid exec --skip-permissions-unsafe "Modify /etc/hosts for test domains, install custom kernel modules, run privileged container tests, and reset network interfaces"
# In a temporary VM for security testing
droid exec --skip-permissions-unsafe "Run penetration testing tools, modify firewall rules, test privilege escalation scenarios, and generate security audit reports"
```
### Fail-fast Behavior
If a requested action exceeds the current autonomy level, droid exec will:
1. Stop immediately with a clear error message
2. Return a non-zero exit code
3. Not perform any partial changes
This ensures predictable behavior in automation scripts and CI/CD pipelines.
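A minimal sketch of how a script can lean on this fail-fast contract (the prompt here is illustrative):
```bash theme={null}
# Abort the pipeline if droid exec fails or hits a permission boundary
if ! droid exec --auto low "format all TypeScript files"; then
  echo "droid exec stopped: failure or autonomy-level violation" >&2
  exit 1
fi
```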
## Output formats and artifacts
Droid exec supports three output formats for different use cases:
### text (default)
Human-readable output for direct consumption or logs:
```bash theme={null}
$ droid exec --auto low "create a python file that prints 'hello world'"
Perfect! I've created a Python file named `hello_world.py` in your home directory that prints 'hello world' when executed.
```
### json
Structured JSON output for parsing in scripts and automation:
```bash theme={null}
$ droid exec "summarize this repository" --output-format json
{
"type": "result",
"subtype": "success",
"is_error": false,
"duration_ms": 5657,
"num_turns": 1,
"result": "This is a Factory documentation repository containing guides for CLI tools, web platform features, and onboarding procedures...",
"session_id": "8af22e0a-d222-42c6-8c7e-7a059e391b0b"
}
```
Use JSON format when you need to:
* Parse the result in a script
* Check success/failure programmatically
* Extract session IDs for continuation
* Process results in a pipeline
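For example, a small wrapper might check for errors, extract the result, and reuse the session ID (a sketch assuming `jq` is installed and the JSON shape shown above):
```bash theme={null}
out=$(droid exec "summarize this repository" --output-format json)
# Fail fast if the run reported an error
[ "$(echo "$out" | jq -r '.is_error')" = "false" ] || exit 1
# Print the result, then continue the same session with a follow-up prompt
echo "$out" | jq -r '.result'
session_id=$(echo "$out" | jq -r '.session_id')
droid exec --session-id "$session_id" "expand on the repository structure"
```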
### debug
Streaming messages showing the agent's execution in real-time:
```bash theme={null}
$ droid exec "run ls command" --output-format debug
{"type":"message","role":"user","text":"run ls command"}
{"type":"message","role":"assistant","text":"I'll run the ls command to list the contents..."}
{"type":"tool_call","toolName":"Execute","parameters":{"command":"ls -la"}}
{"type":"tool_result","value":"total 16\ndrwxr-xr-x@ 8 user staff..."}
{"type":"message","role":"assistant","text":"The ls command has been executed successfully..."}
```
Debug format is useful for:
* Monitoring agent behavior
* Troubleshooting execution issues
* Understanding tool usage patterns
* Real-time progress tracking
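Because each debug line is a standalone JSON object, the stream can be filtered as it arrives (a sketch assuming `jq`; the event shapes match the sample above):
```bash theme={null}
# Watch only the tool calls the agent makes, as they happen
droid exec "run ls command" --output-format debug \
  | jq -c 'select(.type == "tool_call") | {tool: .toolName, params: .parameters}'
```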
For automated pipelines, you can also direct the agent to write specific artifacts:
```bash theme={null}
droid exec --auto low "Analyze dependencies and write to deps.json"
droid exec --auto low "Generate metrics report in CSV format to metrics.csv"
```
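A pipeline can then verify the artifact instead of trusting the run (a sketch; the file name mirrors the prompt above):
```bash theme={null}
droid exec --auto low "Analyze dependencies and write to deps.json"
# Validate that the artifact exists and is well-formed JSON
jq empty deps.json || { echo "deps.json missing or invalid" >&2; exit 1; }
```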
## Working directory
* Use `--cwd` to scope execution:
```bash theme={null}
droid exec --cwd /home/runner/work/repo "Map internal packages and dump graphviz DOT to deps.dot"
```
## Models and reasoning effort
* Choose a model with `-m` and adjust reasoning with `-r`:
```bash theme={null}
droid exec -m claude-sonnet-4-5-20250929 -r medium -f plan.md
```
## Batch and parallel patterns
Shell loops (bounded concurrency):
```bash theme={null}
# Process files in parallel (GNU xargs -P)
find src -name "*.ts" -print0 | xargs -0 -P 4 -I {} \
droid exec --auto low "Refactor file: {} to use modern TS patterns"
```
Background job parallelization:
```bash theme={null}
# Process multiple directories in parallel with job control
for path in packages/ui packages/models apps/factory-app; do
(
cd "$path" &&
droid exec --auto low "Run targeted analysis and write report.md"
) &
done
wait # Wait for all background jobs to complete
```
Chunked inputs:
```bash theme={null}
# Split large file lists into manageable chunks
git diff --name-only origin/main...HEAD | split -l 50 - /tmp/files_
for f in /tmp/files_*; do
list=$(tr '\n' ' ' < "$f")
droid exec --auto low "Review changed files: $list and write to review.json"
done
rm /tmp/files_* # Clean up temporary files
```
Workflow Automation (CI/CD):
```yaml theme={null}
# Dead code detection and cleanup suggestions
name: Code Cleanup Analysis
on:
  schedule:
    - cron: '0 1 * * 0' # Weekly on Sundays
  workflow_dispatch:
jobs:
  cleanup-analysis:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        module: ['src/components', 'src/services', 'src/utils', 'src/hooks']
    env:
      FACTORY_API_KEY: ${{ secrets.FACTORY_API_KEY }} # assumes the key is stored as a repo secret
    steps:
      - uses: actions/checkout@v4
      - run: curl -fsSL https://app.factory.ai/cli | sh # install the droid CLI on the runner
      - run: droid exec --cwd "${{ matrix.module }}" --auto low "Identify unused exports, dead code, and deprecated patterns. Generate cleanup recommendations in cleanup-report.md"
```
## Unique usage examples
License header enforcer:
```bash theme={null}
git ls-files "*.ts" | xargs -I {} \
droid exec --auto low "Ensure {} begins with the Apache-2.0 header; add it if missing"
```
API contract drift check (read-only):
```bash theme={null}
droid exec "Compare openapi.yaml operations to our TypeScript client methods and write drift.md with any mismatches"
```
Security sweep:
```bash theme={null}
droid exec --auto low "Run a quick audit for sync child_process usage and propose fixes; write findings to sec-audit.csv"
```
## Exit behavior
* 0: success
* Non-zero: failure (permission violation, tool error, unmet objective). Treat non-zero as failed in CI.
## Best practices
* Favor `--auto low`; keep mutations minimal and commit/push in scripted steps.
* Avoid `--skip-permissions-unsafe` unless fully sandboxed.
* Ask the agent to emit artifacts your pipeline can verify.
* Use `--cwd` to constrain scope in monorepos.
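Putting those practices together, one CI-friendly shape looks like this (a sketch; the path, prompt, and commit step are illustrative):
```bash theme={null}
# Constrain scope, keep autonomy low, and verify the artifact before committing
droid exec --cwd packages/ui --auto low "Fix lint errors and summarize changes in lint-fixes.md"
test -s packages/ui/lint-fixes.md || exit 1
git add -A && git commit -m "chore: automated lint fixes"
```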
</droid-docs>
Use the oracle to research how we support custom executors.
AMP and Claude Code would likely be good references here, as I believe they both operate via JSON.
Save your findings in a single markdown file.
* begin droid
* add plan
* droid implementation (vibe-kanban 90e6c8f6)
Read tasks/droid-agent/plan.md and execute the plan.
* document droid (vibe-kanban 0a7f8590)
We have introduced a new coding agent.
Installation instructions are at https://factory.ai/product/cli
We expect that users have the `droid` CLI installed and that they have logged in.
docs/supported-coding-agents.mdx
There may also be other docs or references.
* red gh action (vibe-kanban f0c8b6c4)
Run the CI checks:
cargo fmt --all -- --check
npm run generate-types:check
cargo test --workspace
cargo clippy --all --all-targets -- -D warnings
The checks step is failing; can you see what's up with the Rust codebase and resolve it?
* droid | settings bug (vibe-kanban 7deec8df)
We have a new coding agent called Droid with a variety of settings, including the autonomy level. It defaults to medium, and users can change it in Settings via a dropdown and then hit Save. This works; however, when users return to Settings, the displayed autonomy level is reset to medium rather than the saved level. Investigate why this is happening and plan how we can fix it, how we can verify the fix, whether we need to introduce some logging, and anything else to consider. Write up your plan in a new markdown file.
* glob
* tool call parsing & display (vibe-kanban e3f65a74)
droid.rs has `fn map_tool_to_action`
The problem is that we're doing a poor job of displaying these tool calls, e.g. glob. In `claude.rs`, we use `ClaudeToolData`, a struct that matches the real JSON data. If we do the same for droid, we get a type-safe way to map tool calls to the `ActionType` struct.
You can run `droid exec --output-format=stream-json --auto medium "YOUR MESSAGE HERE"` in a temporary directory to instruct the agent to generate custom outputs in case you need more sample data.
I just added glob.jsonl under droid-json, there are other json files in there too.
I recommend using sub agents as some of these files (e.g. claude.rs) are large.
cursor.rs might also be a useful reference.
You're done once we properly handle these tools.
* show droid model (vibe-kanban 8fdbc630)
The first JSON object emitted from the droid executor is a system message with a `model` field. We should capture and display this.
I believe that we're already doing something similar with Codex.
Here's a sample system message:
{"type":"system","subtype":"init","cwd":"/Users/britannio/projects/vibe-kanban","session_id":"59a75629-c0c4-451f-a3c7-8e9eab05484a","tools":["Read","LS","Execute","Edit","MultiEdit","ApplyPatch","Grep","Glob","Create","ExitSpecMode","WebSearch","TodoWrite","FetchUrl","slack_post_message"],"model":"gpt-5-codex"}
* reliable apply patch display (vibe-kanban 3710fb65)
The crates/executors/src/executors/droid.rs ApplyPatch tool call contains an `input` string, which isn't very helpful, but the tool call result is a JSON object whose `value` object has the fields success, content, diff, and file_path.
Here's a parsed example of `value`:
{
"success": true,
"content": "def bubble_sort(arr):\n \"\"\"\n Bubble Sort Algorithm\n Time Complexity: O(n^2)\n Space Complexity: O(1)\n\n Repeatedly steps through the list, compares adjacent elements and swaps them\n if they are in the wrong order.\n \"\"\"\n n = len(arr)\n arr = arr.copy() # Create a copy to avoid modifying the original\n\n for i in range(n):\n # Flag to optimize by stopping if no swaps occur\n swapped = False\n\n for j in range(0, n - i - 1):\n if arr[j] > arr[j + 1]:\n arr[j], arr[j + 1] = arr[j + 1], arr[j]\n swapped = True\n\n # If no swaps occurred, array is already sorted\n if not swapped:\n break\n\n return arr\n\n\ndef insertion_sort(arr):\n \"\"\"\n Insertion Sort Algorithm\n Time Complexity: O(n^2)\n Space Complexity: O(1)\n\n Builds the sorted portion of the array one element at a time by inserting\n each element into its correct position.\n \"\"\"\n arr = arr.copy() # Create a copy to avoid modifying the original\n\n for i in range(1, len(arr)):\n key = arr[i]\n j = i - 1\n\n while j >= 0 and arr[j] > key:\n arr[j + 1] = arr[j]\n j -= 1\n\n arr[j + 1] = key\n\n return arr\n\n\nif __name__ == \"__main__\":\n # Example usage\n test_array = [64, 34, 25, 12, 22, 11, 90]\n\n print(\"Original array:\", test_array)\n print(\"\\nBubble Sort result:\", bubble_sort(test_array))\n print(\"Insertion Sort result:\", insertion_sort(test_array))\n\n # Test with different arrays\n print(\"\\n--- Additional Tests ---\")\n test_cases = {\n \"Reverse sorted\": [5, 4, 3, 2, 1],\n \"Empty array\": [],\n \"Already sorted\": [1, 2, 3, 4, 5],\n }\n\n for description, case in test_cases.items():\n print(f\"{description} (Bubble):\", bubble_sort(case))\n print(f\"{description} (Insertion):\", insertion_sort(case))\n",
"diff": "--- previous\t\n+++ current\t\n@@ -26,14 +26,46 @@\n return arr\n \n \n+def insertion_sort(arr):\n+ \"\"\"\n+ Insertion Sort Algorithm\n+ Time Complexity: O(n^2)\n+ Space Complexity: O(1)\n+\n+ Builds the sorted portion of the array one element at a time by inserting\n+ each element into its correct position.\n+ \"\"\"\n+ arr = arr.copy() # Create a copy to avoid modifying the original\n+\n+ for i in range(1, len(arr)):\n+ key = arr[i]\n+ j = i - 1\n+\n+ while j >= 0 and arr[j] > key:\n+ arr[j + 1] = arr[j]\n+ j -= 1\n+\n+ arr[j + 1] = key\n+\n+ return arr\n+\n+\n if __name__ == \"__main__\":\n # Example usage\n test_array = [64, 34, 25, 12, 22, 11, 90]\n \n print(\"Original array:\", test_array)\n print(\"\\nBubble Sort result:\", bubble_sort(test_array))\n+ print(\"Insertion Sort result:\", insertion_sort(test_array))\n \n # Test with different arrays\n print(\"\\n--- Additional Tests ---\")\n- print(\"Reverse sorted:\", bubble_sort([5, 4, 3, 2, 1]))\n- print(\"Empty array:\", bubble_sort([]))\n+ test_cases = {\n+ \"Reverse sorted\": [5, 4, 3, 2, 1],\n+ \"Empty array\": [],\n+ \"Already sorted\": [1, 2, 3, 4, 5],\n+ }\n+\n+ for description, case in test_cases.items():\n+ print(f\"{description} (Bubble):\", bubble_sort(case))\n+ print(f\"{description} (Insertion):\", insertion_sort(case))",
"file_path": "/Users/britannio/projects/droid-simple/sorting_algorithms.py"
}
This formatting should be deterministic and thus we can use it to show more informative tool call data.
The first thing to understand is whether this will naturally fit with the current architecture, as we only reliably know how the file has changed (and what the target file was) after receiving the tool call result.
* droid failed tool call handling (vibe-kanban bd7feddb)
crates/executors/src/executors/droid.rs
droid-json/insufficient-perms.jsonl
The insufficient-perms file contains the JSON output log of a run where the agent runs a command to create a file, but the tool call fails due to a permission error.
I'd expect the failed tool result to be correlated with the tool call, so that I'd see an ARGS block and a RESULTS block within the tool call on the front-end.
Instead, I see the tool call with only the ARGS block, then a separate UI element with the JSON tool result, as if it failed to be correlated.
Firstly, I want to follow TDD by creating a failing test that confirms this behaviour. It might be hard, though, because we haven't designed the code in droid.rs with testability in mind.
Let's first analyse the code to consider whether it's already testable or if we need to do any refactoring and introduce harnesses, etc.
My perspective of the coding agent is that we send it a command and it streams JSON objects one by one, so some form of reducer pattern seems natural (previous list of JSON objects + previous state + new JSON object => new state). Either 'new state' or 'new delta'.
When we resume a session, it will emit a system message object, then a message object with role user (repeating what we sent it), then the new actions that it takes.
* droid default (vibe-kanban 2f8a19cc)
The default autonomy level is currently medium. Let's change it to the highest (unsafe).
* droid globbing rendering (vibe-kanban 76d372ea)
See droid-json/glob.jsonl
Notice the `patterns` field. Unfortunately, we seem not to be using this data, as glob tool calls are being rendered exclusively via a file name of some sort rather than `Globbing README.md, readme.md,docs/**,*.md`.
Use the oracle to investigate this.
* droid todo list text (vibe-kanban b1bdeffc)
Use the text 'TODO list updated' for the droid agent when it makes a change to the todo list.
* droid workspace path (vibe-kanban 0486b74a)
See how claude.rs uses worktree_path (from normalize_logs).
We should be doing the same for the droid executor so that the tool calls we generate have relative paths.
* mcp settings (vibe-kanban 2031d8f4)
Quick fix: Filter that agent from the dropdown in the frontend.
// In McpSettings.tsx, line 282-289
<SelectContent>
  {profiles &&
    Object.entries(profiles)
      .filter(([key]) => key !== 'DROID') // or whatever the agent name is
      .sort((a, b) => a[0].localeCompare(b[0]))
      .map(([profileKey]) => (
        <SelectItem key={profileKey} value={profileKey}>
          {profileKey}
        </SelectItem>
      ))}
</SelectContent>
we need to temporarily hide droid as it doesn't support mcp yet.
* clean up (vibe-kanban 6b1a8e2e)
remove all references to 'britannio' from the droid module.
* delete droid json
* droid agent code review (vibe-kanban 6820ffd1)
We added Droid to crates/services/src/services/config/versions/v1.rs but presumably we should've used the latest reasonable version. See what we used for Copilot.
Delete docs/adr-droid-architecture.md
Delete docs/droid-improvements-summary.md
In docs/supported-coding-agents.mdx, the default was medium; it's now skip-permissions-unsafe
Delete the tasks/ folder
* remove unnecessary v1 change
* updated droid.json schema
* tweak command
* droid model suggestions (vibe-kanban 120f87d2)
crates/executors/src/executors/droid/types.rs
Valid model IDs are:
gpt-5-codex OpenAI GPT-5-Codex (Auto)
claude-sonnet-4-5-20250929 Claude Sonnet 4.5
gpt-5-2025-08-07 OpenAI GPT-5
claude-opus-4-1-20250805 Claude Opus 4.1
claude-haiku-4-5-20251001 Claude Haiku 4.5
glm-4.6 Droid Core (GLM-4.6)
We currently mention gpt-5-codex, claude-sonnet-4
* remove dead code
* droid automated testing (vibe-kanban f836b4a4)
Let's start brainstorming this, beginning with tests in crates/executors/src/executors/droid/types.rs to ensure that we correctly generate a command
* create exec_command_with_prompt
* Add logging to error paths in action_mapper.rs (vibe-kanban 76cc5d71)
Add tracing logging (warn/error) to error paths in `crates/executors/src/executors/droid/action_mapper.rs` following existing logging patterns in the codebase.
Key locations:
- Line 32-35: DroidToolData parsing failure (currently silent)
- Any other error paths that swallow errors
Use `tracing::warn!` with structured fields for context (tool_name, error details, etc.)
* droid automated testing (DroidJSON -> NormalizedEntry) (vibe-kanban cf325d24)
We have example agent from /Users/britannio/Downloads/droid-json
Read crates/executors/src/executors/droid/events.rs
Use the oracle to plan tests that we could introduce.
* preserve timestamp
* droid reasoning effort (vibe-kanban 47dae2db)
In settings, we're showing a dropdown for the droid autonomy level. We should do the same for the reasoning level. It should default to being empty if possible.
* droid path (vibe-kanban d8370535)
Droid file edits (presumably ApplyPatch?) aren't using relative paths. E.g. i'm seeing `/private/var/folders/5q/5vgq75y92dz0k7n62z93299r0000gn/T/vibe-kanban-dev/worktrees/11dc-setup/next.config.mjs`
* fix warning
* fix warning
* whitespace update
* DomainEvent -> LogEvent
* remove msg store stream -> line converter
* normalise the diff generated when the droid ApplyPatch tool call is parsed
* refactor process_event to mutate a reference to ProcessorState
* remove EntryIndexProvider abstraction
* remove dead code
* remove JSON indirection when invoking extract_path_from_patch
* converting DroidJson -> LogEvent produces Option instead of Vec
DroidJson mapping tests removed in favour of snapshot testing
delete emit_patches (now redundant)
update match syntax in compute_updated_action_type
make process_event a member of ProcessorState
* simplify droid build_command_builder
* simplify droid types tests
* remove droid type tests
* rename events.rs -> log_event_converter.rs
rename patch_emitter -> patch_converter
remove ParsedLine indirection from processor.rs
handle Edit, MultiEdit, and Create tool calls (only used by some models like claude)
move action mapper logic to log_event_converter
introduce a claude snapshot
update snapshots
* add error log for failed parsing of DroidJson
* update snapshots
* Fix clippy warnings in droid executor
- Change &String to &str in extract_path_from_patch
- Rename to_patch to process_event for correct self convention
Amp-Thread-ID: https://ampcode.com/threads/T-81d4f5ac-6d3a-4da5-9799-de724f3df1e3
Co-authored-by: Amp <amp@ampcode.com>
* update cargo lock
* droid tool call result parsing (vibe-kanban 514d27de)
The droid executor has a regression where the `droid exec` command no longer produces an `id` field for tool_result messages. Fortunately, in most cases it's safe to fall back to FIFO behaviour: when we get a tool result, we match it with the earliest unmatched tool call. This won't always be correct, but it's a reasonable stopgap for the next few days while the droid team fixes their executor.
Start by using the oracle to trace and understand the codepaths involved, and to make a plan. We likely need to update the DroidJson struct so that the tool call result id becomes optional.
To test this, we can take an existing snapshot test, create a variant of it without ids in the tool call results, and see if we still produce equivalent log events.
* refactor: collapse nested if statements in log_event_converter
Amp-Thread-ID: https://ampcode.com/threads/T-b9ad8aac-0fd5-44c5-b2f8-317d79b623a6
Co-authored-by: Amp <amp@ampcode.com>
* format
* Cleanup droid executor implementation
* Implement session forking
* linter
---------
Co-authored-by: Britannio Jarrett <britanniojarrett@gmail.com>
Co-authored-by: Test User <test@example.com>
Co-authored-by: Amp <amp@ampcode.com>
a3c134b4a6 | Done. Version bumped to 0.0.1763625676-g928988. (#1344) |