Deployments (#414)

* init deployment

* refactor state

* pre executor app state refactor

* deployment in app state

* clone

* fix executors

* fix dependencies

* command runner via app_state

* clippy

* remove dependency on ENVIRONMENT from command_runner

* remove dependency on ENVIRONMENT from command_runner

* build fix

* clippy

* fmt

* features

* vscode lints for cloud

* change streaming to SSE (#338)

Remove debug logging

Cleanup streaming logic

feat: add helper function for creating SSE stream responses for stdout/stderr
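
A minimal sketch of what such a helper can look like with axum's SSE support (the helper name and signature here are illustrative, not necessarily the ones added in this PR):

```rust
use std::convert::Infallible;

use axum::response::sse::{Event, KeepAlive, Sse};
use futures::{Stream, StreamExt};

// Wrap a stream of stdout/stderr lines in an SSE response. Each line becomes
// one `data:` event; the keep-alive stops proxies from closing idle streams.
fn sse_from_lines(
    lines: impl Stream<Item = String> + Send + 'static,
) -> Sse<impl Stream<Item = Result<Event, Infallible>>> {
    Sse::new(lines.map(|line| Ok(Event::default().data(line))))
        .keep_alive(KeepAlive::default())
}
```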

* update vscode guidance

* move start

* Fix executors

* Move command executor to separate file

* Fix imports for executors

* Partial fix test_remote

* Fix

* fmt

* Clippy

* Add back GitHub cloud only routes

* cleanup and shared types

* Prepare for separate cloud crate

* Init backend-common workspace

* Update

* WIP

* WIP

* WIP

* WIP

* WIP

* WIP

* Projects (and sqlx)

* Tasks

* WIP

* Amp

* Backend executor structs

* Task attempts outline

* Move to crates folder

* Cleanup frontend dist

* Split out executors into separate crate

* Config and sentry

* Create deployment method helper

* Router

* Config endpoints

* Projects, analytics

* Update analytics paths when keys not provided

* Tasks, task context

* Middleware, outline task attempts

* Delete backend common

* WIP container

* WIP container

* Migrate worktree_path to container_ref (generic)

* WIP container service create

* Launch container

* Fix create task

* Create worktree

* Move logic into container

* Execution outline

* Executor selection

* Use enum_dispatch to route spawn tree
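
For reference, the enum_dispatch pattern referred to here works roughly as below; the trait and executor names are placeholders (the real trait is async and much richer):

```rust
use enum_dispatch::enum_dispatch;

#[enum_dispatch]
trait Spawnable {
    // In the real code this would build and spawn the agent's command.
    fn spawn_command(&self) -> String;
}

struct ClaudeCode;
struct Gemini;

impl Spawnable for ClaudeCode {
    fn spawn_command(&self) -> String { "claude".into() }
}
impl Spawnable for Gemini {
    fn spawn_command(&self) -> String { "gemini".into() }
}

// The generated impl forwards `spawn_command` to whichever variant is held,
// avoiding a hand-written match in the spawn path.
#[enum_dispatch(Spawnable)]
enum Executor {
    ClaudeCode(ClaudeCode),
    Gemini(Gemini),
}
```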

* Update route errors

* Implement child calling

* Move running executions to container

* Add streaming with history

* Drop cloud WIP

* Logs

* Logs

* Refactor container logic to execution tracker

* Chunk based streaming and cleanup

* Alex/migrate task templates (#350)

* Re-enable task templates; migrate routes; migrate args and return types

* Refactor task template routes; consolidate list functions into get_templates with query support

* Fix get_templates function

* Implement amp executor

* Gemini WIP

* Make streaming the event store reusable

* Rewrite mutex to rwlock

* Staging for normalised logs impl

* Store custom LogMsg instead of event as more flexible
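
A rough idea of such a message type, assuming serde-tagged variants (the exact variant set and names in the PR will differ):

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum LogMsg {
    Stdout { content: String },
    Stderr { content: String },
    JsonPatch { patch: serde_json::Value },
    SessionId { session_id: String },
    Finished,
}
```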

* Cleanup

* WIP newline stream for amp (tested and working, needs store impl)

* refactor: move stranded `git2` logic out of `models` (#352)

* remove legacy command_executor; move git2 logic into GitService

* remove legacy cloud runner

* put back config get route

* remove dead logic

* WIP amp normalisation

* Normalized logs now saved to the msg store as raw

* Refactor auth endpoints (#355)

* Re-enable auth;Change auth to use deployment

Add auth service

Move auth logic to service

Add auth router and service integration to deployment

Refactor auth service and routes to use octocrab

Refactor auth error handling and improve token validation responses

* rename auth_router to router for consistency

* refactor: rename auth_service to auth for consistency (#356)

* Refactor filesystem endpoints (#357)

* feat: implement filesystem service with directory listing and git repo detection

* refactor: update filesystem routes; sort repos by last modified

* Gemini executor logs normalization

* feat: add sound file serving endpoint and implement sound file loading (#358)

* Gemini executor followup (#360)

* Sync logs to db (#359)

* Exit monitor

* Outline stream logs to DB

* Outline read from the message store

* Add execution_process_logs, store logs in DB
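
A sketch of the DB write path this describes, assuming an `execution_process_logs` table keyed by process id (table and column names are guesses; sqlx's uuid feature is assumed):

```rust
use sqlx::SqlitePool;
use uuid::Uuid;

// Append one chunk of serialized log messages for an execution process.
async fn append_log_chunk(
    pool: &SqlitePool,
    execution_process_id: Uuid,
    jsonl: &str,
) -> Result<(), sqlx::Error> {
    sqlx::query(
        "INSERT INTO execution_process_logs (execution_process_id, logs, inserted_at) \
         VALUES ($1, $2, datetime('now'))",
    )
    .bind(execution_process_id)
    .bind(jsonl)
    .execute(pool)
    .await?;
    Ok(())
}
```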

* Stream logs from DB

* Normalized logs from DB

* Remove erroneous .sqlx cache

* Remove execution process stdout and stderr

* Update execution process record on completion

* Emit session event for amp

* Update session ID when event is emitted

* Split local/common spawn fn

* Create initial executor session

* Move normalized logs into executors

* Store executor action

* Refactor updated_at to use microseconds

* Follow up executions (#363)

* Follow up request handler scaffold
Rename coding agent initial / follow up actions

* Follow ups

* Response for follow up

* Simplify execution actions for coding agents

* fix executor selection (#362)

* refactor: move logic out of `TaskAttempt` (#361)

* re-enable /diff /pr /rebase /merge /branch-status /open-editor /delete-file endpoints

* address review comments

* remove relic

* Claude Code (#365)

* Use ApiError rather than DeploymentError type in routes (#366)

* Fix fe routes (#367)

* /api/filesystem/list -> /api/filesystem/directory

* /api/projects/:project_id/tasks -> /api/tasks

* Remove with-branch

* /api/projects/:project_id/tasks/:task_id -> /api/tasks/:task_id

* Post tasks

* Update template routes

* Update BE for github poll endpoint, FE still needs updating

* WIP freeze old types

* File picker fix

* Project types

* Solve tsc warnings

* Remove constants and FE cloud mode

* Setup for /api/info refactor

* WIP config refactor

* Remove custom mapping to coding agents

* Update settings to fix code editor

* Config fix (will need further changes once attempts types migrated)

* Tmp fix types

* Config auto deserialisation

* Alex/refactor background processes (#369)

* feat: add cleanup for orphaned executions at startup

* Fix worktree cleanup; re add worktree cleanup queries

* refactor worktree cleanup for orphaned and externally deleted worktrees
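
The "externally deleted" check amounts to comparing DB records against the filesystem; a minimal sketch (the `(id, container_ref)` pairing follows this PR's rename, everything else is illustrative):

```rust
use std::path::Path;

use uuid::Uuid;

// Attempts whose recorded worktree path no longer exists on disk are treated
// as externally deleted and become candidates for cleanup.
fn externally_deleted(attempts: &[(Uuid, String)]) -> Vec<&(Uuid, String)> {
    attempts
        .iter()
        .filter(|(_, container_ref)| !Path::new(container_ref).exists())
        .collect()
}
```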

* Fix compile error

* refactor: container creation lifecycle (#368)

* Consolidate worktree logic in the WorktreeManager

* move auxiliary logic into worktree manager

* fix compile error

* Rename core crate to server

* Fix npm run dev

* Fix fe routes 2 (#371)

* Migrate config paths

* Update sounds, refactor lib.rs

* Project FE types

* Branch

* Cleanup sound constants

* Template types

* Cleanup file search and other unused types

* Handle errors

* wip: basic mcp config editing (#351)

* Re-add notification service, move assets to common dir (#373)

add config to container, add notifications into exit monitor

Refactor notification service

Refactor notifications

* Stderr support (#372)

Refactor plain-text log processing and reuse it for gemini, stderr, and potentially other executors.

* Fix fe routes 3 (#378)

* Task attempts

* Task types

* Get single task attempt endpoint

* Task attempt response

* Branch status

* More task attempt endpoints

* Task attempt children

* Events WIP

* Stream events when task, task attempt and execution process change status

* Fixes

* Cleanup logs

* Alex/refactor pr monitor (#377)

* Refactor task status updates and add PR monitoring functionality

* Add PR monitoring service and integrate it into deployment flow

Refactor GitHub token retrieval in PR creation and monitoring services

Fix github pr regex

* Fix types

* refactor: dev server logic (#374)

* reimplement start dev server logic

* robust process group killing
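
On Unix, "robust process group killing" typically means spawning the dev server in its own process group and signalling the whole group; a sketch using std and the nix crate (the actual implementation may use different crates):

```rust
use std::{os::unix::process::CommandExt, process::{Child, Command}};

use nix::{
    sys::signal::{killpg, Signal},
    unistd::Pid,
};

fn spawn_in_own_group(cmd: &str) -> std::io::Result<Child> {
    let mut command = Command::new("sh");
    command.arg("-c").arg(cmd);
    // A fresh process group so children spawned by the script die with it.
    command.process_group(0);
    command.spawn()
}

fn kill_group(child: &Child) -> nix::Result<()> {
    // The group id equals the child's pid because of process_group(0).
    killpg(Pid::from_raw(child.id() as i32), Signal::SIGKILL)
}
```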

* Fix fe routes 4 (#383)

* Add endpoint to get execution processes

* Update types for execution process

* Further execution process type cleanup

* Wipe existing logs display

* Further process related cleanup

* Update get task attempt endpoint

* Frozen type removal

* Diff types

* Display raw logs WIP

* fix: extract session id once per execution (#386)

* Fix fe routes 5 (#387)

* Display normalized logs

* Add execution-process info endpoint

* WIP load into virtualized

* Simplified unified logs

* Raw logs also use json patch now (simplifies FE keys)
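
To illustrate the wire format: each update is an RFC 6902 patch that appends an entry to the client's log document, so the frontend no longer has to invent its own keys. A minimal example, assuming the `json_patch` crate (the actual document layout may differ):

```rust
use serde_json::json;

fn main() {
    let mut doc = json!({ "entries": [] });

    // "/entries/-" appends to the array, per RFC 6902.
    let raw = json!([
        { "op": "add", "path": "/entries/-", "value": { "type": "STDOUT", "content": "hi" } }
    ]);
    let patch: json_patch::Patch = serde_json::from_value(raw).unwrap();

    json_patch::patch(&mut doc, &patch).unwrap();
    assert_eq!(doc["entries"][0]["content"], "hi");
}
```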

* WIP

* Fix FE rendering

* Remove timestamps

* Fix conversation height

* Cleanup entry display

* Spacing

* Mark the boundaries between different execution processes in the logs

* Deduplicate entries

* Fix replace

* Fmt

* put back stop execution process endpoint (#384)

* Fix fe routes 6 (#391)

* WIP cleanup to remove related tasks and plans

* Refactor active tab

* Remove existing diff FE logic

* Rename tab

* WIP stream file events

* WIP track FS events

* Respect gitignore

* Debounced event

* Deduplicate events

* Refactor git diff

* WIP stream diffs

* Resolve issue with unstaged changes

* Diff filter by files

* Stream ongoing changes

* Remove entries when reset and json patch safe entry ids

* Update the diff tab

* Cleanup logs

* Cleanup

* Error enum

* Update create PR attempt URL

* Follow up and open in IDE

* Fix merge

* refactor: introduce `AgentProfiles` (#388)

* automatically schedule coding agent execution after setup script

* profiles implementation

* add next_action field to ExecutorAction type
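
Conceptually, next_action turns an execution into a small linked list of actions (setup script then coding agent, coding agent then cleanup, and so on). A hedged sketch of the shape; field and variant names are illustrative:

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ExecutorAction {
    pub action_type: ExecutorActionType,
    /// When set, this action is scheduled automatically once the current
    /// one finishes successfully.
    pub next_action: Option<Box<ExecutorAction>>,
}

#[derive(Debug, Clone, Serialize, Deserialize)]
pub enum ExecutorActionType {
    ScriptRequest,
    CodingAgentInitialRequest,
    CodingAgentFollowUpRequest,
}
```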

* make start_next_action generic to action type

Remove ProfilesManager and DefaultCommandBuilder structs

* store executor_action_type in the DB

* update shared types

* rename structs

* fix compile error

* Refactor remaining task routes (#389)

* Implement deletion functionality for execution processes and task attempts, including recursive deletion of associated logs.

refactor: deletion process for task attempts and associated entities

feat: Refactor task and task attempt models to remove executor field

- Removed the `executor` field from the `task_attempt` model and related queries.
- Updated the `CreateTaskAndStart` struct to encapsulate task and attempt creation.
- Modified the task creation and starting logic to accommodate the new structure.
- Adjusted SQL queries and migration scripts to reflect the removal of the executor.
- Enhanced notification service to handle executor types dynamically.
- Updated TypeScript types to align with the changes in the Rust models.

refactor: remove CreateTaskAndStart type and update related code

Add TaskAttemptWithLatestProfile and alias in frontend

Fix silent failure of sqlx builder

Remove db migration

Fix rebase errors

* Remove unneeded delete logic; move common container logic to service

* Profiles fe (#398)

* Get things compiling

* Refactor the config

* WIP fix task attempt creation

* Further config fixes

* Sounds and executors in settings

* Fix sounds

* Display profile config

* Onboarding

* Remove hardcoded agents

* Move follow up attempt params to shared

* Remove further shared types

* Remove comment (#400)

* Codex (#380)

* only trigger error message when RunReason is SetupScript (#396)

* Opencode (#385)

* Restore Gemini followups (#392)

* fix task killing (#395)

* commit changes after successful execution (#403)

* Claude-code-router (#410)

* Amp tool use (#407)

* Config upgrades (#405)

* Versioned config

* Upgrade fixes

* Save config after migration
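
The versioned-config idea in these commits can be pictured as a tagged enum with an upgrade chain that is applied on load and then written back; a simplified sketch with made-up fields:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
#[serde(tag = "config_version")]
enum VersionedConfig {
    #[serde(rename = "v1")]
    V1 { sound_alerts: bool },
    #[serde(rename = "v2")]
    V2 { sound_alerts: bool, push_notifications: bool },
}

impl VersionedConfig {
    // Upgrade any older version to the latest shape; the caller saves the
    // result so the migration only runs once.
    fn migrate(self) -> VersionedConfig {
        match self {
            VersionedConfig::V1 { sound_alerts } => VersionedConfig::V2 {
                sound_alerts,
                push_notifications: true,
            },
            latest @ VersionedConfig::V2 { .. } => latest,
        }
    }
}
```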

* Scoping

* Update Executor types

* Theme types fix

* Cleanup

* Change theme selector to an enum

* Rename config schema version field

* Diff improve (#412)

* Ensure container exists

* Safe handling when ExecutorAction isn't valid JSON in DB

* Reset data when endpoint changes

* refactor: conditional notification (#408)

* conditional notification

* fix next action run_reason

* remove redundant log

* Fix GitHub auth frontend (#404)

* fix frontend github auth

* Add GitHub error handling and update dependencies

- Introduced GitHubMagicErrorStrings enum for consistent error messaging related to GitHub authentication and permissions.
- Updated the GitHubService to include a check_token method for validating tokens.
- Refactored auth and task_attempts routes to utilize the new error handling.
- Added strum_macros dependency in Cargo.toml for enum display.

* Refactor GitHub error handling and API response structure to use CreateGitHubPRErrorData

* Refactor API response handling in CreatePRDialog and update attemptsApi to return structured results

* Refactor tasksApi.createAndStart to remove projectId parameter from API call

* use SCREAMING_SNAKE_CASE for consistency

* Refactor GitHub error handling to replace CreateGitHubPRErrorData with GitHubServiceError across the codebase

* Update crates/utils/src/response.rs

Co-authored-by: Gabriel Gordon-Hall <gabriel@bloop.ai>

* Fix compile error

* Fix types

---------

Co-authored-by: Gabriel Gordon-Hall <gabriel@bloop.ai>

* Fix: (#415)

- Config location
- Serve FE from BE in prod
- Create config when it doesn't exist
- Tmp disable building the MCP

* Fix dev server route (#417)

* remove legacy logic and unused crates (#418)

* update CLAUDE.md for new project structure (#420)

* fix mcp settings page (#419)

* Fix cards not updating (vibe-kanban) (#416)

* Commit changes from coding agent for task attempt 774a2cae-a763-4117-af0e-1287a043c462

* Commit changes from coding agent for task attempt 774a2cae-a763-4117-af0e-1287a043c462

* Commit changes from coding agent for task attempt 774a2cae-a763-4117-af0e-1287a043c462

* feat: update task status management in container service

* refactor: simplify notification logic and finalize context checks in LocalContainerService

* Task attempt fe fixes (#422)

* Style tweaks

* Refactor

* Fix auto scroll

* Implement stop endpoint for all execution processes in a task attempt

* Weird race condition with amp

* Remove log

* Fix follow ups

* Re-add stop task attempt endpoint (#421)

* Re-add stop task attempt endpoint; remove legacy comments for implemented functionality

* Fix kill race condition; fix state change when dev server

* Ci fixes (#425)

* Eslint fix

* Remove #[ts(export)]

* Fix tests

* Clippy

* Prettier

* Fmt

* Version downgrade

* Fix API response

* Don't treat clippy warnings as errors

* Change crate name

* Update cargo location

* Update further refs

* Reset versions

* Bump versions

* Update binary names

* Branch fix

* Prettier

* Ensure finished event sends data (#434)

* use option_env! when reading analytics vars (#435)
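
The difference is that `option_env!` tolerates the variable being unset at build time instead of failing the compile; a tiny example using the POSTHOG_API_KEY variable documented in CLAUDE.md:

```rust
// `env!("POSTHOG_API_KEY")` would abort the build when the key is absent;
// `option_env!` yields None so analytics can simply be disabled.
pub fn posthog_api_key() -> Option<&'static str> {
    option_env!("POSTHOG_API_KEY")
}
```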

* remove dead logic (#436)

* update crate version across workspace (#437)

* add all crates across the workspace

* chore: bump version to 0.0.56

---------

Co-authored-by: Alex Netsch <alex@bloop.ai>
Co-authored-by: Gabriel Gordon-Hall <gabriel@bloop.ai>
Co-authored-by: Solomon <abcpro11051@disroot.org>
Co-authored-by: Gabriel Gordon-Hall <ggordonhall@gmail.com>
Co-authored-by: GitHub Action <action@github.com>
Louis Knight-Webb committed 2025-08-08 13:53:27 +01:00 (committed by GitHub)
parent 0f17063c87
commit 3ed134d7d5
331 changed files with 17971 additions and 24665 deletions

View File

@@ -95,7 +95,7 @@ jobs:
npm version $new_version --no-git-tag-version --allow-same-version
cd ..
cd backend && cargo set-version "$new_version"
cargo set-version --workspace "$new_version"
echo "New version: $new_version"
echo "new_version=$new_version" >> $GITHUB_OUTPUT
@@ -105,7 +105,8 @@ jobs:
run: |
git config --local user.email "action@github.com"
git config --local user.name "GitHub Action"
git add package.json package-lock.json npx-cli/package.json backend/Cargo.toml
git add package.json package-lock.json npx-cli/package.json
git add $(find . -name Cargo.toml)
git commit -m "chore: bump version to ${{ steps.version.outputs.new_version }}"
git tag -a ${{ steps.version.outputs.new_tag }} -m "Release ${{ steps.version.outputs.new_tag }}"
git push
@@ -221,7 +222,7 @@ jobs:
- name: Build backend for target
run: |
cargo build --release --target ${{ matrix.target }} -p vibe-kanban
cargo build --release --target ${{ matrix.target }} -p server
cargo build --release --target ${{ matrix.target }} --bin mcp_task_server
env:
CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER: ${{ matrix.target == 'aarch64-unknown-linux-gnu' && 'aarch64-linux-gnu-gcc' || '' }}
@@ -245,10 +246,10 @@ jobs:
run: |
mkdir -p dist
if [[ "${{ matrix.os }}" == "windows-latest-l" ]]; then
cp target/${{ matrix.target }}/release/vibe-kanban.exe dist/vibe-kanban-${{ matrix.name }}.exe
cp target/${{ matrix.target }}/release/server.exe dist/vibe-kanban-${{ matrix.name }}.exe
cp target/${{ matrix.target }}/release/mcp_task_server.exe dist/vibe-kanban-mcp-${{ matrix.name }}.exe
else
cp target/${{ matrix.target }}/release/vibe-kanban dist/vibe-kanban-${{ matrix.name }}
cp target/${{ matrix.target }}/release/server dist/vibe-kanban-${{ matrix.name }}
cp target/${{ matrix.target }}/release/mcp_task_server dist/vibe-kanban-mcp-${{ matrix.name }}
fi
@@ -268,7 +269,7 @@ jobs:
if: runner.os == 'macOS'
uses: indygreg/apple-code-sign-action@v1
with:
input_path: target/${{ matrix.target }}/release/vibe-kanban
input_path: target/${{ matrix.target }}/release/server
output_path: vibe-kanban
p12_file: certificate.p12
p12_password: ${{ secrets.APPLE_CERTIFICATE_PASSWORD }}

View File

@@ -60,4 +60,4 @@ jobs:
cargo fmt --all -- --check
npm run generate-types:check
cargo test --workspace
cargo clippy --all --all-targets --all-features -- -D warnings
cargo clippy --all --all-targets --all-features

7
.gitignore vendored
View File

@@ -37,10 +37,6 @@ yarn-error.log*
ehthumbs.db
Thumbs.db
# Logs
*.log
logs/
# Runtime data
pids
*.pid
@@ -74,6 +70,7 @@ backend/bindings
build-npm-package-codesign.sh
npx-cli/dist
npx-cli/vibe-kanban-*
backend/db.sqlite
# Development ports file
@@ -82,3 +79,5 @@ backend/db.sqlite
dev_assets
/frontend/.env.sentry-build-plugin
.ssh
vibe-kanban-cloud/

192
CLAUDE.md
View File

@@ -2,116 +2,128 @@
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Development Commands
## Essential Commands
### Core Development
- `pnpm run dev` - Start both frontend (port 3000) and backend (port 3001) with live reload
- `pnpm run check` - Run cargo check and TypeScript type checking - **always run this before committing**
- `pnpm run generate-types` - Generate TypeScript types from Rust structs (run after modifying Rust types)
### Development
```bash
# Start development servers with hot reload (frontend + backend)
pnpm run dev
### Testing and Validation
- `pnpm run frontend:dev` - Start frontend development server only
- `pnpm run backend:dev` - Start Rust backend only
- `cargo test` - Run Rust unit tests from backend directory
- `cargo fmt` - Format Rust code
- `cargo clippy` - Run Rust linter
# Individual dev servers
npm run frontend:dev # Frontend only (port 3000)
npm run backend:dev # Backend only (port auto-assigned)
### Building
- `./build-npm-package.sh` - Build production package for distribution
- `cargo build --release` - Build optimized Rust binary
# Build production version
./build-npm-package.sh
```
### Testing & Validation
```bash
# Run all checks (frontend + backend)
npm run check
# Frontend specific
cd frontend && npm run lint # Lint TypeScript/React code
cd frontend && npm run format:check # Check formatting
cd frontend && npx tsc --noEmit # TypeScript type checking
# Backend specific
cargo test --workspace # Run all Rust tests
cargo test -p <crate_name> # Test specific crate
cargo test test_name # Run specific test
cargo fmt --all -- --check # Check Rust formatting
cargo clippy --all --all-targets --all-features -- -D warnings # Linting
# Type generation (after modifying Rust types)
npm run generate-types # Regenerate TypeScript types from Rust
npm run generate-types:check # Verify types are up to date
```
### Database Operations
```bash
# SQLx migrations
sqlx migrate run # Apply migrations
sqlx database create # Create database
# Database is auto-copied from dev_assets_seed/ on dev server start
```
## Architecture Overview
### Tech Stack
- **Backend**: Rust with Axum web framework, SQLite + SQLX, Tokio async runtime
- **Frontend**: React 18 + TypeScript, Vite, Tailwind CSS, Radix UI
- **Package Management**: pnpm workspace monorepo
- **Type Sharing**: Rust types exported to TypeScript via `ts-rs`
- **Backend**: Rust with Axum web framework, Tokio async runtime, SQLx for database
- **Frontend**: React 18 + TypeScript + Vite, Tailwind CSS, shadcn/ui components
- **Database**: SQLite with SQLx migrations
- **Type Sharing**: ts-rs generates TypeScript types from Rust structs
- **MCP Server**: Built-in Model Context Protocol server for AI agent integration
### Core Concepts
### Project Structure
```
crates/
├── server/ # Axum HTTP server, API routes, MCP server
├── db/ # Database models, migrations, SQLx queries
├── executors/ # AI coding agent integrations (Claude, Gemini, etc.)
├── services/ # Business logic, GitHub, auth, git operations
├── local-deployment/ # Local deployment logic
└── utils/ # Shared utilities
**Vibe Kanban** is an AI coding agent orchestration platform that manages multiple coding agents (Claude Code, Gemini CLI, Amp, etc.) through a unified interface.
frontend/ # React application
├── src/
│ ├── components/ # React components (TaskCard, ProjectCard, etc.)
│ ├── pages/ # Route pages
│ ├── hooks/ # Custom React hooks (useEventSourceManager, etc.)
│ └── lib/ # API client, utilities
**Project Structure**:
- `/backend/src/` - Rust backend with API endpoints, database, and agent executors
- `/frontend/src/` - React frontend with task management UI
- `/backend/migrations/` - SQLite database schema migrations
- `/shared-types/` - Generated TypeScript types from Rust structs
shared/types.ts # Auto-generated TypeScript types from Rust
```
**Executor System**: Each AI agent is implemented as an executor in `/backend/src/executors/`:
- `claude.rs` - Claude Code integration
- `gemini.rs` - Google Gemini CLI
- `amp.rs` - Amp coding agent
- `dev_server.rs` - Development server management
- `echo.rs` - Test/debug executor
### Key Architectural Patterns
**Key Backend Modules**:
- `/backend/src/api/` - REST API endpoints
- `/backend/src/db/` - Database models and queries
- `/backend/src/github/` - GitHub OAuth and API integration
- `/backend/src/git/` - Git operations and worktree management
- `/backend/src/mcp/` - Model Context Protocol server implementation
1. **Event Streaming**: Server-Sent Events (SSE) for real-time updates
- Process logs stream to frontend via `/api/events/processes/:id/logs`
- Task diffs stream via `/api/events/task-attempts/:id/diff`
### Database Schema
SQLite database with core entities:
- `projects` - Coding projects with GitHub repo integration
- `tasks` - Individual tasks assigned to executors
- `processes` - Execution processes with streaming logs
- `github_users`, `github_repos` - GitHub integration data
2. **Git Worktree Management**: Each task execution gets isolated git worktree
- Managed by `WorktreeManager` service
- Automatic cleanup of orphaned worktrees
### API Architecture
- RESTful endpoints at `/api/` prefix
- WebSocket streaming for real-time task updates at `/api/stream/:process_id`
- GitHub OAuth flow with PKCE
- MCP server exposed for external tool integration
3. **Executor Pattern**: Pluggable AI agent executors
- Each executor (Claude, Gemini, etc.) implements common interface
- Actions: `coding_agent_initial`, `coding_agent_follow_up`, `script`
## Development Guidelines
4. **MCP Integration**: Vibe Kanban acts as MCP server
- Tools: `list_projects`, `list_tasks`, `create_task`, `update_task`, etc.
- AI agents can manage tasks via MCP protocol
### Type management
- First ensure that `src/bin/generate_types.rs` is up to date with the types in the project
- **Always regenerate types after modifying Rust structs**: Run `pnpm run generate-types`
- Backend-first development: Define data structures in Rust, export to frontend
- Use `#[derive(Serialize, Deserialize, PartialEq, Debug, Clone, TS)]` for shared types
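
For illustration, a shared type following these conventions might look like this (the struct and its fields are hypothetical):

```rust
use serde::{Deserialize, Serialize};
use ts_rs::TS;

#[derive(Serialize, Deserialize, PartialEq, Debug, Clone, TS)]
pub struct TaskSummary {
    pub id: String,
    pub title: String,
    pub status: String,
}
```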
### API Patterns
### Code Style
- **Rust**: Use rustfmt, follow snake_case naming, leverage tokio for async operations
- **TypeScript**: Strict mode enabled, use `@/` path aliases for imports
- **React**: Functional components with hooks, avoid class components
- REST endpoints under `/api/*`
- Frontend dev server proxies to backend (configured in vite.config.ts)
- Authentication via GitHub OAuth (device flow)
- All database queries in `crates/db/src/models/`
### Git Integration Features
- Automatic branch creation per task
- Git worktree management for concurrent development
- GitHub PR creation and monitoring
- Commit streaming and real-time git status updates
### Development Workflow
### MCP Server Integration
Built-in MCP server provides task management tools:
- `create_task`, `update_task`, `delete_task`
- `list_tasks`, `get_task`, `list_projects`
- Requires `project_id` for most operations
1. **Backend changes first**: When modifying both frontend and backend, start with backend
2. **Type generation**: Run `npm run generate-types` after modifying Rust types
3. **Database migrations**: Create in `crates/db/migrations/`, apply with `sqlx migrate run`
4. **Component patterns**: Follow existing patterns in `frontend/src/components/`
### Process Execution
- All agent executions run as managed processes with streaming logs
- Process lifecycle: queued → running → completed/failed
- Real-time updates via WebSocket connections
- Automatic cleanup of completed processes
### Testing Strategy
### Environment Configuration
- Backend runs on port 3001, frontend proxies API calls in development
- GitHub OAuth requires `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`
- Optional PostHog analytics integration
- Rust nightly toolchain required (version 2025-05-18 or later)
- **Unit tests**: Colocated with code in each crate
- **Integration tests**: In `tests/` directory of relevant crates
- **Frontend tests**: TypeScript compilation and linting only
- **CI/CD**: GitHub Actions workflow in `.github/workflows/test.yml`
## Testing Strategy
- Run `pnpm run check` to validate both Rust and TypeScript code
- Use `cargo test` for backend unit tests
- Frontend testing focuses on component integration
- Process execution testing via echo executor
### Environment Variables
## Key Dependencies
- **axum** - Web framework and routing
- **sqlx** - Database operations with compile-time query checking
- **octocrab** - GitHub API client
- **rmcp** - MCP server implementation
- **@dnd-kit** - Drag-and-drop task management
- **react-router-dom** - Frontend routing
Build-time (set when building):
- `GITHUB_CLIENT_ID`: GitHub OAuth app ID (default: Bloop AI's app)
- `POSTHOG_API_KEY`: Analytics key (optional)
Runtime:
- `BACKEND_PORT`: Backend server port (default: auto-assign)
- `FRONTEND_PORT`: Frontend dev port (default: 3000)
- `HOST`: Backend host (default: 127.0.0.1)
- `DISABLE_WORKTREE_ORPHAN_CLEANUP`: Debug flag for worktrees

View File

@@ -1,19 +1,21 @@
[workspace]
resolver = "2"
members = ["backend"]
members = ["crates/server", "crates/db", "crates/executors", "crates/services", "crates/utils", "crates/local-deployment", "crates/deployment"]
[workspace.dependencies]
tokio = { version = "1.0", features = ["full"] }
axum = { version = "0.7", features = ["macros"] }
axum = { version = "0.8.4", features = ["macros"] }
tower-http = { version = "0.5", features = ["cors"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
anyhow = "1.0"
thiserror = "2.0.12"
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
openssl-sys = { version = "0.9", features = ["vendored"] }
ts-rs = { git = "https://github.com/xazukx/ts-rs.git", branch = "use-ts-enum", features = ["uuid-impl", "chrono-impl", "no-serde-warnings"] }
[profile.release]
debug = true
split-debuginfo = "packed"
strip = true
strip = true

View File

@@ -20,4 +20,4 @@ $Toast = [Windows.UI.Notifications.ToastNotification]::new($SerializedXml)
$Toast.Tag = $AppName
$Toast.Group = $AppName
$Notifier = [Windows.UI.Notifications.ToastNotificationManager]::CreateToastNotifier($AppName)
$Notifier.Show($Toast)
$Notifier.Show($Toast)

View File

@@ -1,12 +0,0 @@
{
"db_name": "SQLite",
"query": "UPDATE executor_sessions\n SET session_id = $1, updated_at = datetime('now')\n WHERE execution_process_id = $2",
"describe": {
"columns": [],
"parameters": {
"Right": 2
},
"nullable": []
},
"hash": "01b7e2bac1261d8be3d03c03df3e5220590da6c31c77f161074fc62752d63881"
}

View File

@@ -1,12 +0,0 @@
{
"db_name": "SQLite",
"query": "UPDATE task_attempts SET setup_completed_at = datetime('now'), updated_at = datetime('now') WHERE id = ?",
"describe": {
"columns": [],
"parameters": {
"Right": 1
},
"nullable": []
},
"hash": "1c7b06ba1e112abf6b945a2ff08a0b40ec23f3738c2e7399f067b558cf8d490e"
}

View File

@@ -1,104 +0,0 @@
{
"db_name": "SQLite",
"query": "SELECT \n id as \"id!: Uuid\", \n task_attempt_id as \"task_attempt_id!: Uuid\", \n process_type as \"process_type!: ExecutionProcessType\",\n executor_type,\n status as \"status!: ExecutionProcessStatus\",\n command, \n args, \n working_directory, \n stdout, \n stderr, \n exit_code,\n started_at as \"started_at!: DateTime<Utc>\",\n completed_at as \"completed_at?: DateTime<Utc>\",\n created_at as \"created_at!: DateTime<Utc>\", \n updated_at as \"updated_at!: DateTime<Utc>\"\n FROM execution_processes \n WHERE status = 'running' \n ORDER BY created_at ASC",
"describe": {
"columns": [
{
"name": "id!: Uuid",
"ordinal": 0,
"type_info": "Blob"
},
{
"name": "task_attempt_id!: Uuid",
"ordinal": 1,
"type_info": "Blob"
},
{
"name": "process_type!: ExecutionProcessType",
"ordinal": 2,
"type_info": "Text"
},
{
"name": "executor_type",
"ordinal": 3,
"type_info": "Text"
},
{
"name": "status!: ExecutionProcessStatus",
"ordinal": 4,
"type_info": "Text"
},
{
"name": "command",
"ordinal": 5,
"type_info": "Text"
},
{
"name": "args",
"ordinal": 6,
"type_info": "Text"
},
{
"name": "working_directory",
"ordinal": 7,
"type_info": "Text"
},
{
"name": "stdout",
"ordinal": 8,
"type_info": "Text"
},
{
"name": "stderr",
"ordinal": 9,
"type_info": "Text"
},
{
"name": "exit_code",
"ordinal": 10,
"type_info": "Integer"
},
{
"name": "started_at!: DateTime<Utc>",
"ordinal": 11,
"type_info": "Text"
},
{
"name": "completed_at?: DateTime<Utc>",
"ordinal": 12,
"type_info": "Text"
},
{
"name": "created_at!: DateTime<Utc>",
"ordinal": 13,
"type_info": "Text"
},
{
"name": "updated_at!: DateTime<Utc>",
"ordinal": 14,
"type_info": "Text"
}
],
"parameters": {
"Right": 0
},
"nullable": [
true,
false,
false,
true,
false,
false,
true,
false,
true,
true,
true,
false,
true,
false,
false
]
},
"hash": "1f619f01f46859a64ded531dd0ef61abacfe62e758abe7030a6aa745140b95ca"
}

View File

@@ -1,12 +0,0 @@
{
"db_name": "SQLite",
"query": "UPDATE task_attempts SET pr_status = $1, pr_merged_at = $2, merge_commit = $3, updated_at = datetime('now') WHERE id = $4",
"describe": {
"columns": [],
"parameters": {
"Right": 4
},
"nullable": []
},
"hash": "1fca1ce14b4b20205364cd1f1f45ebe1d2e30cd745e59e189d56487b5639dfbb"
}

View File

@@ -1,12 +0,0 @@
{
"db_name": "SQLite",
"query": "UPDATE execution_processes SET stderr = COALESCE(stderr, '') || $1, updated_at = datetime('now') WHERE id = $2",
"describe": {
"columns": [],
"parameters": {
"Right": 2
},
"nullable": []
},
"hash": "36c9e3dd10648e94b949db5c91a774ecb1e10a899ef95da74066eccedca4d8b2"
}

View File

@@ -1,104 +0,0 @@
{
"db_name": "SQLite",
"query": "SELECT \n ep.id as \"id!: Uuid\", \n ep.task_attempt_id as \"task_attempt_id!: Uuid\", \n ep.process_type as \"process_type!: ExecutionProcessType\",\n ep.executor_type,\n ep.status as \"status!: ExecutionProcessStatus\",\n ep.command, \n ep.args, \n ep.working_directory, \n ep.stdout, \n ep.stderr, \n ep.exit_code,\n ep.started_at as \"started_at!: DateTime<Utc>\",\n ep.completed_at as \"completed_at?: DateTime<Utc>\",\n ep.created_at as \"created_at!: DateTime<Utc>\", \n ep.updated_at as \"updated_at!: DateTime<Utc>\"\n FROM execution_processes ep\n JOIN task_attempts ta ON ep.task_attempt_id = ta.id\n JOIN tasks t ON ta.task_id = t.id\n WHERE ep.status = 'running' \n AND ep.process_type = 'devserver'\n AND t.project_id = $1\n ORDER BY ep.created_at ASC",
"describe": {
"columns": [
{
"name": "id!: Uuid",
"ordinal": 0,
"type_info": "Blob"
},
{
"name": "task_attempt_id!: Uuid",
"ordinal": 1,
"type_info": "Blob"
},
{
"name": "process_type!: ExecutionProcessType",
"ordinal": 2,
"type_info": "Text"
},
{
"name": "executor_type",
"ordinal": 3,
"type_info": "Text"
},
{
"name": "status!: ExecutionProcessStatus",
"ordinal": 4,
"type_info": "Text"
},
{
"name": "command",
"ordinal": 5,
"type_info": "Text"
},
{
"name": "args",
"ordinal": 6,
"type_info": "Text"
},
{
"name": "working_directory",
"ordinal": 7,
"type_info": "Text"
},
{
"name": "stdout",
"ordinal": 8,
"type_info": "Text"
},
{
"name": "stderr",
"ordinal": 9,
"type_info": "Text"
},
{
"name": "exit_code",
"ordinal": 10,
"type_info": "Integer"
},
{
"name": "started_at!: DateTime<Utc>",
"ordinal": 11,
"type_info": "Text"
},
{
"name": "completed_at?: DateTime<Utc>",
"ordinal": 12,
"type_info": "Text"
},
{
"name": "created_at!: DateTime<Utc>",
"ordinal": 13,
"type_info": "Text"
},
{
"name": "updated_at!: DateTime<Utc>",
"ordinal": 14,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
true,
false,
false,
true,
false,
false,
true,
false,
true,
true,
true,
false,
true,
false,
false
]
},
"hash": "412bacd3477d86369082e90f52240407abce436cb81292d42b2dbe1e5c18eea1"
}

View File

@@ -1,12 +0,0 @@
{
"db_name": "SQLite",
"query": "UPDATE task_attempts SET worktree_path = $1, worktree_deleted = FALSE, setup_completed_at = NULL, updated_at = datetime('now') WHERE id = $2",
"describe": {
"columns": [],
"parameters": {
"Right": 2
},
"nullable": []
},
"hash": "5b902137b11022d2e1a5c4f6a9c83fec1a856c6a710aff831abd2382ede76b43"
}

View File

@@ -1,104 +0,0 @@
{
"db_name": "SQLite",
"query": "INSERT INTO execution_processes (\n id, task_attempt_id, process_type, executor_type, status, command, args, \n working_directory, stdout, stderr, exit_code, started_at, \n completed_at, created_at, updated_at\n ) \n VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15) \n RETURNING \n id as \"id!: Uuid\", \n task_attempt_id as \"task_attempt_id!: Uuid\", \n process_type as \"process_type!: ExecutionProcessType\",\n executor_type,\n status as \"status!: ExecutionProcessStatus\",\n command, \n args, \n working_directory, \n stdout, \n stderr, \n exit_code,\n started_at as \"started_at!: DateTime<Utc>\",\n completed_at as \"completed_at?: DateTime<Utc>\",\n created_at as \"created_at!: DateTime<Utc>\", \n updated_at as \"updated_at!: DateTime<Utc>\"",
"describe": {
"columns": [
{
"name": "id!: Uuid",
"ordinal": 0,
"type_info": "Blob"
},
{
"name": "task_attempt_id!: Uuid",
"ordinal": 1,
"type_info": "Blob"
},
{
"name": "process_type!: ExecutionProcessType",
"ordinal": 2,
"type_info": "Text"
},
{
"name": "executor_type",
"ordinal": 3,
"type_info": "Text"
},
{
"name": "status!: ExecutionProcessStatus",
"ordinal": 4,
"type_info": "Text"
},
{
"name": "command",
"ordinal": 5,
"type_info": "Text"
},
{
"name": "args",
"ordinal": 6,
"type_info": "Text"
},
{
"name": "working_directory",
"ordinal": 7,
"type_info": "Text"
},
{
"name": "stdout",
"ordinal": 8,
"type_info": "Text"
},
{
"name": "stderr",
"ordinal": 9,
"type_info": "Text"
},
{
"name": "exit_code",
"ordinal": 10,
"type_info": "Integer"
},
{
"name": "started_at!: DateTime<Utc>",
"ordinal": 11,
"type_info": "Text"
},
{
"name": "completed_at?: DateTime<Utc>",
"ordinal": 12,
"type_info": "Text"
},
{
"name": "created_at!: DateTime<Utc>",
"ordinal": 13,
"type_info": "Text"
},
{
"name": "updated_at!: DateTime<Utc>",
"ordinal": 14,
"type_info": "Text"
}
],
"parameters": {
"Right": 15
},
"nullable": [
true,
false,
false,
true,
false,
false,
true,
false,
true,
true,
true,
false,
true,
false,
false
]
},
"hash": "5ed1238e52e59bb5f76c0f153fd99a14093f7ce2585bf9843585608f17ec575b"
}

View File

@@ -1,12 +0,0 @@
{
"db_name": "SQLite",
"query": "UPDATE executor_sessions \n SET summary = $1, updated_at = datetime('now') \n WHERE execution_process_id = $2",
"describe": {
"columns": [],
"parameters": {
"Right": 2
},
"nullable": []
},
"hash": "8a67b3b3337248f06a57bdf8a908f7ef23177431eaed82dc08c94c3e5944340e"
}

View File

@@ -1,104 +0,0 @@
{
"db_name": "SQLite",
"query": "SELECT \n id as \"id!: Uuid\", \n task_attempt_id as \"task_attempt_id!: Uuid\", \n process_type as \"process_type!: ExecutionProcessType\",\n executor_type,\n status as \"status!: ExecutionProcessStatus\",\n command, \n args, \n working_directory, \n stdout, \n stderr, \n exit_code,\n started_at as \"started_at!: DateTime<Utc>\",\n completed_at as \"completed_at?: DateTime<Utc>\",\n created_at as \"created_at!: DateTime<Utc>\", \n updated_at as \"updated_at!: DateTime<Utc>\"\n FROM execution_processes \n WHERE task_attempt_id = $1 \n ORDER BY created_at ASC",
"describe": {
"columns": [
{
"name": "id!: Uuid",
"ordinal": 0,
"type_info": "Blob"
},
{
"name": "task_attempt_id!: Uuid",
"ordinal": 1,
"type_info": "Blob"
},
{
"name": "process_type!: ExecutionProcessType",
"ordinal": 2,
"type_info": "Text"
},
{
"name": "executor_type",
"ordinal": 3,
"type_info": "Text"
},
{
"name": "status!: ExecutionProcessStatus",
"ordinal": 4,
"type_info": "Text"
},
{
"name": "command",
"ordinal": 5,
"type_info": "Text"
},
{
"name": "args",
"ordinal": 6,
"type_info": "Text"
},
{
"name": "working_directory",
"ordinal": 7,
"type_info": "Text"
},
{
"name": "stdout",
"ordinal": 8,
"type_info": "Text"
},
{
"name": "stderr",
"ordinal": 9,
"type_info": "Text"
},
{
"name": "exit_code",
"ordinal": 10,
"type_info": "Integer"
},
{
"name": "started_at!: DateTime<Utc>",
"ordinal": 11,
"type_info": "Text"
},
{
"name": "completed_at?: DateTime<Utc>",
"ordinal": 12,
"type_info": "Text"
},
{
"name": "created_at!: DateTime<Utc>",
"ordinal": 13,
"type_info": "Text"
},
{
"name": "updated_at!: DateTime<Utc>",
"ordinal": 14,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
true,
false,
false,
true,
false,
false,
true,
false,
true,
true,
true,
false,
true,
false,
false
]
},
"hash": "9472c8fb477958167f5fae40b85ac44252468c5226b2cdd7770f027332eed6d7"
}

View File

@@ -1,12 +0,0 @@
{
"db_name": "SQLite",
"query": "DELETE FROM tasks WHERE id = $1 AND project_id = $2",
"describe": {
"columns": [],
"parameters": {
"Right": 2
},
"nullable": []
},
"hash": "c614e6056b244ca07f1b9d44e7edc9d5819225c6f8d9e077070c6e518a17f50b"
}

View File

@@ -1,12 +0,0 @@
{
"db_name": "SQLite",
"query": "UPDATE tasks SET status = $3, updated_at = CURRENT_TIMESTAMP WHERE id = $1 AND project_id = $2",
"describe": {
"columns": [],
"parameters": {
"Right": 3
},
"nullable": []
},
"hash": "d2d0a1b985ebbca6a2b3e882a221a219f3199890fa640afc946ef1a792d6d8de"
}

View File

@@ -1,12 +0,0 @@
{
"db_name": "SQLite",
"query": "UPDATE execution_processes SET stdout = COALESCE(stdout, '') || $1, updated_at = datetime('now') WHERE id = $2",
"describe": {
"columns": [],
"parameters": {
"Right": 2
},
"nullable": []
},
"hash": "ed8456646fa69ddd412441955f06ff22bfb790f29466450735e0b8bb1bc4ec94"
}

View File

@@ -1,240 +0,0 @@
use std::{collections::HashMap, path::PathBuf, sync::Arc};
use tokio::sync::{Mutex, RwLock as TokioRwLock};
use uuid::Uuid;
use crate::{
command_runner,
models::Environment,
services::{generate_user_id, AnalyticsConfig, AnalyticsService},
};
#[derive(Debug)]
pub enum ExecutionType {
SetupScript,
CleanupScript,
CodingAgent,
DevServer,
}
#[derive(Debug)]
pub struct RunningExecution {
pub task_attempt_id: Uuid,
pub _execution_type: ExecutionType,
pub child: command_runner::CommandProcess,
}
#[derive(Debug, Clone)]
pub struct AppState {
running_executions: Arc<Mutex<HashMap<Uuid, RunningExecution>>>,
pub db_pool: sqlx::SqlitePool,
config: Arc<tokio::sync::RwLock<crate::models::config::Config>>,
pub analytics: Arc<TokioRwLock<AnalyticsService>>,
user_id: String,
pub mode: Environment,
}
impl AppState {
pub async fn new(
db_pool: sqlx::SqlitePool,
config: Arc<tokio::sync::RwLock<crate::models::config::Config>>,
mode: Environment,
) -> Self {
// Initialize analytics with user preferences
let user_enabled = {
let config_guard = config.read().await;
config_guard.analytics_enabled.unwrap_or(true)
};
let analytics_config = AnalyticsConfig::new(user_enabled);
let analytics = Arc::new(TokioRwLock::new(AnalyticsService::new(analytics_config)));
Self {
running_executions: Arc::new(Mutex::new(HashMap::new())),
db_pool,
config,
analytics,
user_id: generate_user_id(),
mode,
}
}
pub async fn update_analytics_config(&self, user_enabled: bool) {
// Check if analytics was disabled before this update
let was_analytics_disabled = {
let analytics = self.analytics.read().await;
!analytics.is_enabled()
};
let new_config = AnalyticsConfig::new(user_enabled);
let new_service = AnalyticsService::new(new_config);
let mut analytics = self.analytics.write().await;
*analytics = new_service;
// If analytics was disabled and is now enabled, fire a session_start event
if was_analytics_disabled && analytics.is_enabled() {
analytics.track_event(&self.user_id, "session_start", None);
}
}
// Running executions getters
pub async fn has_running_execution(&self, attempt_id: Uuid) -> bool {
let executions = self.running_executions.lock().await;
executions
.values()
.any(|exec| exec.task_attempt_id == attempt_id)
}
pub async fn get_running_executions_for_monitor(&self) -> Vec<(Uuid, Uuid, bool, Option<i64>)> {
let mut executions = self.running_executions.lock().await;
let mut completed_executions = Vec::new();
for (execution_id, running_exec) in executions.iter_mut() {
match running_exec.child.try_wait().await {
Ok(Some(status)) => {
let success = status.success();
let exit_code = status.code().map(|c| c as i64);
completed_executions.push((
*execution_id,
running_exec.task_attempt_id,
success,
exit_code,
));
}
Ok(None) => {
// Still running
}
Err(e) => {
tracing::error!("Error checking process status: {}", e);
completed_executions.push((
*execution_id,
running_exec.task_attempt_id,
false,
None,
));
}
}
}
// Remove completed executions from the map
for (execution_id, _, _, _) in &completed_executions {
executions.remove(execution_id);
}
completed_executions
}
// Running executions setters
pub async fn add_running_execution(&self, execution_id: Uuid, execution: RunningExecution) {
let mut executions = self.running_executions.lock().await;
executions.insert(execution_id, execution);
}
pub async fn stop_running_execution_by_id(
&self,
execution_id: Uuid,
) -> Result<bool, Box<dyn std::error::Error + Send + Sync>> {
let mut executions = self.running_executions.lock().await;
let Some(exec) = executions.get_mut(&execution_id) else {
return Ok(false);
};
// Kill the process using CommandRunner's kill method
exec.child
.kill()
.await
.map_err(|e| Box::new(e) as Box<dyn std::error::Error + Send + Sync>)?;
// only NOW remove it
executions.remove(&execution_id);
Ok(true)
}
// Config getters
pub async fn get_sound_alerts_enabled(&self) -> bool {
let config = self.config.read().await;
config.sound_alerts
}
pub async fn get_push_notifications_enabled(&self) -> bool {
let config = self.config.read().await;
config.push_notifications
}
pub async fn get_sound_file(&self) -> crate::models::config::SoundFile {
let config = self.config.read().await;
config.sound_file.clone()
}
pub fn get_config(&self) -> &Arc<tokio::sync::RwLock<crate::models::config::Config>> {
&self.config
}
pub async fn track_analytics_event(
&self,
event_name: &str,
properties: Option<serde_json::Value>,
) {
let analytics = self.analytics.read().await;
if analytics.is_enabled() {
analytics.track_event(&self.user_id, event_name, properties);
} else {
tracing::debug!("Analytics disabled, skipping event: {}", event_name);
}
}
pub async fn update_sentry_scope(&self) {
let config = self.get_config().read().await;
let username = config.github.username.clone();
let email = config.github.primary_email.clone();
drop(config);
let sentry_user = if username.is_some() || email.is_some() {
sentry::User {
id: Some(self.user_id.clone()),
username,
email,
..Default::default()
}
} else {
sentry::User {
id: Some(self.user_id.clone()),
..Default::default()
}
};
sentry::configure_scope(|scope| {
scope.set_user(Some(sentry_user));
});
}
/// Get the workspace directory path, creating it if it doesn't exist in cloud mode
pub async fn get_workspace_path(
&self,
) -> Result<PathBuf, Box<dyn std::error::Error + Send + Sync>> {
if !self.mode.is_cloud() {
return Err("Workspace directory only available in cloud mode".into());
}
let workspace_path = {
let config = self.config.read().await;
match &config.workspace_dir {
Some(dir) => PathBuf::from(dir),
None => {
// Use default workspace directory
let home_dir = dirs::home_dir().ok_or("Could not find home directory")?;
home_dir.join(".vibe-kanban").join("projects")
}
}
};
// Create the workspace directory if it doesn't exist
if !workspace_path.exists() {
std::fs::create_dir_all(&workspace_path)
.map_err(|e| format!("Failed to create workspace directory: {}", e))?;
tracing::info!("Created workspace directory: {}", workspace_path.display());
}
Ok(workspace_path)
}
}

View File

@@ -1,401 +0,0 @@
use std::{collections::HashMap, sync::Arc};
use axum::{
body::Body,
extract::{Path, State},
http::StatusCode,
response::{Json, Response},
routing::{delete, get, post},
Router,
};
use serde::Serialize;
use tokio::sync::Mutex;
use tokio_util::io::ReaderStream;
use tracing_subscriber::prelude::*;
use uuid::Uuid;
use vibe_kanban::command_runner::{CommandProcess, CommandRunner, CommandRunnerArgs};
// Structure to hold process and its streams
struct ProcessEntry {
process: CommandProcess,
// Store the actual stdout/stderr streams for direct streaming
stdout_stream: Option<Box<dyn tokio::io::AsyncRead + Unpin + Send>>,
stderr_stream: Option<Box<dyn tokio::io::AsyncRead + Unpin + Send>>,
completed: Arc<Mutex<bool>>,
}
impl std::fmt::Debug for ProcessEntry {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("ProcessEntry")
.field("process", &self.process)
.field("stdout_stream", &self.stdout_stream.is_some())
.field("stderr_stream", &self.stderr_stream.is_some())
.field("completed", &self.completed)
.finish()
}
}
// Application state to manage running processes
#[derive(Clone)]
struct AppState {
processes: Arc<Mutex<HashMap<String, ProcessEntry>>>,
}
// Response type for API responses
#[derive(Debug, Serialize)]
struct ApiResponse<T> {
success: bool,
data: Option<T>,
error: Option<String>,
}
impl<T> ApiResponse<T> {
fn success(data: T) -> Self {
Self {
success: true,
data: Some(data),
error: None,
}
}
#[allow(dead_code)]
fn error(message: String) -> Self {
Self {
success: false,
data: None,
error: Some(message),
}
}
}
// Response type for command creation
#[derive(Debug, Serialize)]
struct CreateCommandResponse {
process_id: String,
}
// Response type for process status
#[derive(Debug, Serialize)]
struct ProcessStatusResponse {
process_id: String,
running: bool,
exit_code: Option<i32>,
success: Option<bool>,
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize tracing
tracing_subscriber::registry()
.with(
tracing_subscriber::EnvFilter::try_from_default_env()
.unwrap_or_else(|_| "cloud_runner=info".into()),
)
.with(tracing_subscriber::fmt::layer())
.init();
// Create application state
let app_state = AppState {
processes: Arc::new(Mutex::new(HashMap::new())),
};
// Build router
let app = Router::new()
.route("/health", get(health_check))
.route("/commands", post(create_command))
.route("/commands/:process_id", delete(kill_command))
.route("/commands/:process_id/status", get(get_process_status))
.route("/commands/:process_id/stdout", get(get_process_stdout))
.route("/commands/:process_id/stderr", get(get_process_stderr))
.with_state(app_state);
// Get port from environment or default to 8000
let port = std::env::var("PORT").unwrap_or_else(|_| "8000".to_string());
let addr = format!("0.0.0.0:{}", port);
tracing::info!("Cloud Runner server starting on {}", addr);
// Start the server
let listener = tokio::net::TcpListener::bind(&addr).await?;
axum::serve(listener, app).await?;
Ok(())
}
// Health check endpoint
async fn health_check() -> Json<ApiResponse<String>> {
Json(ApiResponse::success("Cloud Runner is healthy".to_string()))
}
// Create and start a new command
async fn create_command(
State(state): State<AppState>,
Json(request): Json<CommandRunnerArgs>,
) -> Result<Json<ApiResponse<CreateCommandResponse>>, StatusCode> {
tracing::info!("Creating command: {} {:?}", request.command, request.args);
// Create a local command runner from the request
let runner = CommandRunner::from_args(request);
// Start the process
let mut process = match runner.start().await {
Ok(process) => process,
Err(e) => {
tracing::error!("Failed to start command: {}", e);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
// Generate unique process ID
let process_id = Uuid::new_v4().to_string();
// Create completion flag
let completed = Arc::new(Mutex::new(false));
// Get the streams from the process - we'll store them directly
let mut streams = match process.stream().await {
Ok(streams) => streams,
Err(e) => {
tracing::error!("Failed to get process streams: {}", e);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
// Extract the streams for direct use
let stdout_stream = streams.stdout.take();
let stderr_stream = streams.stderr.take();
// Spawn a task to monitor process completion
{
let process_id_for_completion = process_id.clone();
let completed_flag = completed.clone();
let processes_ref = state.processes.clone();
tokio::spawn(async move {
// Wait for the process to complete
if let Ok(mut processes) = processes_ref.try_lock() {
if let Some(entry) = processes.get_mut(&process_id_for_completion) {
let _ = entry.process.wait().await;
*completed_flag.lock().await = true;
tracing::debug!("Marked process {} as completed", process_id_for_completion);
}
}
});
}
// Create process entry
let entry = ProcessEntry {
process,
stdout_stream,
stderr_stream,
completed: completed.clone(),
};
// Store the process entry
{
let mut processes = state.processes.lock().await;
processes.insert(process_id.clone(), entry);
}
tracing::info!("Command started with process_id: {}", process_id);
Ok(Json(ApiResponse::success(CreateCommandResponse {
process_id,
})))
}
// Kill a running command
async fn kill_command(
State(state): State<AppState>,
Path(process_id): Path<String>,
) -> Result<Json<ApiResponse<String>>, StatusCode> {
tracing::info!("Killing command with process_id: {}", process_id);
let mut processes = state.processes.lock().await;
if let Some(mut entry) = processes.remove(&process_id) {
// First check if the process has already finished
match entry.process.status().await {
Ok(Some(_)) => {
// Process already finished, consider kill successful
tracing::info!(
"Process {} already completed, kill considered successful",
process_id
);
Ok(Json(ApiResponse::success(
"Process was already completed".to_string(),
)))
}
Ok(None) => {
// Process still running, attempt to kill
match entry.process.kill().await {
Ok(()) => {
tracing::info!("Successfully killed process: {}", process_id);
Ok(Json(ApiResponse::success(
"Process killed successfully".to_string(),
)))
}
Err(e) => {
tracing::error!("Failed to kill process {}: {}", process_id, e);
// Check if it's a "No such process" error (process finished during kill)
if e.to_string().contains("No such process") {
tracing::info!("Process {} finished during kill attempt", process_id);
Ok(Json(ApiResponse::success(
"Process finished during kill attempt".to_string(),
)))
} else {
Err(StatusCode::INTERNAL_SERVER_ERROR)
}
}
}
}
Err(e) => {
tracing::error!("Failed to check status for process {}: {}", process_id, e);
// Still attempt to kill
match entry.process.kill().await {
Ok(()) => {
tracing::info!("Successfully killed process: {}", process_id);
Ok(Json(ApiResponse::success(
"Process killed successfully".to_string(),
)))
}
Err(e) => {
tracing::error!("Failed to kill process {}: {}", process_id, e);
Err(StatusCode::INTERNAL_SERVER_ERROR)
}
}
}
}
} else {
tracing::warn!("Process not found: {}", process_id);
Err(StatusCode::NOT_FOUND)
}
}
// Get status of a running command
async fn get_process_status(
State(state): State<AppState>,
Path(process_id): Path<String>,
) -> Result<Json<ApiResponse<ProcessStatusResponse>>, StatusCode> {
tracing::info!("Getting status for process_id: {}", process_id);
let mut processes = state.processes.lock().await;
if let Some(entry) = processes.get_mut(&process_id) {
match entry.process.status().await {
Ok(Some(exit_status)) => {
// Process has completed
let response = ProcessStatusResponse {
process_id: process_id.clone(),
running: false,
exit_code: exit_status.code(),
success: Some(exit_status.success()),
};
Ok(Json(ApiResponse::success(response)))
}
Ok(None) => {
// Process is still running
let response = ProcessStatusResponse {
process_id: process_id.clone(),
running: true,
exit_code: None,
success: None,
};
Ok(Json(ApiResponse::success(response)))
}
Err(e) => {
tracing::error!("Failed to get status for process {}: {}", process_id, e);
Err(StatusCode::INTERNAL_SERVER_ERROR)
}
}
} else {
tracing::warn!("Process not found: {}", process_id);
Err(StatusCode::NOT_FOUND)
}
}
// Get stdout stream for a running command (direct streaming, no buffering)
async fn get_process_stdout(
State(state): State<AppState>,
Path(process_id): Path<String>,
) -> Result<Response, StatusCode> {
tracing::info!(
"Starting direct stdout stream for process_id: {}",
process_id
);
let mut processes = state.processes.lock().await;
if let Some(entry) = processes.get_mut(&process_id) {
// Take ownership of stdout directly for streaming
if let Some(stdout) = entry.stdout_stream.take() {
drop(processes); // Release the lock early
// Convert the AsyncRead (stdout) directly into an HTTP stream
let stream = ReaderStream::new(stdout);
let response = Response::builder()
.header("content-type", "application/octet-stream")
.header("cache-control", "no-cache")
.body(Body::from_stream(stream))
.map_err(|e| {
tracing::error!("Failed to build response stream: {}", e);
StatusCode::INTERNAL_SERVER_ERROR
})?;
Ok(response)
} else {
tracing::error!(
"Stdout already taken or unavailable for process {}",
process_id
);
Err(StatusCode::GONE)
}
} else {
tracing::warn!("Process not found for stdout: {}", process_id);
Err(StatusCode::NOT_FOUND)
}
}
// Get stderr stream for a running command (direct streaming, no buffering)
async fn get_process_stderr(
State(state): State<AppState>,
Path(process_id): Path<String>,
) -> Result<Response, StatusCode> {
tracing::info!(
"Starting direct stderr stream for process_id: {}",
process_id
);
let mut processes = state.processes.lock().await;
if let Some(entry) = processes.get_mut(&process_id) {
// Take ownership of stderr directly for streaming
if let Some(stderr) = entry.stderr_stream.take() {
drop(processes); // Release the lock early
// Convert the AsyncRead (stderr) directly into an HTTP stream
let stream = ReaderStream::new(stderr);
let response = Response::builder()
.header("content-type", "application/octet-stream")
.header("cache-control", "no-cache")
.body(Body::from_stream(stream))
.map_err(|e| {
tracing::error!("Failed to build response stream: {}", e);
StatusCode::INTERNAL_SERVER_ERROR
})?;
Ok(response)
} else {
tracing::error!(
"Stderr already taken or unavailable for process {}",
process_id
);
Err(StatusCode::GONE)
}
} else {
tracing::warn!("Process not found for stderr: {}", process_id);
Err(StatusCode::NOT_FOUND)
}
}

View File

@@ -1,204 +0,0 @@
use std::{env, fs, path::Path};
use ts_rs::TS;
// in [build-dependencies]
fn generate_constants() -> String {
r#"// Generated constants
export const EXECUTOR_TYPES: string[] = [
"echo",
"claude",
"claude-plan",
"amp",
"gemini",
"charm-opencode",
"claude-code-router",
"sst-opencode",
"aider",
"codex",
];
export const EDITOR_TYPES: EditorType[] = [
"vscode",
"cursor",
"windsurf",
"intellij",
"zed",
"custom"
];
export const EXECUTOR_LABELS: Record<string, string> = {
"echo": "Echo (Test Mode)",
"claude": "Claude Code",
"claude-plan": "Claude Code Plan",
"amp": "Amp",
"gemini": "Gemini",
"charm-opencode": "Charm Opencode",
"claude-code-router": "Claude Code Router",
"sst-opencode": "SST Opencode",
"aider": "Aider",
"codex": "Codex"
};
export const EDITOR_LABELS: Record<string, string> = {
"vscode": "VS Code",
"cursor": "Cursor",
"windsurf": "Windsurf",
"intellij": "IntelliJ IDEA",
"zed": "Zed",
"custom": "Custom"
};
export const MCP_SUPPORTED_EXECUTORS: string[] = [
"claude",
"amp",
"gemini",
"sst-opencode",
"charm-opencode",
"claude-code-router"
];
export const SOUND_FILES: SoundFile[] = [
"abstract-sound1",
"abstract-sound2",
"abstract-sound3",
"abstract-sound4",
"cow-mooing",
"phone-vibration",
"rooster"
];
export const SOUND_LABELS: Record<string, string> = {
"abstract-sound1": "Gentle Chime",
"abstract-sound2": "Soft Bell",
"abstract-sound3": "Digital Tone",
"abstract-sound4": "Subtle Alert",
"cow-mooing": "Cow Mooing",
"phone-vibration": "Phone Vibration",
"rooster": "Rooster Call"
};"#
.to_string()
}
fn generate_types_content() -> String {
// 4. Friendly banner
const HEADER: &str =
"// This file was generated by [ts-rs](https://github.com/Aleph-Alpha/ts-rs).\n\
// Do not edit this file manually.\n\
// Auto-generated from Rust backend types using ts-rs\n\n";
// 5. Add `export` if it's missing, then join
let decls = [
vibe_kanban::models::ApiResponse::<()>::decl(),
vibe_kanban::models::config::Config::decl(),
vibe_kanban::models::config::EnvironmentInfo::decl(),
vibe_kanban::models::config::Environment::decl(),
vibe_kanban::models::config::ThemeMode::decl(),
vibe_kanban::models::config::EditorConfig::decl(),
vibe_kanban::models::config::GitHubConfig::decl(),
vibe_kanban::models::config::EditorType::decl(),
vibe_kanban::models::config::EditorConstants::decl(),
vibe_kanban::models::config::SoundFile::decl(),
vibe_kanban::models::config::SoundConstants::decl(),
vibe_kanban::routes::config::ConfigConstants::decl(),
vibe_kanban::executor::ExecutorConfig::decl(),
vibe_kanban::executor::ExecutorConstants::decl(),
vibe_kanban::models::project::CreateProject::decl(),
vibe_kanban::models::project::CreateProjectFromGitHub::decl(),
vibe_kanban::models::project::Project::decl(),
vibe_kanban::models::project::ProjectWithBranch::decl(),
vibe_kanban::models::project::UpdateProject::decl(),
vibe_kanban::models::project::SearchResult::decl(),
vibe_kanban::models::project::SearchMatchType::decl(),
vibe_kanban::models::project::GitBranch::decl(),
vibe_kanban::models::project::CreateBranch::decl(),
vibe_kanban::models::task::CreateTask::decl(),
vibe_kanban::models::task::CreateTaskAndStart::decl(),
vibe_kanban::models::task::TaskStatus::decl(),
vibe_kanban::models::task::Task::decl(),
vibe_kanban::models::task::TaskWithAttemptStatus::decl(),
vibe_kanban::models::task::UpdateTask::decl(),
vibe_kanban::models::task_template::TaskTemplate::decl(),
vibe_kanban::models::task_template::CreateTaskTemplate::decl(),
vibe_kanban::models::task_template::UpdateTaskTemplate::decl(),
vibe_kanban::models::task_attempt::TaskAttemptStatus::decl(),
vibe_kanban::models::task_attempt::TaskAttempt::decl(),
vibe_kanban::models::task_attempt::CreateTaskAttempt::decl(),
vibe_kanban::models::task_attempt::UpdateTaskAttempt::decl(),
vibe_kanban::models::task_attempt::CreateFollowUpAttempt::decl(),
vibe_kanban::routes::filesystem::DirectoryEntry::decl(),
vibe_kanban::routes::filesystem::DirectoryListResponse::decl(),
vibe_kanban::routes::auth::DeviceStartResponse::decl(),
vibe_kanban::services::github_service::RepositoryInfo::decl(),
vibe_kanban::routes::task_attempts::ProcessLogsResponse::decl(),
vibe_kanban::models::task_attempt::DiffChunkType::decl(),
vibe_kanban::models::task_attempt::DiffChunk::decl(),
vibe_kanban::models::task_attempt::FileDiff::decl(),
vibe_kanban::models::task_attempt::WorktreeDiff::decl(),
vibe_kanban::models::task_attempt::BranchStatus::decl(),
vibe_kanban::models::task_attempt::ExecutionState::decl(),
vibe_kanban::models::task_attempt::TaskAttemptState::decl(),
vibe_kanban::models::execution_process::ExecutionProcess::decl(),
vibe_kanban::models::execution_process::ExecutionProcessSummary::decl(),
vibe_kanban::models::execution_process::ExecutionProcessStatus::decl(),
vibe_kanban::models::execution_process::ExecutionProcessType::decl(),
vibe_kanban::models::execution_process::CreateExecutionProcess::decl(),
vibe_kanban::models::execution_process::UpdateExecutionProcess::decl(),
vibe_kanban::models::executor_session::ExecutorSession::decl(),
vibe_kanban::models::executor_session::CreateExecutorSession::decl(),
vibe_kanban::models::executor_session::UpdateExecutorSession::decl(),
vibe_kanban::executor::NormalizedConversation::decl(),
vibe_kanban::executor::NormalizedEntry::decl(),
vibe_kanban::executor::NormalizedEntryType::decl(),
vibe_kanban::executor::ActionType::decl(),
];
let body = decls
.into_iter()
.map(|d| {
let trimmed = d.trim_start();
if trimmed.starts_with("export") {
d
} else {
format!("export {trimmed}")
}
})
.collect::<Vec<_>>()
.join("\n\n");
let constants = generate_constants();
format!("{HEADER}{body}\n\n{constants}")
}
fn main() {
let args: Vec<String> = env::args().collect();
let check_mode = args.iter().any(|arg| arg == "--check");
// 1. Make sure ../shared exists
let shared_path = Path::new("../shared");
fs::create_dir_all(shared_path).expect("cannot create ../shared");
println!("Generating TypeScript types…");
// 2. Let ts-rs write its per-type files here (handy for debugging)
env::set_var("TS_RS_EXPORT_DIR", shared_path.to_str().unwrap());
let generated = generate_types_content();
let types_path = shared_path.join("types.ts");
if check_mode {
// Read the current file
let current = fs::read_to_string(&types_path).unwrap_or_default();
if current == generated {
println!("✅ shared/types.ts is up to date.");
std::process::exit(0);
} else {
eprintln!("❌ shared/types.ts is not up to date. Please run 'npm run generate-types' and commit the changes.");
std::process::exit(1);
}
} else {
// Write the file as before
fs::write(&types_path, generated).expect("unable to write types.ts");
println!("✅ TypeScript types generated in ../shared/");
}
}
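
Each entry in the decls array above is a ts-rs declaration string. A small, self-contained sketch of what decl() produces for a hypothetical struct (the real types live in the vibe_kanban crate):

use ts_rs::TS;

// Hypothetical type, only to illustrate the shape of a ts-rs declaration
#[derive(TS)]
struct ExampleTask {
    id: String,
    done: bool,
}

fn main() {
    // Prints something like: type ExampleTask = { id: string, done: boolean, }
    // generate_types_content() prefixes such declarations with `export` before
    // concatenating them into shared/types.ts.
    println!("{}", ExampleTask::decl());
}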

View File

@@ -1,659 +0,0 @@
use std::env;
use vibe_kanban::command_runner::CommandRunner;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Set up remote execution
env::set_var("CLOUD_EXECUTION", "1");
env::set_var("CLOUD_SERVER_URL", "http://localhost:8000");
println!("🚀 Testing remote CommandRunner...");
// Test 1: Simple echo command
println!("\n📝 Test 1: Echo command");
let mut runner = CommandRunner::new();
let mut process = runner
.command("echo")
.arg("Hello from remote!")
.start()
.await?;
println!("✅ Successfully started remote echo command!");
// Kill it (though echo probably finished already)
match process.kill().await {
Ok(()) => println!("✅ Successfully killed echo process"),
Err(e) => println!("⚠️ Kill failed (probably already finished): {}", e),
}
// Test 2: Long-running command
println!("\n⏰ Test 2: Sleep command (5 seconds)");
let mut runner2 = CommandRunner::new();
let mut process2 = runner2.command("sleep").arg("5").start().await?;
println!("✅ Successfully started remote sleep command!");
// Wait a bit then kill it
tokio::time::sleep(tokio::time::Duration::from_secs(1)).await;
process2.kill().await?;
println!("✅ Successfully killed sleep process!");
// Test 3: Command with environment variables
println!("\n🌍 Test 3: Environment variables");
let mut runner3 = CommandRunner::new();
let mut process3 = runner3
.command("printenv")
.arg("TEST_VAR")
.env("TEST_VAR", "remote_test_value")
.start()
.await?;
println!("✅ Successfully started remote printenv command!");
process3.kill().await.ok(); // Don't fail if already finished
// Test 4: Working directory
println!("\n📁 Test 4: Working directory");
let mut runner4 = CommandRunner::new();
let mut process4 = runner4.command("pwd").working_dir("/tmp").start().await?;
println!("✅ Successfully started remote pwd command!");
process4.kill().await.ok(); // Don't fail if already finished
// Test 5: Process Status Checking (TDD - These will FAIL initially)
println!("\n📊 Test 5: Process Status Checking (TDD)");
// Test 5a: Status of running process
let mut runner5a = CommandRunner::new();
let mut process5a = runner5a.command("sleep").arg("3").start().await?;
println!("✅ Started sleep process for status testing");
// This should return None (still running)
match process5a.status().await {
Ok(None) => println!("✅ Status correctly shows process still running"),
Ok(Some(status)) => println!(
"⚠️ Process finished unexpectedly with status: {:?}",
status
),
Err(e) => println!("❌ Status check failed (expected for now): {}", e),
}
// Test try_wait (non-blocking)
match process5a.try_wait().await {
Ok(None) => println!("✅ try_wait correctly shows process still running"),
Ok(Some(status)) => println!(
"⚠️ Process finished unexpectedly with status: {:?}",
status
),
Err(e) => println!("❌ try_wait failed (expected for now): {}", e),
}
// Kill the process to test status of completed process
process5a.kill().await.ok();
// Test 5b: Status of completed process
let mut runner5b = CommandRunner::new();
let mut process5b = runner5b.command("echo").arg("status test").start().await?;
println!("✅ Started echo process for completion status testing");
// Wait for process to complete
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
match process5b.status().await {
Ok(Some(status)) => {
println!(
"✅ Status correctly shows completed process: success={}, code={:?}",
status.success(),
status.code()
);
}
Ok(None) => println!("⚠️ Process still running (might need more time)"),
Err(e) => println!("❌ Status check failed (expected for now): {}", e),
}
// Test 5c: Wait for process completion
let mut runner5c = CommandRunner::new();
let mut process5c = runner5c.command("echo").arg("wait test").start().await?;
println!("✅ Started echo process for wait testing");
match process5c.wait().await {
Ok(status) => {
println!(
"✅ Wait completed successfully: success={}, code={:?}",
status.success(),
status.code()
);
}
Err(e) => println!("❌ Wait failed (expected for now): {}", e),
}
// Test 6: Output Streaming (TDD - These will FAIL initially)
println!("\n🌊 Test 6: Output Streaming (TDD)");
// Test 6a: Stdout streaming
let mut runner6a = CommandRunner::new();
let mut process6a = runner6a
.command("echo")
.arg("Hello stdout streaming!")
.start()
.await?;
println!("✅ Started echo process for stdout streaming test");
// Give the server a moment to capture output from fast commands like echo
tokio::time::sleep(tokio::time::Duration::from_millis(200)).await;
match process6a.stream().await {
Ok(mut stream) => {
println!("✅ Got streams from process");
if let Some(stdout) = &mut stream.stdout {
use tokio::io::AsyncReadExt;
let mut buffer = Vec::new();
match stdout.read_to_end(&mut buffer).await {
Ok(bytes_read) => {
let output = String::from_utf8_lossy(&buffer);
if bytes_read > 0 && output.contains("Hello stdout streaming") {
println!("✅ Successfully read stdout: '{}'", output.trim());
} else if bytes_read == 0 {
println!(
"❌ No stdout data received (expected for now - empty streams)"
);
} else {
println!("⚠️ Unexpected stdout content: '{}'", output);
}
}
Err(e) => println!("❌ Failed to read stdout: {}", e),
}
} else {
println!("❌ No stdout stream available (expected for now)");
}
}
Err(e) => println!("❌ Failed to get streams: {}", e),
}
// Test 6b: Stderr streaming
let mut runner6b = CommandRunner::new();
let mut process6b = runner6b
.command("bash")
.arg("-c")
.arg("echo 'Error message' >&2")
.start()
.await?;
println!("✅ Started bash process for stderr streaming test");
// Give the server a moment to capture output from fast commands
tokio::time::sleep(tokio::time::Duration::from_millis(200)).await;
match process6b.stream().await {
Ok(mut stream) => {
if let Some(stderr) = &mut stream.stderr {
use tokio::io::AsyncReadExt;
let mut buffer = Vec::new();
match stderr.read_to_end(&mut buffer).await {
Ok(bytes_read) => {
let output = String::from_utf8_lossy(&buffer);
if bytes_read > 0 && output.contains("Error message") {
println!("✅ Successfully read stderr: '{}'", output.trim());
} else if bytes_read == 0 {
println!(
"❌ No stderr data received (expected for now - empty streams)"
);
} else {
println!("⚠️ Unexpected stderr content: '{}'", output);
}
}
Err(e) => println!("❌ Failed to read stderr: {}", e),
}
} else {
println!("❌ No stderr stream available (expected for now)");
}
}
Err(e) => println!("❌ Failed to get streams: {}", e),
}
// Test 6c: Streaming from long-running process
let mut runner6c = CommandRunner::new();
let mut process6c = runner6c
.command("bash")
.arg("-c")
.arg("for i in {1..3}; do echo \"Line $i\"; sleep 0.1; done")
.start()
.await?;
println!("✅ Started bash process for streaming test");
// Give the server a moment to capture output from the command
tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;
match process6c.stream().await {
Ok(mut stream) => {
if let Some(stdout) = &mut stream.stdout {
use tokio::io::AsyncReadExt;
let mut buffer = [0u8; 1024];
// Try to read some data (this tests real-time streaming)
match tokio::time::timeout(
tokio::time::Duration::from_secs(2),
stdout.read(&mut buffer),
)
.await
{
Ok(Ok(bytes_read)) => {
let output = String::from_utf8_lossy(&buffer[..bytes_read]);
if bytes_read > 0 {
println!("✅ Successfully streamed output: '{}'", output.trim());
} else {
println!("❌ No streaming data received (expected for now)");
}
}
Ok(Err(e)) => println!("❌ Stream read error: {}", e),
Err(_) => {
println!("❌ Stream read timeout (expected for now - no real streaming)")
}
}
} else {
println!("❌ No stdout stream available for streaming test");
}
}
Err(e) => println!("❌ Failed to get streams for streaming test: {}", e),
}
// Clean up
process6c.kill().await.ok();
// Test 7: Server Status API Endpoint (TDD - These will FAIL initially)
println!("\n🔍 Test 7: Server Status API Endpoint (TDD)");
// Create a process first
let client = reqwest::Client::new();
let command_request = serde_json::json!({
"command": "sleep",
"args": ["5"],
"working_dir": null,
"env_vars": [],
"stdin": null
});
let response = client
.post("http://localhost:8000/commands")
.json(&command_request)
.send()
.await?;
if response.status().is_success() {
let body: serde_json::Value = response.json().await?;
if let Some(process_id) = body["data"]["process_id"].as_str() {
println!("✅ Created process for status API test: {}", process_id);
// Test 7a: GET /commands/{id}/status for running process
let status_url = format!("http://localhost:8000/commands/{}/status", process_id);
match client.get(&status_url).send().await {
Ok(response) => {
if response.status().is_success() {
match response.json::<serde_json::Value>().await {
Ok(status_body) => {
println!("✅ Got status response: {}", status_body);
// Check expected structure
if let Some(data) = status_body.get("data") {
if let Some(running) =
data.get("running").and_then(|v| v.as_bool())
{
if running {
println!(
"✅ Status correctly shows process is running"
);
} else {
println!("⚠️ Process already finished");
}
} else {
println!("❌ Missing 'running' field in status response");
}
} else {
println!("❌ Missing 'data' field in status response");
}
}
Err(e) => println!("❌ Failed to parse status JSON: {}", e),
}
} else {
println!(
"❌ Status API returned error: {} (expected for now)",
response.status()
);
}
}
Err(e) => println!("❌ Status API request failed (expected for now): {}", e),
}
// Kill the process
let _ = client
.delete(format!("http://localhost:8000/commands/{}", process_id))
.send()
.await;
}
}
// Test 7b: Status of completed process
let quick_command = serde_json::json!({
"command": "echo",
"args": ["quick command"],
"working_dir": null,
"env_vars": [],
"stdin": null
});
let response = client
.post("http://localhost:8000/commands")
.json(&quick_command)
.send()
.await?;
if response.status().is_success() {
let body: serde_json::Value = response.json().await?;
if let Some(process_id) = body["data"]["process_id"].as_str() {
println!(
"✅ Created quick process for completed status test: {}",
process_id
);
// Wait for it to complete
tokio::time::sleep(tokio::time::Duration::from_millis(200)).await;
let status_url = format!("http://localhost:8000/commands/{}/status", process_id);
match client.get(&status_url).send().await {
Ok(response) => {
if response.status().is_success() {
match response.json::<serde_json::Value>().await {
Ok(status_body) => {
println!("✅ Got completed status response: {}", status_body);
if let Some(data) = status_body.get("data") {
if let Some(exit_code) = data.get("exit_code") {
println!("✅ Status includes exit code: {}", exit_code);
}
if let Some(success) = data.get("success") {
println!("✅ Status includes success flag: {}", success);
}
}
}
Err(e) => println!("❌ Failed to parse completed status JSON: {}", e),
}
} else {
println!(
"❌ Completed status API returned error: {}",
response.status()
);
}
}
Err(e) => println!("❌ Completed status API request failed: {}", e),
}
}
}
// Test 7c: Status of non-existent process (error handling)
let fake_id = "non-existent-process-id";
let status_url = format!("http://localhost:8000/commands/{}/status", fake_id);
match client.get(&status_url).send().await {
Ok(response) => {
if response.status() == reqwest::StatusCode::NOT_FOUND {
println!("✅ Status API correctly returns 404 for non-existent process");
} else {
println!(
"❌ Status API should return 404 for non-existent process, got: {}",
response.status()
);
}
}
Err(e) => println!("❌ Error testing non-existent process status: {}", e),
}
// Test 8: Server Streaming API Endpoint (TDD - These will FAIL initially)
println!("\n📡 Test 8: Server Streaming API Endpoint (TDD)");
// Create a process that generates output
let stream_command = serde_json::json!({
"command": "bash",
"args": ["-c", "for i in {1..3}; do echo \"Stream line $i\"; sleep 0.1; done"],
"working_dir": null,
"env_vars": [],
"stdin": null
});
let response = client
.post("http://localhost:8000/commands")
.json(&stream_command)
.send()
.await?;
if response.status().is_success() {
let body: serde_json::Value = response.json().await?;
if let Some(process_id) = body["data"]["process_id"].as_str() {
println!("✅ Created streaming process: {}", process_id);
// Test 8a: GET /commands/{id}/stream endpoint
let stream_url = format!("http://localhost:8000/commands/{}/stream", process_id);
match client.get(&stream_url).send().await {
Ok(response) => {
if response.status().is_success() {
println!("✅ Stream endpoint accessible");
if let Some(content_type) = response.headers().get("content-type") {
println!("✅ Content-Type: {:?}", content_type);
}
// Try to read the response body
match response.text().await {
Ok(text) => {
if !text.is_empty() {
println!("✅ Received streaming data: '{}'", text.trim());
} else {
println!("❌ No streaming data received (expected for now)");
}
}
Err(e) => println!("❌ Failed to read stream response: {}", e),
}
} else {
println!(
"❌ Stream endpoint returned error: {} (expected for now)",
response.status()
);
}
}
Err(e) => println!("❌ Stream API request failed (expected for now): {}", e),
}
// Clean up
let _ = client
.delete(format!("http://localhost:8000/commands/{}", process_id))
.send()
.await;
}
}
// Test 8b: Streaming from non-existent process
let fake_stream_url = format!("http://localhost:8000/commands/{}/stream", "fake-id");
match client.get(&fake_stream_url).send().await {
Ok(response) => {
if response.status() == reqwest::StatusCode::NOT_FOUND {
println!("✅ Stream API correctly returns 404 for non-existent process");
} else {
println!(
"❌ Stream API should return 404 for non-existent process, got: {}",
response.status()
);
}
}
Err(e) => println!("❌ Error testing non-existent process stream: {}", e),
}
// Test 9: True Chunk-Based Streaming Verification (Fixed)
println!("\n🌊 Test 9: True Chunk-Based Streaming Verification");
// Create a longer-running process to avoid timing issues
let stream_command = serde_json::json!({
"command": "bash",
"args": ["-c", "for i in {1..6}; do echo \"Chunk $i at $(date +%H:%M:%S.%3N)\"; sleep 0.5; done"],
"working_dir": null,
"env_vars": [],
"stdin": null
});
let response = client
.post("http://localhost:8000/commands")
.json(&stream_command)
.send()
.await?;
if response.status().is_success() {
let body: serde_json::Value = response.json().await?;
if let Some(process_id) = body["data"]["process_id"].as_str() {
println!(
"✅ Created streaming process: {} (will run ~3 seconds)",
process_id
);
// Test chunk-based streaming with the /stream endpoint
let stream_url = format!("http://localhost:8000/commands/{}/stream", process_id);
// Small delay to let the process start generating output
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
let stream_response = client.get(&stream_url).send().await;
match stream_response {
Ok(response) => {
if response.status().is_success() {
println!("✅ Stream endpoint accessible");
let start_time = std::time::Instant::now();
println!("🔍 Reading streaming response:");
// Try to read the response in chunks using a simpler approach
let bytes = match tokio::time::timeout(
tokio::time::Duration::from_secs(4),
response.bytes(),
)
.await
{
Ok(Ok(bytes)) => bytes,
Ok(Err(e)) => {
println!(" ❌ Failed to read response: {}", e);
return Ok(());
}
Err(_) => {
println!(" ❌ Response read timeout");
return Ok(());
}
};
let response_text = String::from_utf8_lossy(&bytes);
let lines: Vec<&str> =
response_text.lines().filter(|l| !l.is_empty()).collect();
println!("📊 Response analysis:");
println!(" Total response size: {} bytes", bytes.len());
println!(" Number of lines: {}", lines.len());
println!(
" Read duration: {:.1}s",
start_time.elapsed().as_secs_f32()
);
if !lines.is_empty() {
println!(" Lines received:");
for (i, line) in lines.iter().enumerate() {
println!(" {}: '{}'", i + 1, line);
}
}
// The key insight: if we got multiple lines with different timestamps,
// it proves they were generated over time, even if delivered in one HTTP response
if lines.len() > 1 {
// Check if timestamps show progression
let first_line = lines[0];
let last_line = lines[lines.len() - 1];
if first_line != last_line {
println!("✅ STREAMING VERIFIED: {} lines with different content/timestamps!", lines.len());
println!(
" This proves the server captured streaming output over time"
);
if lines.len() >= 3 {
println!(" First: '{}'", first_line);
println!(" Last: '{}'", last_line);
}
} else {
println!(
"⚠️ Multiple identical lines - may indicate buffering issue"
);
}
} else if lines.len() == 1 {
println!("⚠️ Only 1 line received: '{}'", lines[0]);
println!(
" This suggests the process finished too quickly or timing issue"
);
} else {
println!("❌ No output lines received");
}
} else {
println!("❌ Stream endpoint error: {}", response.status());
}
}
Err(e) => println!("❌ Stream request failed: {}", e),
}
// Wait for process to complete, then verify final output
tokio::time::sleep(tokio::time::Duration::from_millis(500)).await;
println!("\n🔍 Verification: Testing completed process output:");
let stdout_url = format!("http://localhost:8000/commands/{}/stdout", process_id);
match client.get(&stdout_url).send().await {
Ok(response) if response.status().is_success() => {
if let Ok(text) = response.text().await {
let final_lines: Vec<&str> =
text.lines().filter(|l| !l.is_empty()).collect();
println!(
"✅ Final stdout: {} lines, {} bytes",
final_lines.len(),
text.len()
);
if final_lines.len() >= 6 {
println!(
"✅ Process completed successfully - all expected output captured"
);
} else {
println!(
"⚠️ Expected 6 lines, got {} - process may have been interrupted",
final_lines.len()
);
}
}
}
_ => println!("⚠️ Final stdout check failed"),
}
// Clean up
let _ = client
.delete(format!("http://localhost:8000/commands/{}", process_id))
.send()
.await;
}
}
println!("\n🎉 All TDD tests completed!");
println!("💡 Expected failures show what needs to be implemented:");
println!(" 📊 Remote status/wait methods");
println!(" 🌊 Real output streaming");
println!(" 🔍 GET /commands/:id/status endpoint");
println!(" 📡 GET /commands/:id/stream endpoint");
println!("🔧 Time to make the tests pass! 🚀");
Ok(())
}

View File

@@ -1,291 +0,0 @@
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use tokio::io::AsyncRead;
use crate::models::Environment;
mod local;
mod remote;
pub use local::LocalCommandExecutor;
pub use remote::RemoteCommandExecutor;
// Core trait that defines the interface for command execution
#[async_trait]
pub trait CommandExecutor: Send + Sync {
/// Start a process and return a handle to it
async fn start(
&self,
request: &CommandRunnerArgs,
) -> Result<Box<dyn ProcessHandle>, CommandError>;
}
// Trait for managing running processes
#[async_trait]
pub trait ProcessHandle: Send + Sync {
/// Check if the process is still running, return exit status if finished
async fn try_wait(&mut self) -> Result<Option<CommandExitStatus>, CommandError>;
/// Wait for the process to complete and return exit status
async fn wait(&mut self) -> Result<CommandExitStatus, CommandError>;
/// Kill the process
async fn kill(&mut self) -> Result<(), CommandError>;
/// Get streams for stdout and stderr
async fn stream(&mut self) -> Result<CommandStream, CommandError>;
/// Get process identifier (for debugging/logging)
fn process_id(&self) -> String;
/// Check current status (alias for try_wait for backward compatibility)
async fn status(&mut self) -> Result<Option<CommandExitStatus>, CommandError> {
self.try_wait().await
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CommandRunnerArgs {
pub command: String,
pub args: Vec<String>,
pub working_dir: Option<String>,
pub env_vars: Vec<(String, String)>,
pub stdin: Option<String>,
}
pub struct CommandRunner {
executor: Box<dyn CommandExecutor>,
command: Option<String>,
args: Vec<String>,
working_dir: Option<String>,
env_vars: Vec<(String, String)>,
stdin: Option<String>,
}
impl Default for CommandRunner {
fn default() -> Self {
Self::new()
}
}
pub struct CommandProcess {
handle: Box<dyn ProcessHandle>,
}
impl std::fmt::Debug for CommandProcess {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("CommandProcess")
.field("process_id", &self.handle.process_id())
.finish()
}
}
#[derive(Debug)]
pub enum CommandError {
SpawnFailed {
command: String,
error: std::io::Error,
},
StatusCheckFailed {
error: std::io::Error,
},
KillFailed {
error: std::io::Error,
},
ProcessNotStarted,
NoCommandSet,
IoError {
error: std::io::Error,
},
}
impl From<std::io::Error> for CommandError {
fn from(error: std::io::Error) -> Self {
CommandError::IoError { error }
}
}
impl std::fmt::Display for CommandError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
CommandError::SpawnFailed { command, error } => {
write!(f, "Failed to spawn command '{}': {}", command, error)
}
CommandError::StatusCheckFailed { error } => {
write!(f, "Failed to check command status: {}", error)
}
CommandError::KillFailed { error } => {
write!(f, "Failed to kill command: {}", error)
}
CommandError::ProcessNotStarted => {
write!(f, "Process has not been started yet")
}
CommandError::NoCommandSet => {
write!(f, "No command has been set")
}
CommandError::IoError { error } => {
write!(f, "Failed to spawn command: {}", error)
}
}
}
}
impl std::error::Error for CommandError {}
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct CommandExitStatus {
/// Exit code (0 for success on most platforms)
code: Option<i32>,
/// Whether the process exited successfully
success: bool,
/// Unix signal that terminated the process (Unix only)
#[cfg(unix)]
signal: Option<i32>,
/// Optional remote process identifier for cloud execution
remote_process_id: Option<String>,
/// Optional session identifier for remote execution tracking
remote_session_id: Option<String>,
}
impl CommandExitStatus {
/// Returns true if the process exited successfully
pub fn success(&self) -> bool {
self.success
}
/// Returns the exit code of the process, if available
pub fn code(&self) -> Option<i32> {
self.code
}
}
pub struct CommandStream {
pub stdout: Option<Box<dyn AsyncRead + Unpin + Send>>,
pub stderr: Option<Box<dyn AsyncRead + Unpin + Send>>,
}
impl CommandRunner {
pub fn new() -> Self {
let env = std::env::var("ENVIRONMENT").unwrap_or_else(|_| "local".to_string());
let mode = env.parse().unwrap_or(Environment::Local);
match mode {
Environment::Cloud => CommandRunner {
executor: Box::new(RemoteCommandExecutor::new()),
command: None,
args: Vec::new(),
working_dir: None,
env_vars: Vec::new(),
stdin: None,
},
Environment::Local => CommandRunner {
executor: Box::new(LocalCommandExecutor::new()),
command: None,
args: Vec::new(),
working_dir: None,
env_vars: Vec::new(),
stdin: None,
},
}
}
pub fn command(&mut self, cmd: &str) -> &mut Self {
self.command = Some(cmd.to_string());
self
}
pub fn get_program(&self) -> &str {
self.command.as_deref().unwrap_or("")
}
pub fn get_args(&self) -> &[String] {
&self.args
}
pub fn get_current_dir(&self) -> Option<&str> {
self.working_dir.as_deref()
}
pub fn arg(&mut self, arg: &str) -> &mut Self {
self.args.push(arg.to_string());
self
}
pub fn stdin(&mut self, prompt: &str) -> &mut Self {
self.stdin = Some(prompt.to_string());
self
}
pub fn working_dir(&mut self, dir: &str) -> &mut Self {
self.working_dir = Some(dir.to_string());
self
}
pub fn env(&mut self, key: &str, val: &str) -> &mut Self {
self.env_vars.push((key.to_string(), val.to_string()));
self
}
/// Convert the current CommandRunner state into CommandRunnerArgs
pub fn to_args(&self) -> Option<CommandRunnerArgs> {
Some(CommandRunnerArgs {
command: self.command.clone()?,
args: self.args.clone(),
working_dir: self.working_dir.clone(),
env_vars: self.env_vars.clone(),
stdin: self.stdin.clone(),
})
}
/// Create a CommandRunner from CommandRunnerArgs, respecting the environment
#[allow(dead_code)]
pub fn from_args(request: CommandRunnerArgs) -> Self {
let mut runner = Self::new();
runner.command(&request.command);
for arg in &request.args {
runner.arg(arg);
}
if let Some(dir) = &request.working_dir {
runner.working_dir(dir);
}
for (key, value) in &request.env_vars {
runner.env(key, value);
}
if let Some(stdin) = &request.stdin {
runner.stdin(stdin);
}
runner
}
pub async fn start(&self) -> Result<CommandProcess, CommandError> {
let request = self.to_args().ok_or(CommandError::NoCommandSet)?;
let handle = self.executor.start(&request).await?;
Ok(CommandProcess { handle })
}
}
impl CommandProcess {
#[allow(dead_code)]
pub async fn status(&mut self) -> Result<Option<CommandExitStatus>, CommandError> {
self.handle.status().await
}
pub async fn try_wait(&mut self) -> Result<Option<CommandExitStatus>, CommandError> {
self.handle.try_wait().await
}
pub async fn kill(&mut self) -> Result<(), CommandError> {
self.handle.kill().await
}
pub async fn stream(&mut self) -> Result<CommandStream, CommandError> {
self.handle.stream().await
}
#[allow(dead_code)]
pub async fn wait(&mut self) -> Result<CommandExitStatus, CommandError> {
self.handle.wait().await
}
}
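
A minimal usage sketch of the builder defined above, following the same pattern as the tests and test_remote (executor selection is driven by the ENVIRONMENT variable in CommandRunner::new):

use vibe_kanban::command_runner::CommandRunner;

async fn run_echo() -> Result<(), Box<dyn std::error::Error>> {
    let mut runner = CommandRunner::new();
    let mut process = runner
        .command("echo")
        .arg("hello")
        .env("EXAMPLE_VAR", "1")
        .start()
        .await?;
    // wait() blocks until the child exits and returns its CommandExitStatus
    let status = process.wait().await?;
    println!("success={} code={:?}", status.success(), status.code());
    Ok(())
}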

View File

@@ -1,703 +0,0 @@
use std::{process::Stdio, time::Duration};
use async_trait::async_trait;
use command_group::{AsyncCommandGroup, AsyncGroupChild};
#[cfg(unix)]
use nix::{
sys::signal::{killpg, Signal},
unistd::{getpgid, Pid},
};
use tokio::process::Command;
use crate::command_runner::{
CommandError, CommandExecutor, CommandExitStatus, CommandRunnerArgs, CommandStream,
ProcessHandle,
};
pub struct LocalCommandExecutor;
impl Default for LocalCommandExecutor {
fn default() -> Self {
Self::new()
}
}
impl LocalCommandExecutor {
pub fn new() -> Self {
Self
}
}
#[async_trait]
impl CommandExecutor for LocalCommandExecutor {
async fn start(
&self,
request: &CommandRunnerArgs,
) -> Result<Box<dyn ProcessHandle>, CommandError> {
let mut cmd = Command::new(&request.command);
cmd.args(&request.args)
.kill_on_drop(true)
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped());
if let Some(dir) = &request.working_dir {
cmd.current_dir(dir);
}
for (key, val) in &request.env_vars {
cmd.env(key, val);
}
let mut child = cmd.group_spawn().map_err(|e| CommandError::SpawnFailed {
command: format!("{} {}", request.command, request.args.join(" ")),
error: e,
})?;
if let Some(prompt) = &request.stdin {
// Write prompt to stdin safely
if let Some(mut stdin) = child.inner().stdin.take() {
use tokio::io::AsyncWriteExt;
stdin.write_all(prompt.as_bytes()).await?;
stdin.shutdown().await?;
}
}
Ok(Box::new(LocalProcessHandle::new(child)))
}
}
pub struct LocalProcessHandle {
child: Option<AsyncGroupChild>,
process_id: String,
}
impl LocalProcessHandle {
pub fn new(mut child: AsyncGroupChild) -> Self {
let process_id = child
.inner()
.id()
.map(|id| id.to_string())
.unwrap_or_else(|| "unknown".to_string());
Self {
child: Some(child),
process_id,
}
}
}
#[async_trait]
impl ProcessHandle for LocalProcessHandle {
async fn try_wait(&mut self) -> Result<Option<CommandExitStatus>, CommandError> {
match &mut self.child {
Some(child) => match child
.inner()
.try_wait()
.map_err(|e| CommandError::StatusCheckFailed { error: e })?
{
Some(status) => Ok(Some(CommandExitStatus::from_local(status))),
None => Ok(None),
},
None => Err(CommandError::ProcessNotStarted),
}
}
async fn wait(&mut self) -> Result<CommandExitStatus, CommandError> {
match &mut self.child {
Some(child) => {
let status = child
.wait()
.await
.map_err(|e| CommandError::KillFailed { error: e })?;
Ok(CommandExitStatus::from_local(status))
}
None => Err(CommandError::ProcessNotStarted),
}
}
async fn kill(&mut self) -> Result<(), CommandError> {
match &mut self.child {
Some(child) => {
// hit the whole process group, not just the leader
#[cfg(unix)]
{
if let Some(pid) = child.inner().id() {
let pgid = getpgid(Some(Pid::from_raw(pid as i32))).map_err(|e| {
CommandError::KillFailed {
error: std::io::Error::other(e),
}
})?;
for sig in [Signal::SIGINT, Signal::SIGTERM, Signal::SIGKILL] {
if let Err(e) = killpg(pgid, sig) {
tracing::warn!(
"Failed to send signal {:?} to process group {}: {}",
sig,
pgid,
e
);
}
tokio::time::sleep(Duration::from_secs(2)).await;
if child
.inner()
.try_wait()
.map_err(|e| CommandError::StatusCheckFailed { error: e })?
.is_some()
{
break; // gone!
}
}
}
}
// final fallback: command_group's kill() already targets the whole process group
child
.kill()
.await
.map_err(|e| CommandError::KillFailed { error: e })?;
child
.wait()
.await
.map_err(|e| CommandError::KillFailed { error: e })?; // reap
// Clear the handle after successful kill
self.child = None;
Ok(())
}
None => Err(CommandError::ProcessNotStarted),
}
}
async fn stream(&mut self) -> Result<CommandStream, CommandError> {
match &mut self.child {
Some(child) => {
let stdout = child.inner().stdout.take();
let stderr = child.inner().stderr.take();
Ok(CommandStream::from_local(stdout, stderr))
}
None => Err(CommandError::ProcessNotStarted),
}
}
fn process_id(&self) -> String {
self.process_id.clone()
}
}
// Local-specific implementations for shared types
impl CommandExitStatus {
/// Create a CommandExitStatus from a std::process::ExitStatus (for local processes)
pub fn from_local(status: std::process::ExitStatus) -> Self {
Self {
code: status.code(),
success: status.success(),
#[cfg(unix)]
signal: {
use std::os::unix::process::ExitStatusExt;
status.signal()
},
remote_process_id: None,
remote_session_id: None,
}
}
}
impl CommandStream {
/// Create a CommandStream from local process streams
pub fn from_local(
stdout: Option<tokio::process::ChildStdout>,
stderr: Option<tokio::process::ChildStderr>,
) -> Self {
Self {
stdout: stdout.map(|s| Box::new(s) as Box<dyn tokio::io::AsyncRead + Unpin + Send>),
stderr: stderr.map(|s| Box::new(s) as Box<dyn tokio::io::AsyncRead + Unpin + Send>),
}
}
}
#[cfg(test)]
mod tests {
use std::process::Stdio;
use command_group::{AsyncCommandGroup, AsyncGroupChild};
use tokio::{
io::{AsyncReadExt, AsyncWriteExt},
process::Command,
};
use crate::command_runner::*;
// Helper function to create a comparison tokio::process::Command
async fn create_tokio_command(
cmd: &str,
args: &[&str],
working_dir: Option<&str>,
env_vars: &[(String, String)],
stdin_data: Option<&str>,
) -> Result<AsyncGroupChild, std::io::Error> {
let mut command = Command::new(cmd);
command
.args(args)
.kill_on_drop(true)
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped());
if let Some(dir) = working_dir {
command.current_dir(dir);
}
for (key, val) in env_vars {
command.env(key, val);
}
let mut child = command.group_spawn()?;
// Write stdin data if provided
if let Some(data) = stdin_data {
if let Some(mut stdin) = child.inner().stdin.take() {
stdin.write_all(data.as_bytes()).await?;
stdin.shutdown().await?;
}
}
Ok(child)
}
#[tokio::test]
async fn test_command_execution_comparison() {
// Ensure we're using local execution for this test
std::env::set_var("ENVIRONMENT", "local");
let test_message = "hello world";
// Test with CommandRunner
let mut runner = CommandRunner::new();
let mut process = runner
.command("echo")
.arg(test_message)
.start()
.await
.expect("CommandRunner should start echo command");
let mut stream = process.stream().await.expect("Should get stream");
let mut stdout_data = Vec::new();
if let Some(stdout) = &mut stream.stdout {
stdout
.read_to_end(&mut stdout_data)
.await
.expect("Should read stdout");
}
let runner_output = String::from_utf8(stdout_data).expect("Should be valid UTF-8");
// Test with tokio::process::Command
let mut tokio_child = create_tokio_command("echo", &[test_message], None, &[], None)
.await
.expect("Should start tokio command");
let mut tokio_stdout_data = Vec::new();
if let Some(stdout) = tokio_child.inner().stdout.take() {
let mut stdout = stdout;
stdout
.read_to_end(&mut tokio_stdout_data)
.await
.expect("Should read tokio stdout");
}
let tokio_output = String::from_utf8(tokio_stdout_data).expect("Should be valid UTF-8");
// Both should produce the same output
assert_eq!(runner_output.trim(), tokio_output.trim());
assert_eq!(runner_output.trim(), test_message);
}
#[tokio::test]
async fn test_stdin_handling() {
// Ensure we're using local execution for this test
std::env::set_var("ENVIRONMENT", "local");
let test_input = "test input data\n";
// Test with CommandRunner (using cat to echo stdin)
let mut runner = CommandRunner::new();
let mut process = runner
.command("cat")
.stdin(test_input)
.start()
.await
.expect("CommandRunner should start cat command");
let mut stream = process.stream().await.expect("Should get stream");
let mut stdout_data = Vec::new();
if let Some(stdout) = &mut stream.stdout {
stdout
.read_to_end(&mut stdout_data)
.await
.expect("Should read stdout");
}
let runner_output = String::from_utf8(stdout_data).expect("Should be valid UTF-8");
// Test with tokio::process::Command
let mut tokio_child = create_tokio_command("cat", &[], None, &[], Some(test_input))
.await
.expect("Should start tokio command");
let mut tokio_stdout_data = Vec::new();
if let Some(stdout) = tokio_child.inner().stdout.take() {
let mut stdout = stdout;
stdout
.read_to_end(&mut tokio_stdout_data)
.await
.expect("Should read tokio stdout");
}
let tokio_output = String::from_utf8(tokio_stdout_data).expect("Should be valid UTF-8");
// Both should echo the input
assert_eq!(runner_output, tokio_output);
assert_eq!(runner_output, test_input);
}
#[tokio::test]
async fn test_working_directory() {
// Use pwd command to check working directory
let test_dir = "/tmp";
// Test with CommandRunner
std::env::set_var("ENVIRONMENT", "local");
let mut runner = CommandRunner::new();
let mut process = runner
.command("pwd")
.working_dir(test_dir)
.start()
.await
.expect("CommandRunner should start pwd command");
let mut stream = process.stream().await.expect("Should get stream");
let mut stdout_data = Vec::new();
if let Some(stdout) = &mut stream.stdout {
stdout
.read_to_end(&mut stdout_data)
.await
.expect("Should read stdout");
}
let runner_output = String::from_utf8(stdout_data).expect("Should be valid UTF-8");
// Test with tokio::process::Command
let mut tokio_child = create_tokio_command("pwd", &[], Some(test_dir), &[], None)
.await
.expect("Should start tokio command");
let mut tokio_stdout_data = Vec::new();
if let Some(stdout) = tokio_child.inner().stdout.take() {
let mut stdout = stdout;
stdout
.read_to_end(&mut tokio_stdout_data)
.await
.expect("Should read tokio stdout");
}
let tokio_output = String::from_utf8(tokio_stdout_data).expect("Should be valid UTF-8");
// Both should show the same working directory
assert_eq!(runner_output.trim(), tokio_output.trim());
assert!(runner_output.trim().contains(test_dir));
}
#[tokio::test]
async fn test_environment_variables() {
let test_var = "TEST_VAR";
let test_value = "test_value_123";
// Test with CommandRunner
std::env::set_var("ENVIRONMENT", "local");
let mut runner = CommandRunner::new();
let mut process = runner
.command("printenv")
.arg(test_var)
.env(test_var, test_value)
.start()
.await
.expect("CommandRunner should start printenv command");
let mut stream = process.stream().await.expect("Should get stream");
let mut stdout_data = Vec::new();
if let Some(stdout) = &mut stream.stdout {
stdout
.read_to_end(&mut stdout_data)
.await
.expect("Should read stdout");
}
let runner_output = String::from_utf8(stdout_data).expect("Should be valid UTF-8");
// Test with tokio::process::Command
let env_vars = vec![(test_var.to_string(), test_value.to_string())];
let mut tokio_child = create_tokio_command("printenv", &[test_var], None, &env_vars, None)
.await
.expect("Should start tokio command");
let mut tokio_stdout_data = Vec::new();
if let Some(stdout) = tokio_child.inner().stdout.take() {
let mut stdout = stdout;
stdout
.read_to_end(&mut tokio_stdout_data)
.await
.expect("Should read tokio stdout");
}
let tokio_output = String::from_utf8(tokio_stdout_data).expect("Should be valid UTF-8");
// Both should show the same environment variable
assert_eq!(runner_output.trim(), tokio_output.trim());
assert_eq!(runner_output.trim(), test_value);
}
#[tokio::test]
async fn test_process_group_creation() {
// Test that both CommandRunner and tokio::process::Command create process groups
// We'll use a sleep command that can be easily killed
// Test with CommandRunner
std::env::set_var("ENVIRONMENT", "local");
let mut runner = CommandRunner::new();
let mut process = runner
.command("sleep")
.arg("10") // Sleep for 10 seconds
.start()
.await
.expect("CommandRunner should start sleep command");
// Check that process is running
let status = process.status().await.expect("Should check status");
assert!(status.is_none(), "Process should still be running");
// Kill the process (might fail if already exited)
let _ = process.kill().await;
// Wait a moment for the kill to take effect
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
let final_status = process.status().await.expect("Should check final status");
assert!(
final_status.is_some(),
"Process should have exited after kill"
);
// Test with tokio::process::Command for comparison
let mut tokio_child = create_tokio_command("sleep", &["10"], None, &[], None)
.await
.expect("Should start tokio sleep command");
// Check that process is running
let tokio_status = tokio_child
.inner()
.try_wait()
.expect("Should check tokio status");
assert!(
tokio_status.is_none(),
"Tokio process should still be running"
);
// Kill the tokio process
tokio_child.kill().await.expect("Should kill tokio process");
// Wait a moment for the kill to take effect
tokio::time::sleep(tokio::time::Duration::from_millis(100)).await;
let tokio_final_status = tokio_child
.inner()
.try_wait()
.expect("Should check tokio final status");
assert!(
tokio_final_status.is_some(),
"Tokio process should have exited after kill"
);
}
#[tokio::test]
async fn test_kill_operation() {
// Test killing processes with both implementations
// Test CommandRunner kill
std::env::set_var("ENVIRONMENT", "local");
let mut runner = CommandRunner::new();
let mut process = runner
.command("sleep")
.arg("60") // Long sleep
.start()
.await
.expect("Should start CommandRunner sleep");
// Verify it's running
assert!(process
.status()
.await
.expect("Should check status")
.is_none());
// Kill and verify it stops (might fail if already exited)
let _ = process.kill().await;
// Give it time to die
tokio::time::sleep(tokio::time::Duration::from_millis(200)).await;
let exit_status = process.status().await.expect("Should get exit status");
assert!(exit_status.is_some(), "Process should have exited");
// Test tokio::process::Command kill for comparison
let mut tokio_child = create_tokio_command("sleep", &["60"], None, &[], None)
.await
.expect("Should start tokio sleep");
// Verify it's running
assert!(tokio_child
.inner()
.try_wait()
.expect("Should check tokio status")
.is_none());
// Kill and verify it stops
tokio_child.kill().await.expect("Should kill tokio process");
// Give it time to die
tokio::time::sleep(tokio::time::Duration::from_millis(200)).await;
let tokio_exit_status = tokio_child
.inner()
.try_wait()
.expect("Should get tokio exit status");
assert!(
tokio_exit_status.is_some(),
"Tokio process should have exited"
);
}
#[tokio::test]
async fn test_status_monitoring() {
// Test status monitoring with a quick command
// Test with CommandRunner
std::env::set_var("ENVIRONMENT", "local");
let mut runner = CommandRunner::new();
let mut process = runner
.command("echo")
.arg("quick test")
.start()
.await
.expect("Should start CommandRunner echo");
// Initially might be running or might have finished quickly
let _initial_status = process.status().await.expect("Should check initial status");
// Wait for completion
let exit_status = process.wait().await.expect("Should wait for completion");
assert!(exit_status.success(), "Echo command should succeed");
// After wait, status should show completion
let final_status = process.status().await.expect("Should check final status");
assert!(
final_status.is_some(),
"Should have exit status after completion"
);
assert!(
final_status.unwrap().success(),
"Should show successful exit"
);
// Test with tokio::process::Command for comparison
let mut tokio_child = create_tokio_command("echo", &["quick test"], None, &[], None)
.await
.expect("Should start tokio echo");
// Wait for completion
let tokio_exit_status = tokio_child
.wait()
.await
.expect("Should wait for tokio completion");
assert!(
tokio_exit_status.success(),
"Tokio echo command should succeed"
);
// After wait, status should show completion
let tokio_final_status = tokio_child
.inner()
.try_wait()
.expect("Should check tokio final status");
assert!(
tokio_final_status.is_some(),
"Should have tokio exit status after completion"
);
assert!(
tokio_final_status.unwrap().success(),
"Should show tokio successful exit"
);
}
#[tokio::test]
async fn test_wait_for_completion() {
// Test waiting for process completion with specific exit codes
// Test successful command (exit code 0)
std::env::set_var("ENVIRONMENT", "local");
let mut runner = CommandRunner::new();
let mut process = runner
.command("true") // Command that exits with 0
.start()
.await
.expect("Should start true command");
let exit_status = process
.wait()
.await
.expect("Should wait for true completion");
assert!(exit_status.success(), "true command should succeed");
assert_eq!(exit_status.code(), Some(0), "true should exit with code 0");
// Test failing command (exit code 1)
let mut runner2 = CommandRunner::new();
let mut process2 = runner2
.command("false") // Command that exits with 1
.start()
.await
.expect("Should start false command");
let exit_status2 = process2
.wait()
.await
.expect("Should wait for false completion");
assert!(!exit_status2.success(), "false command should fail");
assert_eq!(
exit_status2.code(),
Some(1),
"false should exit with code 1"
);
// Compare with tokio::process::Command
let mut tokio_child = create_tokio_command("true", &[], None, &[], None)
.await
.expect("Should start tokio true");
let tokio_exit_status = tokio_child
.wait()
.await
.expect("Should wait for tokio true");
assert!(tokio_exit_status.success(), "tokio true should succeed");
assert_eq!(
tokio_exit_status.code(),
Some(0),
"tokio true should exit with code 0"
);
let mut tokio_child2 = create_tokio_command("false", &[], None, &[], None)
.await
.expect("Should start tokio false");
let tokio_exit_status2 = tokio_child2
.wait()
.await
.expect("Should wait for tokio false");
assert!(!tokio_exit_status2.success(), "tokio false should fail");
assert_eq!(
tokio_exit_status2.code(),
Some(1),
"tokio false should exit with code 1"
);
}
}

View File

@@ -1,402 +0,0 @@
use std::{
pin::Pin,
task::{Context, Poll},
};
use async_trait::async_trait;
use tokio::io::AsyncRead;
use crate::command_runner::{
CommandError, CommandExecutor, CommandExitStatus, CommandRunnerArgs, CommandStream,
ProcessHandle,
};
pub struct RemoteCommandExecutor {
cloud_server_url: String,
}
impl Default for RemoteCommandExecutor {
fn default() -> Self {
Self::new()
}
}
impl RemoteCommandExecutor {
pub fn new() -> Self {
let cloud_server_url = std::env::var("CLOUD_SERVER_URL")
.unwrap_or_else(|_| "http://localhost:8000".to_string());
Self { cloud_server_url }
}
}
#[async_trait]
impl CommandExecutor for RemoteCommandExecutor {
async fn start(
&self,
request: &CommandRunnerArgs,
) -> Result<Box<dyn ProcessHandle>, CommandError> {
let client = reqwest::Client::new();
let response = client
.post(format!("{}/commands", self.cloud_server_url))
.json(request)
.send()
.await
.map_err(|e| CommandError::IoError {
error: std::io::Error::other(e),
})?;
let result: serde_json::Value =
response.json().await.map_err(|e| CommandError::IoError {
error: std::io::Error::other(e),
})?;
let process_id =
result["data"]["process_id"]
.as_str()
.ok_or_else(|| CommandError::IoError {
error: std::io::Error::other(format!(
"Missing process_id in response: {}",
result
)),
})?;
Ok(Box::new(RemoteProcessHandle::new(
process_id.to_string(),
self.cloud_server_url.clone(),
)))
}
}
pub struct RemoteProcessHandle {
process_id: String,
cloud_server_url: String,
}
impl RemoteProcessHandle {
pub fn new(process_id: String, cloud_server_url: String) -> Self {
Self {
process_id,
cloud_server_url,
}
}
}
#[async_trait]
impl ProcessHandle for RemoteProcessHandle {
async fn try_wait(&mut self) -> Result<Option<CommandExitStatus>, CommandError> {
// Make HTTP request to get status from cloud server
let client = reqwest::Client::new();
let response = client
.get(format!(
"{}/commands/{}/status",
self.cloud_server_url, self.process_id
))
.send()
.await
.map_err(|e| CommandError::StatusCheckFailed {
error: std::io::Error::other(e),
})?;
if !response.status().is_success() {
if response.status() == reqwest::StatusCode::NOT_FOUND {
return Err(CommandError::StatusCheckFailed {
error: std::io::Error::new(std::io::ErrorKind::NotFound, "Process not found"),
});
} else {
return Err(CommandError::StatusCheckFailed {
error: std::io::Error::other("Status check failed"),
});
}
}
let result: serde_json::Value =
response
.json()
.await
.map_err(|e| CommandError::StatusCheckFailed {
error: std::io::Error::other(e),
})?;
let data = result["data"]
.as_object()
.ok_or_else(|| CommandError::StatusCheckFailed {
error: std::io::Error::other("Invalid response format"),
})?;
let running = data["running"].as_bool().unwrap_or(false);
if running {
Ok(None) // Still running
} else {
// Process completed, extract exit status
let exit_code = data["exit_code"].as_i64().map(|c| c as i32);
let success = data["success"].as_bool().unwrap_or(false);
Ok(Some(CommandExitStatus::from_remote(
exit_code,
success,
Some(self.process_id.clone()),
None,
)))
}
}
async fn wait(&mut self) -> Result<CommandExitStatus, CommandError> {
// Poll the status endpoint until process completes
loop {
let client = reqwest::Client::new();
let response = client
.get(format!(
"{}/commands/{}/status",
self.cloud_server_url, self.process_id
))
.send()
.await
.map_err(|e| CommandError::StatusCheckFailed {
error: std::io::Error::other(e),
})?;
if !response.status().is_success() {
if response.status() == reqwest::StatusCode::NOT_FOUND {
return Err(CommandError::StatusCheckFailed {
error: std::io::Error::new(
std::io::ErrorKind::NotFound,
"Process not found",
),
});
} else {
return Err(CommandError::StatusCheckFailed {
error: std::io::Error::other("Status check failed"),
});
}
}
let result: serde_json::Value =
response
.json()
.await
.map_err(|e| CommandError::StatusCheckFailed {
error: std::io::Error::other(e),
})?;
let data =
result["data"]
.as_object()
.ok_or_else(|| CommandError::StatusCheckFailed {
error: std::io::Error::other("Invalid response format"),
})?;
let running = data["running"].as_bool().unwrap_or(false);
if !running {
// Process completed, extract exit status and return
let exit_code = data["exit_code"].as_i64().map(|c| c as i32);
let success = data["success"].as_bool().unwrap_or(false);
return Ok(CommandExitStatus::from_remote(
exit_code,
success,
Some(self.process_id.clone()),
None,
));
}
// Wait a bit before polling again
tokio::time::sleep(tokio::time::Duration::from_millis(20)).await;
}
}
async fn kill(&mut self) -> Result<(), CommandError> {
let client = reqwest::Client::new();
let response = client
.delete(format!(
"{}/commands/{}",
self.cloud_server_url, self.process_id
))
.send()
.await
.map_err(|e| CommandError::KillFailed {
error: std::io::Error::other(e),
})?;
if !response.status().is_success() {
if response.status() == reqwest::StatusCode::NOT_FOUND {
// Process not found, might have already finished - treat as success
return Ok(());
}
return Err(CommandError::KillFailed {
error: std::io::Error::other(format!(
"Remote kill failed with status: {}",
response.status()
)),
});
}
// Check if server indicates process was already completed
if let Ok(result) = response.json::<serde_json::Value>().await {
if let Some(data) = result.get("data") {
if let Some(message) = data.as_str() {
tracing::info!("Kill result: {}", message);
}
}
}
Ok(())
}
async fn stream(&mut self) -> Result<CommandStream, CommandError> {
// Create HTTP streams for stdout and stderr concurrently
let stdout_url = format!(
"{}/commands/{}/stdout",
self.cloud_server_url, self.process_id
);
let stderr_url = format!(
"{}/commands/{}/stderr",
self.cloud_server_url, self.process_id
);
// Create both streams concurrently using tokio::try_join!
let (stdout_result, stderr_result) =
tokio::try_join!(HTTPStream::new(stdout_url), HTTPStream::new(stderr_url))?;
let stdout_stream: Option<Box<dyn AsyncRead + Unpin + Send>> =
Some(Box::new(stdout_result) as Box<dyn AsyncRead + Unpin + Send>);
let stderr_stream: Option<Box<dyn AsyncRead + Unpin + Send>> =
Some(Box::new(stderr_result) as Box<dyn AsyncRead + Unpin + Send>);
Ok(CommandStream {
stdout: stdout_stream,
stderr: stderr_stream,
})
}
fn process_id(&self) -> String {
self.process_id.clone()
}
}
/// HTTP-based AsyncRead wrapper for true streaming
pub struct HTTPStream {
stream: Pin<Box<dyn futures_util::Stream<Item = Result<Vec<u8>, reqwest::Error>> + Send>>,
current_chunk: Vec<u8>,
chunk_position: usize,
finished: bool,
}
// HTTPStream needs to be Unpin to work with the AsyncRead trait bounds
impl Unpin for HTTPStream {}
impl HTTPStream {
pub async fn new(url: String) -> Result<Self, CommandError> {
let client = reqwest::Client::new();
let response = client
.get(&url)
.send()
.await
.map_err(|e| CommandError::IoError {
error: std::io::Error::other(e),
})?;
if !response.status().is_success() {
return Err(CommandError::IoError {
error: std::io::Error::other(format!(
"HTTP request failed with status: {}",
response.status()
)),
});
}
// Use chunk() method to create a stream
Ok(Self {
stream: Box::pin(futures_util::stream::unfold(
response,
|mut resp| async move {
match resp.chunk().await {
Ok(Some(chunk)) => Some((Ok(chunk.to_vec()), resp)),
Ok(None) => None,
Err(e) => Some((Err(e), resp)),
}
},
)),
current_chunk: Vec::new(),
chunk_position: 0,
finished: false,
})
}
}
impl AsyncRead for HTTPStream {
fn poll_read(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
buf: &mut tokio::io::ReadBuf<'_>,
) -> Poll<std::io::Result<()>> {
if self.finished {
return Poll::Ready(Ok(()));
}
// First, try to read from current chunk if available
if self.chunk_position < self.current_chunk.len() {
let remaining_in_chunk = self.current_chunk.len() - self.chunk_position;
let to_read = std::cmp::min(remaining_in_chunk, buf.remaining());
let chunk_data =
&self.current_chunk[self.chunk_position..self.chunk_position + to_read];
buf.put_slice(chunk_data);
self.chunk_position += to_read;
return Poll::Ready(Ok(()));
}
// Current chunk is exhausted, try to get the next chunk
match self.stream.as_mut().poll_next(cx) {
Poll::Ready(Some(Ok(chunk))) => {
if chunk.is_empty() {
// Empty chunk, mark as finished
self.finished = true;
Poll::Ready(Ok(()))
} else {
// New chunk available
self.current_chunk = chunk;
self.chunk_position = 0;
// Read from the new chunk
let to_read = std::cmp::min(self.current_chunk.len(), buf.remaining());
let chunk_data = &self.current_chunk[..to_read];
buf.put_slice(chunk_data);
self.chunk_position = to_read;
Poll::Ready(Ok(()))
}
}
Poll::Ready(Some(Err(e))) => Poll::Ready(Err(std::io::Error::other(e))),
Poll::Ready(None) => {
// Stream ended
self.finished = true;
Poll::Ready(Ok(()))
}
Poll::Pending => Poll::Pending,
}
}
}
// Remote-specific implementations for shared types
impl CommandExitStatus {
/// Create a CommandExitStatus for remote processes
pub fn from_remote(
code: Option<i32>,
success: bool,
remote_process_id: Option<String>,
remote_session_id: Option<String>,
) -> Self {
Self {
code,
success,
#[cfg(unix)]
signal: None,
remote_process_id,
remote_session_id,
}
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,935 +0,0 @@
use async_trait::async_trait;
use serde_json::Value;
use tokio::io::{AsyncBufReadExt, BufReader};
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor::{
ActionType, Executor, ExecutorError, NormalizedConversation, NormalizedEntry,
NormalizedEntryType,
},
models::{
execution_process::ExecutionProcess, executor_session::ExecutorSession, task::Task,
task_attempt::TaskAttempt,
},
utils::{path::make_path_relative, shell::get_shell_command},
};
// Sub-modules for utilities
pub mod filter;
use self::filter::{parse_session_id_from_line, AiderFilter};
/// State for tracking diff blocks (SEARCH/REPLACE patterns)
#[derive(Debug, Clone)]
struct DiffBlockState {
/// Current mode: None, InSearch, InReplace
mode: DiffMode,
/// Accumulated content for the current diff block
content: Vec<String>,
/// Start timestamp for the diff block
start_timestamp: Option<chrono::DateTime<chrono::Utc>>,
/// Buffered line that might be a file name
buffered_line: Option<String>,
/// File name associated with current diff block
current_file: Option<String>,
}
#[derive(Debug, Clone, PartialEq)]
enum DiffMode {
None,
InSearch,
InReplace,
}
impl Default for DiffBlockState {
fn default() -> Self {
Self {
mode: DiffMode::None,
content: Vec::new(),
start_timestamp: None,
buffered_line: None,
current_file: None,
}
}
}
struct Content {
pub stdout: Option<String>,
pub stderr: Option<String>,
}
/// Process a single line for session extraction and content formatting
async fn process_line_for_content(
line: &str,
session_extracted: &mut bool,
diff_state: &mut DiffBlockState,
worktree_path: &str,
pool: &sqlx::SqlitePool,
execution_process_id: uuid::Uuid,
) -> Option<Content> {
if !*session_extracted {
if let Some(session_id) = parse_session_id_from_line(line) {
if let Err(e) =
ExecutorSession::update_session_id(pool, execution_process_id, &session_id).await
{
tracing::error!(
"Failed to update session ID for execution process {}: {}",
execution_process_id,
e
);
} else {
tracing::info!(
"Updated session ID {} for execution process {}",
session_id,
execution_process_id
);
*session_extracted = true;
}
// Don't return any content for session lines
return None;
}
}
// Filter out noise completely
if AiderFilter::is_noise(line) {
return None;
}
// Filter out user input echo
if AiderFilter::is_user_input(line) {
return None;
}
// Handle diff block markers (SEARCH/REPLACE patterns)
if AiderFilter::is_diff_block_marker(line) {
let trimmed = line.trim();
match trimmed {
"<<<<<<< SEARCH" => {
// If we have a buffered line, it's the file name for this diff
if let Some(buffered) = diff_state.buffered_line.take() {
diff_state.current_file = Some(buffered);
}
diff_state.mode = DiffMode::InSearch;
diff_state.content.clear();
diff_state.start_timestamp = Some(chrono::Utc::now());
return None; // Don't output individual markers
}
"=======" => {
if diff_state.mode == DiffMode::InSearch {
diff_state.mode = DiffMode::InReplace;
return None; // Don't output individual markers
}
}
">>>>>>> REPLACE" => {
if diff_state.mode == DiffMode::InReplace {
// End of diff block - create atomic edit action
let diff_content = diff_state.content.join("\n");
let formatted = format_diff_as_normalized_json(
&diff_content,
diff_state.current_file.as_deref(),
diff_state.start_timestamp,
worktree_path,
);
// Reset state
diff_state.mode = DiffMode::None;
diff_state.content.clear();
diff_state.start_timestamp = None;
diff_state.current_file = None;
return Some(Content {
stdout: Some(formatted),
stderr: None,
});
}
}
_ => {}
}
return None;
}
// If we're inside a diff block, accumulate content
if diff_state.mode != DiffMode::None {
diff_state.content.push(line.to_string());
return None; // Don't output individual lines within diff blocks
}
// Check if we have a buffered line from previous call
let mut result = None;
if let Some(buffered) = diff_state.buffered_line.take() {
// Output the buffered line as a normal message since current line is not a diff marker
let formatted = format_aider_content_as_normalized_json(&buffered, worktree_path);
result = Some(Content {
stdout: Some(formatted),
stderr: None,
});
}
// Check if line is a system message
if AiderFilter::is_system_message(line) {
// Apply scanning repo progress simplification for system messages
let processed_line = if AiderFilter::is_scanning_repo_progress(line) {
AiderFilter::simplify_scanning_repo_message(line)
} else {
line.to_string()
};
let formatted = format_aider_content_as_normalized_json(&processed_line, worktree_path);
// If we had a buffered line, we need to handle both outputs
if result.is_some() {
// For now, prioritize the current system message and drop the buffered one
// TODO: In a real implementation, we might want to queue both
}
return Some(Content {
stdout: Some(formatted),
stderr: None,
});
}
// Check if line is an error
if AiderFilter::is_error(line) {
let formatted = format_aider_content_as_normalized_json(line, worktree_path);
// If we had a buffered line, prioritize the error
return Some(Content {
stdout: result.and_then(|r| r.stdout),
stderr: Some(formatted),
});
}
// Regular assistant message - buffer it in case next line is a diff marker
let trimmed = line.trim();
if !trimmed.is_empty() {
diff_state.buffered_line = Some(line.to_string());
}
// Return any previously buffered content
result
}
/// Stream stdout and stderr from Aider process with filtering
pub async fn stream_aider_stdout_stderr_to_db(
stdout: impl tokio::io::AsyncRead + Unpin + Send + 'static,
stderr: impl tokio::io::AsyncRead + Unpin + Send + 'static,
pool: sqlx::SqlitePool,
attempt_id: Uuid,
execution_process_id: Uuid,
worktree_path: String,
) {
let stdout_task = {
let pool = pool.clone();
let worktree_path = worktree_path.clone();
tokio::spawn(async move {
let mut reader = BufReader::new(stdout);
let mut line = String::new();
let mut session_extracted = false;
let mut diff_state = DiffBlockState::default();
loop {
line.clear();
match reader.read_line(&mut line).await {
Ok(0) => break, // EOF
Ok(_) => {
line = line.trim_end_matches(['\r', '\n']).to_string();
let content = process_line_for_content(
&line,
&mut session_extracted,
&mut diff_state,
&worktree_path,
&pool,
execution_process_id,
)
.await;
if let Some(Content { stdout, stderr }) = content {
if let Err(e) = ExecutionProcess::append_output(
&pool,
execution_process_id,
stdout.as_deref(),
stderr.as_deref(),
)
.await
{
tracing::error!(
"Failed to write Aider stdout line for attempt {}: {}",
attempt_id,
e
);
}
}
}
Err(e) => {
tracing::error!("Error reading stdout for attempt {}: {}", attempt_id, e);
break;
}
}
}
// Flush any remaining buffered content
if let Some(Content { stdout, stderr }) =
flush_buffered_content(&mut diff_state, &worktree_path)
{
if let Err(e) = ExecutionProcess::append_output(
&pool,
execution_process_id,
stdout.as_deref(),
stderr.as_deref(),
)
.await
{
tracing::error!(
"Failed to write Aider buffered stdout line for attempt {}: {}",
attempt_id,
e
);
}
}
})
};
let stderr_task = {
let pool = pool.clone();
let worktree_path = worktree_path.clone();
tokio::spawn(async move {
let mut reader = BufReader::new(stderr);
let mut line = String::new();
loop {
line.clear();
match reader.read_line(&mut line).await {
Ok(0) => break, // EOF
Ok(_) => {
let trimmed = line.trim_end_matches(['\r', '\n']);
// Apply filtering to stderr - filter out noise like "Scanning repo" progress
if !trimmed.trim().is_empty() && !AiderFilter::is_noise(trimmed) {
let formatted =
format_aider_content_as_normalized_json(trimmed, &worktree_path);
if let Err(e) = ExecutionProcess::append_output(
&pool,
execution_process_id,
None, // No stdout content from stderr
Some(&formatted),
)
.await
{
tracing::error!(
"Failed to write Aider stderr line for attempt {}: {}",
attempt_id,
e
);
}
}
}
Err(e) => {
tracing::error!("Error reading stderr for attempt {}: {}", attempt_id, e);
break;
}
}
}
})
};
// Wait for both tasks to complete
let _ = tokio::join!(stdout_task, stderr_task);
}
/// Format diff content as a normalized JSON entry for atomic edit actions
fn format_diff_as_normalized_json(
_content: &str,
file_name: Option<&str>,
start_timestamp: Option<chrono::DateTime<chrono::Utc>>,
worktree_path: &str,
) -> String {
let timestamp = start_timestamp.unwrap_or_else(chrono::Utc::now);
let timestamp_str = timestamp.to_rfc3339_opts(chrono::SecondsFormat::Micros, true);
let raw_path = file_name.unwrap_or("multiple_files").to_string();
// Normalize the path to be relative to worktree root (matching git diff format)
let path = make_path_relative(&raw_path, worktree_path);
let normalized_entry = NormalizedEntry {
timestamp: Some(timestamp_str),
entry_type: NormalizedEntryType::ToolUse {
tool_name: "edit".to_string(),
action_type: ActionType::FileWrite { path: path.clone() },
},
content: format!("`{}`", path),
metadata: None,
};
serde_json::to_string(&normalized_entry).unwrap() + "\n"
}
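// Hedged illustration (not in the original file): shows the single normalized JSON
// line emitted for a completed SEARCH/REPLACE block. The file name and worktree path
// are made-up values, and the assertion assumes make_path_relative leaves an already
// relative path unchanged.
#[cfg(test)]
mod format_diff_tests {
    use super::*;

    #[test]
    fn diff_block_becomes_single_edit_entry() {
        let line = format_diff_as_normalized_json(
            "old line\n=======\nnew line",
            Some("src/main.rs"),
            None,
            "/work/tree",
        );
        let json: serde_json::Value = serde_json::from_str(line.trim_end()).unwrap();
        assert_eq!(json["content"], "`src/main.rs`");
        assert!(json["timestamp"].is_string());
        assert!(line.ends_with('\n'));
    }
}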
/// Flush any remaining buffered content when stream ends
fn flush_buffered_content(diff_state: &mut DiffBlockState, worktree_path: &str) -> Option<Content> {
if let Some(buffered) = diff_state.buffered_line.take() {
let formatted = format_aider_content_as_normalized_json(&buffered, worktree_path);
Some(Content {
stdout: Some(formatted),
stderr: None,
})
} else {
None
}
}
/// Format Aider content as normalized JSON entries for direct database storage
pub fn format_aider_content_as_normalized_json(content: &str, _worktree_path: &str) -> String {
let mut results = Vec::new();
let base_timestamp = chrono::Utc::now();
let mut entry_counter = 0u32;
for line in content.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
// Generate unique timestamp for each entry by adding microseconds
let unique_timestamp =
base_timestamp + chrono::Duration::microseconds(entry_counter as i64);
let timestamp_str = unique_timestamp.to_rfc3339_opts(chrono::SecondsFormat::Micros, true);
entry_counter += 1;
// Try to parse as existing JSON first
if let Ok(parsed_json) = serde_json::from_str::<Value>(trimmed) {
results.push(parsed_json.to_string());
continue;
}
if trimmed.is_empty() {
continue;
}
// Check message type and create appropriate normalized entry
let normalized_entry = if AiderFilter::is_system_message(trimmed) {
NormalizedEntry {
timestamp: Some(timestamp_str),
entry_type: NormalizedEntryType::SystemMessage,
content: trimmed.to_string(),
metadata: None,
}
} else if AiderFilter::is_error(trimmed) {
NormalizedEntry {
timestamp: Some(timestamp_str),
entry_type: NormalizedEntryType::ErrorMessage,
content: trimmed.to_string(),
metadata: None,
}
} else {
// Regular assistant message
NormalizedEntry {
timestamp: Some(timestamp_str),
entry_type: NormalizedEntryType::AssistantMessage,
content: trimmed.to_string(),
metadata: None,
}
};
results.push(serde_json::to_string(&normalized_entry).unwrap());
}
// Ensure each JSON entry is on its own line
results.join("\n") + "\n"
}
/// An executor that uses Aider CLI to process tasks
pub struct AiderExecutor {
executor_type: String,
command: String,
}
impl Default for AiderExecutor {
fn default() -> Self {
Self::new()
}
}
impl AiderExecutor {
/// Create a new AiderExecutor with default settings
pub fn new() -> Self {
Self {
executor_type: "Aider".to_string(),
command: "aider . --yes-always --no-show-model-warnings --skip-sanity-check-repo --no-stream --no-fancy-input".to_string(),
}
}
}
#[async_trait]
impl Executor for AiderExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Get the task to fetch its description
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let prompt = if let Some(task_description) = task.description {
format!("{}\n{}", task.title, task_description)
} else {
task.title.to_string()
};
// Create temporary message file
let base_dir = TaskAttempt::get_worktree_base_dir();
let sessions_dir = base_dir.join("aider").join("aider-messages");
if let Err(e) = tokio::fs::create_dir_all(&sessions_dir).await {
tracing::warn!(
"Failed to create temp message directory {}: {}",
sessions_dir.display(),
e
);
}
let message_file = sessions_dir.join(format!("task_{}.md", task_id));
// Generate our own session ID and store it in the database immediately
let session_id = format!("aider_task_{}", task_id);
// Create session directory and chat history file for session persistence
let session_dir = base_dir.join("aider").join("aider-sessions");
if let Err(e) = tokio::fs::create_dir_all(&session_dir).await {
tracing::warn!(
"Failed to create session directory {}: {}",
session_dir.display(),
e
);
}
let chat_file = session_dir.join(format!("{}.md", session_id));
// Use shell command for cross-platform compatibility
let (shell_cmd, shell_arg) = get_shell_command();
let aider_command = format!(
"{} --chat-history-file {} --message-file {}",
&self.command,
chat_file.to_string_lossy(),
message_file.to_string_lossy()
);
// Write message file after command is prepared for better error context
tokio::fs::write(&message_file, prompt.as_bytes())
.await
.map_err(|e| {
ExecutorError::ContextCollectionFailed(format!(
"Failed to write message file {}: {}",
message_file.display(),
e
))
})?;
tracing::debug!("Spawning Aider command: {}", &aider_command);
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(&aider_command)
.working_dir(worktree_path)
.env("COLUMNS", "1000"); // Prevent line wrapping in aider output
let child = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, &self.executor_type)
.with_task(task_id, Some(task.title.clone()))
.with_context(format!("{} CLI execution for new task", self.executor_type))
.spawn_error(e)
})?;
tracing::debug!(
"Started Aider with message file {} for task {}: {:?}",
message_file.display(),
task_id,
prompt
);
Ok(child)
}
/// Execute with Aider filtering for stdout and stderr
async fn execute_streaming(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
attempt_id: Uuid,
execution_process_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Generate our own session ID and store it in the database immediately
let session_id = format!("aider_task_{}", task_id);
if let Err(e) =
ExecutorSession::update_session_id(pool, execution_process_id, &session_id).await
{
tracing::error!(
"Failed to update session ID for execution process {}: {}",
execution_process_id,
e
);
} else {
tracing::info!(
"Set session ID {} for execution process {}",
session_id,
execution_process_id
);
}
let mut child = self.spawn(pool, task_id, worktree_path).await?;
// Take stdout and stderr pipes for Aider filtering
let streams = child
.stream()
.await
.expect("Failed to get stdio from child process");
let stdout = streams
.stdout
.expect("Failed to take stdout from child process");
let stderr = streams
.stderr
.expect("Failed to take stderr from child process");
// Start Aider filtering task
let pool_clone = pool.clone();
let worktree_path_clone = worktree_path.to_string();
tokio::spawn(stream_aider_stdout_stderr_to_db(
stdout,
stderr,
pool_clone,
attempt_id,
execution_process_id,
worktree_path_clone,
));
Ok(child)
}
fn normalize_logs(
&self,
logs: &str,
_worktree_path: &str,
) -> Result<NormalizedConversation, String> {
let mut entries = Vec::new();
for line in logs.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
// Simple passthrough: directly deserialize normalized JSON entries
if let Ok(entry) = serde_json::from_str::<NormalizedEntry>(trimmed) {
entries.push(entry);
}
}
Ok(NormalizedConversation {
entries,
session_id: None, // Session ID is stored directly in the database
executor_type: "aider".to_string(),
prompt: None,
summary: None,
})
}
/// Execute follow-up with Aider filtering for stdout and stderr
async fn execute_followup_streaming(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
attempt_id: Uuid,
execution_process_id: Uuid,
session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Update session ID for this execution process to ensure continuity
if let Err(e) =
ExecutorSession::update_session_id(pool, execution_process_id, session_id).await
{
tracing::error!(
"Failed to update session ID for followup execution process {}: {}",
execution_process_id,
e
);
} else {
tracing::info!(
"Updated session ID {} for followup execution process {}",
session_id,
execution_process_id
);
}
let mut child = self
.spawn_followup(pool, task_id, session_id, prompt, worktree_path)
.await?;
// Take stdout and stderr pipes for Aider filtering
let streams = child
.stream()
.await
.expect("Failed to get stdio from child process");
let stdout = streams
.stdout
.expect("Failed to take stdout from child process");
let stderr = streams
.stderr
.expect("Failed to take stderr from child process");
// Start Aider filtering task
let pool_clone = pool.clone();
let worktree_path_clone = worktree_path.to_string();
tokio::spawn(stream_aider_stdout_stderr_to_db(
stdout,
stderr,
pool_clone,
attempt_id,
execution_process_id,
worktree_path_clone,
));
Ok(child)
}
async fn spawn_followup(
&self,
_pool: &sqlx::SqlitePool,
_task_id: Uuid,
session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
let base_dir = TaskAttempt::get_worktree_base_dir();
// Create session directory if it doesn't exist
let session_dir = base_dir.join("aider").join("aider-sessions");
if let Err(e) = tokio::fs::create_dir_all(&session_dir).await {
tracing::warn!(
"Failed to create session directory {}: {}",
session_dir.display(),
e
);
}
let chat_file = session_dir.join(format!("{}.md", session_id));
// Create temporary message file for the followup prompt
let sessions_dir = base_dir.join("aider").join("aider-messages");
if let Err(e) = tokio::fs::create_dir_all(&sessions_dir).await {
tracing::warn!(
"Failed to create temp message directory {}: {}",
sessions_dir.display(),
e
);
}
let message_file = sessions_dir.join(format!("followup_{}.md", session_id));
// Use shell command for cross-platform compatibility
let (shell_cmd, shell_arg) = get_shell_command();
let aider_command = format!(
"{} --restore-chat-history --chat-history-file {} --message-file {}",
self.command,
chat_file.to_string_lossy(),
message_file.to_string_lossy()
);
// Write message file after command is prepared for better error context
tokio::fs::write(&message_file, prompt.as_bytes())
.await
.map_err(|e| {
ExecutorError::ContextCollectionFailed(format!(
"Failed to write followup message file {}: {}",
message_file.display(),
e
))
})?;
tracing::debug!("Spawning Aider command: {}", &aider_command);
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(&aider_command)
.working_dir(worktree_path)
.env("COLUMNS", "1000"); // Prevent line wrapping in aider output
let child = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, &self.executor_type)
.with_context(format!(
"{} CLI followup execution for session {}",
self.executor_type, session_id
))
.spawn_error(e)
})?;
tracing::debug!(
"Started Aider followup with message file {} and chat history {} for session {}: {:?}",
message_file.display(),
chat_file.display(),
session_id,
prompt
);
Ok(child)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::executors::aider::{format_aider_content_as_normalized_json, AiderExecutor};
#[test]
fn test_normalize_logs_with_database_format() {
let executor = AiderExecutor::new();
// This is what the database should contain after our streaming function processes it
let logs = r#"{"timestamp":"2025-07-21T18:04:00Z","entry_type":{"type":"system_message"},"content":"Main model: anthropic/claude-sonnet-4-20250514","metadata":null}
{"timestamp":"2025-07-21T18:04:01Z","entry_type":{"type":"assistant_message"},"content":"I'll help you with this task.","metadata":null}
{"timestamp":"2025-07-21T18:04:02Z","entry_type":{"type":"error_message"},"content":"Error: File not found","metadata":null}
{"timestamp":"2025-07-21T18:04:03Z","entry_type":{"type":"assistant_message"},"content":"Let me try a different approach.","metadata":null}"#;
let result = executor.normalize_logs(logs, "/path/to/repo").unwrap();
assert_eq!(result.entries.len(), 4);
// First entry: system message
assert!(matches!(
result.entries[0].entry_type,
crate::executor::NormalizedEntryType::SystemMessage
));
assert!(result.entries[0].content.contains("Main model:"));
assert!(result.entries[0].timestamp.is_some());
// Second entry: assistant message
assert!(matches!(
result.entries[1].entry_type,
crate::executor::NormalizedEntryType::AssistantMessage
));
assert!(result.entries[1]
.content
.contains("help you with this task"));
// Third entry: error message
assert!(matches!(
result.entries[2].entry_type,
crate::executor::NormalizedEntryType::ErrorMessage
));
assert!(result.entries[2].content.contains("File not found"));
// Fourth entry: assistant message
assert!(matches!(
result.entries[3].entry_type,
crate::executor::NormalizedEntryType::AssistantMessage
));
assert!(result.entries[3].content.contains("different approach"));
}
#[test]
fn test_format_aider_content_as_normalized_json() {
let content = r#"Main model: anthropic/claude-sonnet-4-20250514
I'll help you implement this feature.
Error: Could not access file
Let me try a different approach."#;
let result = format_aider_content_as_normalized_json(content, "/path/to/repo");
let lines: Vec<&str> = result
.split('\n')
.filter(|line| !line.trim().is_empty())
.collect();
// Should have 4 entries (1 system + 2 assistant + 1 error)
assert_eq!(lines.len(), 4);
// Parse all entries and verify unique timestamps
let mut timestamps = Vec::new();
for line in &lines {
let json: serde_json::Value = serde_json::from_str(line).unwrap();
let timestamp = json["timestamp"].as_str().unwrap().to_string();
timestamps.push(timestamp);
}
// Verify all timestamps are unique (no duplicates)
let mut unique_timestamps = timestamps.clone();
unique_timestamps.sort();
unique_timestamps.dedup();
assert_eq!(
timestamps.len(),
unique_timestamps.len(),
"All timestamps should be unique"
);
// Parse the first line (should be system message)
let first_json: serde_json::Value = serde_json::from_str(lines[0]).unwrap();
assert_eq!(first_json["entry_type"]["type"], "system_message");
assert!(first_json["content"]
.as_str()
.unwrap()
.contains("Main model:"));
// Parse the second line (should be assistant message)
let second_json: serde_json::Value = serde_json::from_str(lines[1]).unwrap();
assert_eq!(second_json["entry_type"]["type"], "assistant_message");
assert!(second_json["content"]
.as_str()
.unwrap()
.contains("help you implement"));
// Parse the third line (should be error message)
let third_json: serde_json::Value = serde_json::from_str(lines[2]).unwrap();
assert_eq!(third_json["entry_type"]["type"], "error_message");
assert!(third_json["content"]
.as_str()
.unwrap()
.contains("Could not access"));
// Verify timestamps include microseconds for uniqueness
for timestamp in timestamps {
assert!(
timestamp.contains('.'),
"Timestamp should include microseconds: {}",
timestamp
);
}
}
#[test]
fn test_normalize_logs_edge_cases() {
let executor = AiderExecutor::new();
// Empty content
let result = executor.normalize_logs("", "/tmp").unwrap();
assert_eq!(result.entries.len(), 0);
// Only whitespace
let result = executor.normalize_logs(" \n\t\n ", "/tmp").unwrap();
assert_eq!(result.entries.len(), 0);
// Malformed JSON (current implementation skips invalid JSON)
let malformed = r#"{"timestamp":"2025-07-21T18:04:00Z","content":"incomplete"#;
let result = executor.normalize_logs(malformed, "/tmp").unwrap();
assert_eq!(result.entries.len(), 0); // Current implementation skips invalid JSON
// Mixed valid and invalid JSON
let mixed = r#"{"timestamp":"2025-07-21T18:04:00Z","entry_type":{"type":"assistant_message"},"content":"Valid entry","metadata":null}
Invalid line that's not JSON
{"timestamp":"2025-07-21T18:04:01Z","entry_type":{"type":"system_message"},"content":"Another valid entry","metadata":null}"#;
let result = executor.normalize_logs(mixed, "/tmp").unwrap();
assert_eq!(result.entries.len(), 2); // Only valid JSON entries are parsed
}
}


@@ -1,269 +0,0 @@
use lazy_static::lazy_static;
use regex::Regex;
lazy_static! {
static ref AIDER_SESSION_REGEX: Regex = Regex::new(r".*\b(chat|session|sessionID|id)=([^ ]+)").unwrap();
static ref SYSTEM_MESSAGE_REGEX: Regex = Regex::new(r"^(Main model:|Weak model:)").unwrap();
static ref ERROR_MESSAGE_REGEX: Regex = Regex::new(r"^(Error:|ERROR:|Warning:|WARN:|Exception:|Fatal:|FATAL:|✗|❌|\[ERROR\])").unwrap();
static ref USER_INPUT_REGEX: Regex = Regex::new(r"^>\s+").unwrap();
static ref NOISE_REGEX: Regex = Regex::new(r"^(\s*$|Warning: Input is not a terminal|\[\[?\d+;\d+R|─{5,}|\s*\d+%\||Added .* to|You can skip|System:|Aider:|Git repo:.*|Repo-map:|>|▶|\[SYSTEM\]|Scanning repo:|Initial repo scan|Tokens:|Using [a-zA-Z0-9_.-]+ model with API key from environment|Restored previous conversation history.|.*\.git/worktrees/.*)").unwrap();
static ref SCANNING_REPO_PROGRESS_REGEX: Regex = Regex::new(r"^Scanning repo:\s+\d+%\|.*\|\s*\d+/\d+\s+\[.*\]").unwrap();
static ref DIFF_BLOCK_MARKERS: Regex = Regex::new(r"^(<<<<<<< SEARCH|=======|>>>>>>> REPLACE)$").unwrap();
}
/// Filter for Aider CLI output
pub struct AiderFilter;
impl AiderFilter {
/// Check if a line is a system message
pub fn is_system_message(line: &str) -> bool {
let trimmed = line.trim();
SYSTEM_MESSAGE_REGEX.is_match(trimmed)
}
/// Check if a line is an error message
pub fn is_error(line: &str) -> bool {
let trimmed = line.trim();
ERROR_MESSAGE_REGEX.is_match(trimmed)
}
/// Check if a line is noise that should be filtered out
pub fn is_noise(line: &str) -> bool {
let trimmed = line.trim();
NOISE_REGEX.is_match(trimmed)
}
/// Check if a line is user input (echo from stdin)
pub fn is_user_input(line: &str) -> bool {
let trimmed = line.trim();
USER_INPUT_REGEX.is_match(trimmed)
}
/// Check if a line is a scanning repo progress message that should be simplified
pub fn is_scanning_repo_progress(line: &str) -> bool {
let trimmed = line.trim();
SCANNING_REPO_PROGRESS_REGEX.is_match(trimmed)
}
/// Check if a line is a diff block marker (SEARCH/REPLACE blocks)
pub fn is_diff_block_marker(line: &str) -> bool {
let trimmed = line.trim();
DIFF_BLOCK_MARKERS.is_match(trimmed)
}
/// Simplify scanning repo progress to just "Scanning repo"
pub fn simplify_scanning_repo_message(line: &str) -> String {
if Self::is_scanning_repo_progress(line) {
"Scanning repo".to_string()
} else {
line.to_string()
}
}
}
/// Parse session_id from Aider output lines
pub fn parse_session_id_from_line(line: &str) -> Option<String> {
// Try regex for session ID extraction from various patterns
if let Some(captures) = AIDER_SESSION_REGEX.captures(line) {
if let Some(id) = captures.get(2) {
return Some(id.as_str().to_string());
}
}
None
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_is_system_message() {
// Only "Main model:" and "Weak model:" are system messages
assert!(AiderFilter::is_system_message(
"Main model: anthropic/claude-sonnet-4-20250514"
));
assert!(AiderFilter::is_system_message(
"Weak model: anthropic/claude-3-5-haiku-20241022"
));
// Everything else is not a system message
assert!(!AiderFilter::is_system_message("System: Starting new chat"));
assert!(!AiderFilter::is_system_message("Git repo:"));
assert!(!AiderFilter::is_system_message(
"Git repo: ../vibe-kanban/.git/worktrees/vk-
ing-fix with 280 files"
));
assert!(!AiderFilter::is_system_message(
"Using sonnet model with API key from environment"
));
assert!(!AiderFilter::is_system_message(
"I'll help you implement this"
));
assert!(!AiderFilter::is_system_message(
"Error: something went wrong"
));
assert!(!AiderFilter::is_system_message(""));
}
#[test]
fn test_is_noise() {
// Test that complete Git repo lines are treated as noise
assert!(AiderFilter::is_noise(
"Git repo: ../vibe-kanban/.git/worktrees/vk-streaming-fix with 280 files"
));
assert!(AiderFilter::is_noise("Git repo:"));
assert!(AiderFilter::is_noise(
"Using sonnet model with API key from environment"
));
assert!(AiderFilter::is_noise("System: Starting new chat"));
assert!(AiderFilter::is_noise("Aider: Ready to help"));
assert!(AiderFilter::is_noise(
"Repo-map: using 4096 tokens, auto refresh"
));
// Test non-noise messages
assert!(!AiderFilter::is_noise(
"Main model: anthropic/claude-sonnet-4"
));
assert!(!AiderFilter::is_noise("I'll help you implement this"));
assert!(!AiderFilter::is_noise("Error: something went wrong"));
}
#[test]
fn test_is_error() {
// Test error message detection
assert!(AiderFilter::is_error("Error: File not found"));
assert!(AiderFilter::is_error("ERROR: Permission denied"));
assert!(AiderFilter::is_error("Warning: Deprecated function"));
assert!(AiderFilter::is_error("WARN: Configuration issue"));
assert!(AiderFilter::is_error("Exception: Invalid input"));
assert!(AiderFilter::is_error("Fatal: Cannot continue"));
assert!(AiderFilter::is_error("FATAL: System failure"));
assert!(AiderFilter::is_error("✗ Command failed"));
assert!(AiderFilter::is_error("❌ Task not completed"));
assert!(AiderFilter::is_error("[ERROR] Operation failed"));
assert!(AiderFilter::is_error(" Error: Starting with spaces "));
// Test non-error messages
assert!(!AiderFilter::is_error("I'll help you with this"));
assert!(!AiderFilter::is_error("System: Starting chat"));
assert!(!AiderFilter::is_error("Regular message"));
assert!(!AiderFilter::is_error(""));
}
#[test]
fn test_parse_session_id_from_line() {
// Test session ID extraction from various formats
assert_eq!(
parse_session_id_from_line("Starting chat=ses_abc123 new session"),
Some("ses_abc123".to_string())
);
assert_eq!(
parse_session_id_from_line("Aider session=aider_session_456"),
Some("aider_session_456".to_string())
);
assert_eq!(
parse_session_id_from_line("DEBUG sessionID=debug_789 process"),
Some("debug_789".to_string())
);
assert_eq!(
parse_session_id_from_line("Session id=simple_id started"),
Some("simple_id".to_string())
);
// Test no session ID
assert_eq!(parse_session_id_from_line("No session here"), None);
assert_eq!(parse_session_id_from_line(""), None);
assert_eq!(parse_session_id_from_line("session= empty"), None);
}
#[test]
fn test_message_classification_priority() {
// Error messages are not system messages
assert!(AiderFilter::is_error("Error: System configuration invalid"));
assert!(!AiderFilter::is_system_message(
"Error: System configuration invalid"
));
// System messages are not errors
assert!(AiderFilter::is_system_message(
"Main model: anthropic/claude-sonnet-4"
));
assert!(!AiderFilter::is_error(
"Main model: anthropic/claude-sonnet-4"
));
}
#[test]
fn test_scanning_repo_progress_detection() {
// Test scanning repo progress detection
assert!(AiderFilter::is_scanning_repo_progress(
"Scanning repo: 0%| | 0/275 [00:00<?, ?it/s]"
));
assert!(AiderFilter::is_scanning_repo_progress(
"Scanning repo: 34%|███▍ | 94/275 [00:00<00:00, 931.21it/s]"
));
assert!(AiderFilter::is_scanning_repo_progress(
"Scanning repo: 68%|██████▊ | 188/275 [00:01<00:00, 150.45it/s]"
));
assert!(AiderFilter::is_scanning_repo_progress(
"Scanning repo: 100%|██████████| 275/275 [00:01<00:00, 151.76it/s]"
));
// Test non-progress messages
assert!(!AiderFilter::is_scanning_repo_progress(
"Scanning repo: Starting"
));
assert!(!AiderFilter::is_scanning_repo_progress(
"Initial repo scan can be slow"
));
assert!(!AiderFilter::is_scanning_repo_progress("Regular message"));
assert!(!AiderFilter::is_scanning_repo_progress(""));
}
#[test]
fn test_diff_block_marker_detection() {
// Test diff block markers
assert!(AiderFilter::is_diff_block_marker("<<<<<<< SEARCH"));
assert!(AiderFilter::is_diff_block_marker("======="));
assert!(AiderFilter::is_diff_block_marker(">>>>>>> REPLACE"));
// Test non-markers
assert!(!AiderFilter::is_diff_block_marker("Regular code line"));
assert!(!AiderFilter::is_diff_block_marker("def function():"));
assert!(!AiderFilter::is_diff_block_marker(""));
assert!(!AiderFilter::is_diff_block_marker("< SEARCH")); // Missing full marker
}
#[test]
fn test_simplify_scanning_repo_message() {
// Test simplification of progress messages
assert_eq!(
AiderFilter::simplify_scanning_repo_message(
"Scanning repo: 0%| | 0/275 [00:00<?, ?it/s]"
),
"Scanning repo"
);
assert_eq!(
AiderFilter::simplify_scanning_repo_message(
"Scanning repo: 100%|██████████| 275/275 [00:01<00:00, 151.76it/s]"
),
"Scanning repo"
);
// Test non-progress messages (should remain unchanged)
assert_eq!(
AiderFilter::simplify_scanning_repo_message("Regular message"),
"Regular message"
);
assert_eq!(
AiderFilter::simplify_scanning_repo_message("Scanning repo: Starting"),
"Scanning repo: Starting"
);
}
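    #[test]
    fn test_is_user_input() {
        // Hedged addition (not in the original file): USER_INPUT_REGEX matches lines
        // beginning with "> " followed by text, i.e. echoed user input.
        assert!(AiderFilter::is_user_input("> add a new endpoint"));
        assert!(AiderFilter::is_user_input("  > indented user input"));
        assert!(!AiderFilter::is_user_input("Regular assistant output"));
        assert!(!AiderFilter::is_user_input(""));
    }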
}


@@ -1,658 +0,0 @@
use std::path::Path;
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor,
executor::{
ActionType, Executor, ExecutorError, NormalizedConversation, NormalizedEntry,
NormalizedEntryType,
},
models::task::Task,
utils::shell::get_shell_command,
};
/// An executor that uses Amp to process tasks
pub struct AmpExecutor;
#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)]
#[serde(tag = "type")]
pub enum AmpJson {
#[serde(rename = "messages")]
Messages {
messages: Vec<(usize, AmpMessage)>,
#[serde(rename = "toolResults")]
tool_results: Vec<serde_json::Value>,
},
#[serde(rename = "initial")]
Initial {
#[serde(rename = "threadID")]
thread_id: Option<String>,
},
#[serde(rename = "token-usage")]
TokenUsage(serde_json::Value),
#[serde(rename = "state")]
State { state: String },
#[serde(rename = "shutdown")]
Shutdown,
#[serde(rename = "tool-status")]
ToolStatus(serde_json::Value),
}
#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)]
pub struct AmpMessage {
pub role: String,
pub content: Vec<AmpContentItem>,
pub state: Option<serde_json::Value>,
pub meta: Option<AmpMeta>,
}
#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)]
pub struct AmpMeta {
#[serde(rename = "sentAt")]
pub sent_at: u64,
}
#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)]
#[serde(tag = "type")]
pub enum AmpContentItem {
#[serde(rename = "text")]
Text { text: String },
#[serde(rename = "thinking")]
Thinking { thinking: String },
#[serde(rename = "tool_use")]
ToolUse {
id: String,
name: String,
input: serde_json::Value,
},
#[serde(rename = "tool_result")]
ToolResult {
#[serde(rename = "toolUseID")]
tool_use_id: String,
run: serde_json::Value,
},
}
impl AmpJson {
pub fn should_process(&self) -> bool {
matches!(self, AmpJson::Messages { .. })
}
pub fn extract_session_id(&self) -> Option<String> {
match self {
AmpJson::Initial { thread_id } => thread_id.clone(),
_ => None,
}
}
pub fn has_streaming_content(&self) -> bool {
match self {
AmpJson::Messages { messages, .. } => messages.iter().any(|(_index, message)| {
if let Some(state) = &message.state {
if let Some(state_type) = state.get("type").and_then(|t| t.as_str()) {
state_type == "streaming"
} else {
false
}
} else {
false
}
}),
_ => false,
}
}
pub fn to_normalized_entries(
&self,
executor: &AmpExecutor,
worktree_path: &str,
) -> Vec<NormalizedEntry> {
match self {
AmpJson::Messages { messages, .. } => {
if self.has_streaming_content() {
return vec![];
}
let mut entries = Vec::new();
for (_index, message) in messages {
let role = &message.role;
for content_item in &message.content {
if let Some(entry) =
content_item.to_normalized_entry(role, message, executor, worktree_path)
{
entries.push(entry);
}
}
}
entries
}
_ => vec![],
}
}
}
impl AmpContentItem {
pub fn to_normalized_entry(
&self,
role: &str,
message: &AmpMessage,
executor: &AmpExecutor,
worktree_path: &str,
) -> Option<NormalizedEntry> {
use serde_json::Value;
let timestamp = message.meta.as_ref().map(|meta| meta.sent_at.to_string());
match self {
AmpContentItem::Text { text } => {
let entry_type = match role {
"user" => NormalizedEntryType::UserMessage,
"assistant" => NormalizedEntryType::AssistantMessage,
_ => return None,
};
Some(NormalizedEntry {
timestamp,
entry_type,
content: text.clone(),
metadata: Some(serde_json::to_value(self).unwrap_or(Value::Null)),
})
}
AmpContentItem::Thinking { thinking } => Some(NormalizedEntry {
timestamp,
entry_type: NormalizedEntryType::Thinking,
content: thinking.clone(),
metadata: Some(serde_json::to_value(self).unwrap_or(Value::Null)),
}),
AmpContentItem::ToolUse { name, input, .. } => {
let action_type = executor.extract_action_type(name, input, worktree_path);
let content =
executor.generate_concise_content(name, input, &action_type, worktree_path);
Some(NormalizedEntry {
timestamp,
entry_type: NormalizedEntryType::ToolUse {
tool_name: name.clone(),
action_type,
},
content,
metadata: Some(serde_json::to_value(self).unwrap_or(Value::Null)),
})
}
AmpContentItem::ToolResult { .. } => None,
}
}
}
#[async_trait]
impl Executor for AmpExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Get the task to fetch its description
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let prompt = if let Some(task_description) = task.description {
format!(
r#"project_id: {}
Task title: {}
Task description: {}"#,
task.project_id, task.title, task_description
)
} else {
format!(
r#"project_id: {}
Task title: {}"#,
task.project_id, task.title
)
};
// Use shell command for cross-platform compatibility
let (shell_cmd, shell_arg) = get_shell_command();
// --format=jsonl is deprecated in newer Amp CLI releases, hence the pinned version below that still supports it
let amp_command = "npx @sourcegraph/amp@0.0.1752148945-gd8844f --format=jsonl";
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(amp_command)
.stdin(&prompt)
.working_dir(worktree_path);
let proc = command.start().await.map_err(|e| {
executor::SpawnContext::from_command(&command, "Amp")
.with_task(task_id, Some(task.title.clone()))
.with_context("Amp CLI execution for new task")
.spawn_error(e)
})?;
Ok(proc)
}
async fn spawn_followup(
&self,
_pool: &sqlx::SqlitePool,
_task_id: Uuid,
session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Use shell command for cross-platform compatibility
let (shell_cmd, shell_arg) = get_shell_command();
let amp_command = format!(
"npx @sourcegraph/amp@0.0.1752148945-gd8844f threads continue {} --format=jsonl",
session_id
);
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(&amp_command)
.stdin(prompt)
.working_dir(worktree_path);
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, "Amp")
.with_context(format!(
"Amp CLI followup execution for thread {}",
session_id
))
.spawn_error(e)
})?;
Ok(proc)
}
fn normalize_logs(
&self,
logs: &str,
worktree_path: &str,
) -> Result<NormalizedConversation, String> {
let mut entries = Vec::new();
let mut session_id = None;
for line in logs.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
// Try to parse as AmpMessage
let amp_message: AmpJson = match serde_json::from_str(trimmed) {
Ok(msg) => msg,
Err(_) => {
// If line isn't valid JSON, add it as raw text
entries.push(NormalizedEntry {
timestamp: None,
entry_type: NormalizedEntryType::SystemMessage,
content: format!("Raw output: {}", trimmed),
metadata: None,
});
continue;
}
};
// Extract session ID if available
if session_id.is_none() {
if let Some(id) = amp_message.extract_session_id() {
session_id = Some(id);
}
}
// Process the message if it's a type we care about
if amp_message.should_process() {
let new_entries = amp_message.to_normalized_entries(self, worktree_path);
entries.extend(new_entries);
}
}
Ok(NormalizedConversation {
entries,
session_id,
executor_type: "amp".to_string(),
prompt: None,
summary: None,
})
}
}
impl AmpExecutor {
/// Convert absolute paths to relative paths based on worktree path
fn make_path_relative(&self, path: &str, worktree_path: &str) -> String {
let path_obj = Path::new(path);
let worktree_obj = Path::new(worktree_path);
// If path is already relative, return as is
if path_obj.is_relative() {
return path.to_string();
}
// Try to make path relative to worktree path
if let Ok(relative_path) = path_obj.strip_prefix(worktree_obj) {
return relative_path.to_string_lossy().to_string();
}
// If we can't make it relative, return the original path
path.to_string()
}
fn generate_concise_content(
&self,
tool_name: &str,
input: &serde_json::Value,
action_type: &ActionType,
worktree_path: &str,
) -> String {
match action_type {
ActionType::FileRead { path } => format!("`{}`", path),
ActionType::FileWrite { path } => format!("`{}`", path),
ActionType::CommandRun { command } => format!("`{}`", command),
ActionType::Search { query } => format!("`{}`", query),
ActionType::WebFetch { url } => format!("`{}`", url),
ActionType::PlanPresentation { plan } => format!("Plan Presentation: `{}`", plan),
ActionType::TaskCreate { description } => description.clone(),
ActionType::Other { description: _ } => {
// For other tools, try to extract key information or fall back to tool name
match tool_name.to_lowercase().as_str() {
"todowrite" | "todoread" | "todo_write" | "todo_read" => {
if let Some(todos) = input.get("todos").and_then(|t| t.as_array()) {
let mut todo_items = Vec::new();
for todo in todos {
if let (Some(content), Some(status)) = (
todo.get("content").and_then(|c| c.as_str()),
todo.get("status").and_then(|s| s.as_str()),
) {
let emoji = match status {
"completed" => "",
"in_progress" | "in-progress" => "🔄",
"pending" | "todo" => "",
_ => "📝",
};
let priority = todo
.get("priority")
.and_then(|p| p.as_str())
.unwrap_or("medium");
todo_items
.push(format!("{} {} ({})", emoji, content, priority));
}
}
if !todo_items.is_empty() {
format!("TODO List:\n{}", todo_items.join("\n"))
} else {
"Managing TODO list".to_string()
}
} else {
"Managing TODO list".to_string()
}
}
"ls" => {
if let Some(path) = input.get("path").and_then(|p| p.as_str()) {
let relative_path = self.make_path_relative(path, worktree_path);
if relative_path.is_empty() {
"List directory".to_string()
} else {
format!("List directory: `{}`", relative_path)
}
} else {
"List directory".to_string()
}
}
"glob" => {
let pattern = input.get("pattern").and_then(|p| p.as_str()).unwrap_or("*");
let path = input.get("path").and_then(|p| p.as_str());
if let Some(path) = path {
let relative_path = self.make_path_relative(path, worktree_path);
format!("Find files: `{}` in `{}`", pattern, relative_path)
} else {
format!("Find files: `{}`", pattern)
}
}
"grep" => {
let pattern = input.get("pattern").and_then(|p| p.as_str()).unwrap_or("");
let include = input.get("include").and_then(|i| i.as_str());
let path = input.get("path").and_then(|p| p.as_str());
let mut parts = vec![format!("Search: `{}`", pattern)];
if let Some(include) = include {
parts.push(format!("in `{}`", include));
}
if let Some(path) = path {
let relative_path = self.make_path_relative(path, worktree_path);
parts.push(format!("at `{}`", relative_path));
}
parts.join(" ")
}
"read" => {
if let Some(file_path) = input.get("file_path").and_then(|p| p.as_str()) {
let relative_path = self.make_path_relative(file_path, worktree_path);
format!("Read file: `{}`", relative_path)
} else {
"Read file".to_string()
}
}
"write" => {
if let Some(file_path) = input.get("file_path").and_then(|p| p.as_str()) {
let relative_path = self.make_path_relative(file_path, worktree_path);
format!("Write file: `{}`", relative_path)
} else {
"Write file".to_string()
}
}
"edit" => {
if let Some(file_path) = input.get("file_path").and_then(|p| p.as_str()) {
let relative_path = self.make_path_relative(file_path, worktree_path);
format!("Edit file: `{}`", relative_path)
} else {
"Edit file".to_string()
}
}
"multiedit" => {
if let Some(file_path) = input.get("file_path").and_then(|p| p.as_str()) {
let relative_path = self.make_path_relative(file_path, worktree_path);
format!("Multi-edit file: `{}`", relative_path)
} else {
"Multi-edit file".to_string()
}
}
"bash" => {
if let Some(command) = input.get("command").and_then(|c| c.as_str()) {
format!("Run command: `{}`", command)
} else {
"Run command".to_string()
}
}
"webfetch" => {
if let Some(url) = input.get("url").and_then(|u| u.as_str()) {
format!("Fetch URL: `{}`", url)
} else {
"Fetch URL".to_string()
}
}
"task" => {
if let Some(description) = input.get("description").and_then(|d| d.as_str())
{
format!("Task: {}", description)
} else if let Some(prompt) = input.get("prompt").and_then(|p| p.as_str()) {
format!("Task: {}", prompt)
} else {
"Task".to_string()
}
}
_ => tool_name.to_string(),
}
}
}
}
fn extract_action_type(
&self,
tool_name: &str,
input: &serde_json::Value,
worktree_path: &str,
) -> ActionType {
match tool_name.to_lowercase().as_str() {
"read_file" | "read" => {
if let Some(path) = input.get("path").and_then(|p| p.as_str()) {
ActionType::FileRead {
path: self.make_path_relative(path, worktree_path),
}
} else if let Some(file_path) = input.get("file_path").and_then(|p| p.as_str()) {
ActionType::FileRead {
path: self.make_path_relative(file_path, worktree_path),
}
} else {
ActionType::Other {
description: "File read operation".to_string(),
}
}
}
"edit_file" | "write" | "create_file" | "edit" | "multiedit" => {
if let Some(path) = input.get("path").and_then(|p| p.as_str()) {
ActionType::FileWrite {
path: self.make_path_relative(path, worktree_path),
}
} else if let Some(file_path) = input.get("file_path").and_then(|p| p.as_str()) {
ActionType::FileWrite {
path: self.make_path_relative(file_path, worktree_path),
}
} else {
ActionType::Other {
description: "File write operation".to_string(),
}
}
}
"bash" | "run_command" => {
if let Some(cmd) = input.get("cmd").and_then(|c| c.as_str()) {
ActionType::CommandRun {
command: cmd.to_string(),
}
} else if let Some(command) = input.get("command").and_then(|c| c.as_str()) {
ActionType::CommandRun {
command: command.to_string(),
}
} else {
ActionType::Other {
description: "Command execution".to_string(),
}
}
}
"grep" | "search" => {
if let Some(pattern) = input.get("pattern").and_then(|p| p.as_str()) {
ActionType::Search {
query: pattern.to_string(),
}
} else if let Some(query) = input.get("query").and_then(|q| q.as_str()) {
ActionType::Search {
query: query.to_string(),
}
} else {
ActionType::Other {
description: "Search operation".to_string(),
}
}
}
"web_fetch" | "webfetch" => {
if let Some(url) = input.get("url").and_then(|u| u.as_str()) {
ActionType::WebFetch {
url: url.to_string(),
}
} else {
ActionType::Other {
description: "Web fetch operation".to_string(),
}
}
}
"task" => {
if let Some(description) = input.get("description").and_then(|d| d.as_str()) {
ActionType::TaskCreate {
description: description.to_string(),
}
} else if let Some(prompt) = input.get("prompt").and_then(|p| p.as_str()) {
ActionType::TaskCreate {
description: prompt.to_string(),
}
} else {
ActionType::Other {
description: "Task creation".to_string(),
}
}
}
"glob" => ActionType::Other {
description: "File pattern search".to_string(),
},
"ls" => ActionType::Other {
description: "List directory".to_string(),
},
"todowrite" | "todoread" | "todo_write" | "todo_read" => ActionType::Other {
description: "Manage TODO list".to_string(),
},
_ => ActionType::Other {
description: format!("Tool: {}", tool_name),
},
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_filter_streaming_messages() {
// Test logs that simulate the actual normalize_logs behavior
let amp_executor = AmpExecutor;
let logs = r#"{"type":"messages","messages":[[7,{"role":"assistant","content":[{"type":"text","text":"Created all three files: test1.txt, test2.txt, and test3.txt"}],"state":{"type":"streaming"}}]],"toolResults":[]}
{"type":"messages","messages":[[7,{"role":"assistant","content":[{"type":"text","text":"Created all three files: test1.txt, test2.txt, and test3.txt, each with a line of text."}],"state":{"type":"streaming"}}]],"toolResults":[]}
{"type":"messages","messages":[[7,{"role":"assistant","content":[{"type":"text","text":"Created all three files: test1.txt, test2.txt, and test3.txt, each with a line of text."}],"state":{"type":"complete","stopReason":"end_turn"}}]],"toolResults":[]}"#;
let result = amp_executor.normalize_logs(logs, "/tmp/test");
assert!(result.is_ok());
let conversation = result.unwrap();
// Should only have 1 assistant message (the complete one)
let assistant_messages: Vec<_> = conversation
.entries
.iter()
.filter(|e| matches!(e.entry_type, NormalizedEntryType::AssistantMessage))
.collect();
assert_eq!(assistant_messages.len(), 1);
assert_eq!(assistant_messages[0].content, "Created all three files: test1.txt, test2.txt, and test3.txt, each with a line of text.");
}
#[test]
fn test_filter_preserves_messages_without_state() {
// Test that messages without state metadata are preserved (for compatibility)
let amp_executor = AmpExecutor;
let logs = r#"{"type":"messages","messages":[[1,{"role":"assistant","content":[{"type":"text","text":"Regular message"}]}]],"toolResults":[]}"#;
let result = amp_executor.normalize_logs(logs, "/tmp/test");
assert!(result.is_ok());
let conversation = result.unwrap();
// Should have 1 assistant message
let assistant_messages: Vec<_> = conversation
.entries
.iter()
.filter(|e| matches!(e.entry_type, NormalizedEntryType::AssistantMessage))
.collect();
assert_eq!(assistant_messages.len(), 1);
assert_eq!(assistant_messages[0].content, "Regular message");
}
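    #[test]
    fn test_extract_action_type_mapping() {
        // Hedged addition (not in the original file): checks that a "bash" tool call
        // maps to ActionType::CommandRun and that an absolute file path handed to
        // "read" is made relative to the worktree. Input values are made up.
        let amp_executor = AmpExecutor;
        let action = amp_executor.extract_action_type(
            "bash",
            &serde_json::json!({"command": "cargo test"}),
            "/tmp/worktree",
        );
        assert!(matches!(action, ActionType::CommandRun { command } if command == "cargo test"));
        let action = amp_executor.extract_action_type(
            "read",
            &serde_json::json!({"path": "/tmp/worktree/src/lib.rs"}),
            "/tmp/worktree",
        );
        assert!(matches!(action, ActionType::FileRead { path } if path == "src/lib.rs"));
    }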
}


@@ -1,91 +0,0 @@
use async_trait::async_trait;
use uuid::Uuid;
use crate::{
command_runner::CommandProcess,
executor::{Executor, ExecutorError, NormalizedConversation},
executors::ClaudeExecutor,
};
/// An executor that uses Claude Code Router (CCR) to process tasks
/// This is a thin wrapper around ClaudeExecutor that uses Claude Code Router instead of Claude CLI
pub struct CCRExecutor(ClaudeExecutor);
impl Default for CCRExecutor {
fn default() -> Self {
Self::new()
}
}
impl CCRExecutor {
pub fn new() -> Self {
Self(ClaudeExecutor::with_command(
"claude-code-router".to_string(),
"npx -y @musistudio/claude-code-router code -p --dangerously-skip-permissions --verbose --output-format=stream-json".to_string(),
))
}
}
#[async_trait]
impl Executor for CCRExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
self.0.spawn(pool, task_id, worktree_path).await
}
async fn spawn_followup(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
self.0
.spawn_followup(pool, task_id, session_id, prompt, worktree_path)
.await
}
fn normalize_logs(
&self,
logs: &str,
worktree_path: &str,
) -> Result<NormalizedConversation, String> {
let filtered_logs = filter_ccr_service_messages(logs);
let mut result = self.0.normalize_logs(&filtered_logs, worktree_path)?;
result.executor_type = "claude-code-router".to_string();
Ok(result)
}
}
/// Filter out CCR service messages that appear in stdout but shouldn't be shown to users
/// These are informational messages from the CCR wrapper itself
fn filter_ccr_service_messages(logs: &str) -> String {
logs.lines()
.filter(|line| {
let trimmed = line.trim();
// Filter out known CCR service messages
if trimmed.eq("Service not running, starting service...")
|| trimmed.eq("claude code router service has been successfully stopped.")
{
return false;
}
// Filter out system init JSON that contains misleading model information
// CCR delegates to different models, so the init model info is incorrect
if trimmed.starts_with(r#"{"type":"system","subtype":"init""#)
&& trimmed.contains(r#""model":"#)
{
return false;
}
true
})
.collect::<Vec<&str>>()
.join("\n")
}
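// Hedged illustration (not in the original file): CCR service chatter is dropped while
// ordinary stream-json lines pass through unchanged.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn drops_service_messages_and_keeps_payload_lines() {
        let logs = "Service not running, starting service...\n{\"type\":\"assistant\"}";
        let filtered = filter_ccr_service_messages(logs);
        assert_eq!(filtered, "{\"type\":\"assistant\"}");
    }
}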


@@ -1,99 +0,0 @@
use async_trait::async_trait;
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor::{Executor, ExecutorError},
models::task::Task,
utils::shell::get_shell_command,
};
/// An executor that uses OpenCode to process tasks
pub struct CharmOpencodeExecutor;
#[async_trait]
impl Executor for CharmOpencodeExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Get the task to fetch its description
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let prompt = if let Some(task_description) = task.description {
format!(
r#"project_id: {}
Task title: {}
Task description: {}"#,
task.project_id, task.title, task_description
)
} else {
format!(
r#"project_id: {}
Task title: {}"#,
task.project_id, task.title
)
};
// Use shell command for cross-platform compatibility
let (shell_cmd, shell_arg) = get_shell_command();
let opencode_command = format!(
"opencode -p \"{}\" --output-format=json",
prompt.replace('"', "\\\"")
);
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(&opencode_command)
.working_dir(worktree_path);
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, "CharmOpenCode")
.with_task(task_id, Some(task.title.clone()))
.with_context("CharmOpenCode CLI execution for new task")
.spawn_error(e)
})?;
Ok(proc)
}
async fn spawn_followup(
&self,
_pool: &sqlx::SqlitePool,
_task_id: Uuid,
_session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// CharmOpencode doesn't support session-based followup, so we ignore session_id
// and just run with the new prompt
let (shell_cmd, shell_arg) = get_shell_command();
let opencode_command = format!(
"opencode -p \"{}\" --output-format=json",
prompt.replace('"', "\\\"")
);
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(&opencode_command)
.working_dir(worktree_path);
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, "CharmOpenCode")
.with_context("CharmOpenCode CLI followup execution")
.spawn_error(e)
})?;
Ok(proc)
}
}


@@ -1,823 +0,0 @@
use std::path::Path;
use async_trait::async_trait;
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor::{
ActionType, Executor, ExecutorError, NormalizedConversation, NormalizedEntry,
NormalizedEntryType,
},
models::task::Task,
utils::shell::get_shell_command,
};
fn create_watchkill_script(command: &str) -> String {
let claude_plan_stop_indicator = "Exit plan mode?";
format!(
r#"#!/usr/bin/env bash
set -euo pipefail
word="{}"
command="{}"
exit_code=0
while IFS= read -r line; do
printf '%s\n' "$line"
if [[ $line == *"$word"* ]]; then
exit 0
fi
done < <($command <&0 2>&1)
exit_code=${{PIPESTATUS[0]}}
exit "$exit_code"
"#,
claude_plan_stop_indicator, command
)
}
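// Hedged illustration (not in the original file): the generated wrapper script should
// embed both the plan-mode stop indicator and the wrapped command, and new_plan_mode
// should route its command through that wrapper. Exact shell behaviour is not asserted.
#[cfg(test)]
mod watchkill_script_tests {
    use super::*;

    #[test]
    fn script_embeds_stop_word_and_command() {
        let script = create_watchkill_script("echo hello");
        assert!(script.contains("Exit plan mode?"));
        assert!(script.contains("echo hello"));
    }

    #[test]
    fn plan_mode_wraps_claude_command() {
        let executor = ClaudeExecutor::new_plan_mode();
        assert!(executor.command.contains("--permission-mode=plan"));
        assert!(executor.command.contains("Exit plan mode?"));
    }
}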
/// An executor that uses Claude CLI to process tasks
pub struct ClaudeExecutor {
executor_type: String,
command: String,
}
impl Default for ClaudeExecutor {
fn default() -> Self {
Self::new()
}
}
impl ClaudeExecutor {
/// Create a new ClaudeExecutor with default settings
pub fn new() -> Self {
Self {
executor_type: "Claude Code".to_string(),
command: "npx -y @anthropic-ai/claude-code@latest -p --dangerously-skip-permissions --verbose --output-format=stream-json".to_string(),
}
}
pub fn new_plan_mode() -> Self {
let command = "npx -y @anthropic-ai/claude-code@latest -p --permission-mode=plan --verbose --output-format=stream-json";
let script = create_watchkill_script(command);
Self {
executor_type: "ClaudePlan".to_string(),
command: script,
}
}
/// Create a new ClaudeExecutor with custom settings
pub fn with_command(executor_type: String, command: String) -> Self {
Self {
executor_type,
command,
}
}
}
#[async_trait]
impl Executor for ClaudeExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Get the task to fetch its description
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let prompt = if let Some(task_description) = task.description {
format!(
r#"project_id: {}
Task title: {}
Task description: {}"#,
task.project_id, task.title, task_description
)
} else {
format!(
r#"project_id: {}
Task title: {}"#,
task.project_id, task.title
)
};
// Use shell command for cross-platform compatibility
let (shell_cmd, shell_arg) = get_shell_command();
// Pass prompt via stdin instead of command line to avoid shell escaping issues
let claude_command = &self.command;
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(claude_command)
.stdin(&prompt)
.working_dir(worktree_path)
.env("NODE_NO_WARNINGS", "1");
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, &self.executor_type)
.with_task(task_id, Some(task.title.clone()))
.with_context(format!("{} CLI execution for new task", self.executor_type))
.spawn_error(e)
})?;
Ok(proc)
}
async fn spawn_followup(
&self,
_pool: &sqlx::SqlitePool,
_task_id: Uuid,
session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Use shell command for cross-platform compatibility
let (shell_cmd, shell_arg) = get_shell_command();
// Determine the command based on whether this is plan mode or not
let claude_command = if self.executor_type == "ClaudePlan" {
let command = format!(
"npx -y @anthropic-ai/claude-code@latest -p --permission-mode=plan --verbose --output-format=stream-json --resume={}",
session_id
);
create_watchkill_script(&command)
} else {
format!("{} --resume={}", self.command, session_id)
};
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(&claude_command)
.stdin(prompt)
.working_dir(worktree_path)
.env("NODE_NO_WARNINGS", "1");
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, &self.executor_type)
.with_context(format!(
"{} CLI followup execution for session {}",
self.executor_type, session_id
))
.spawn_error(e)
})?;
Ok(proc)
}
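/// Parse Claude Code stream-json output into a normalized conversation,
/// extracting the session ID and skipping "result" entries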
fn normalize_logs(
&self,
logs: &str,
worktree_path: &str,
) -> Result<NormalizedConversation, String> {
use serde_json::Value;
let mut entries = Vec::new();
let mut session_id = None;
for line in logs.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
// Try to parse as JSON
let json: Value = match serde_json::from_str(trimmed) {
Ok(json) => json,
Err(_) => {
// If line isn't valid JSON, add it as raw text
entries.push(NormalizedEntry {
timestamp: None,
entry_type: NormalizedEntryType::SystemMessage,
content: format!("Raw output: {}", trimmed),
metadata: None,
});
continue;
}
};
// Extract session ID
if session_id.is_none() {
if let Some(sess_id) = json.get("session_id").and_then(|v| v.as_str()) {
session_id = Some(sess_id.to_string());
}
}
// Process different message types
let processed = if let Some(msg_type) = json.get("type").and_then(|t| t.as_str()) {
match msg_type {
"assistant" => {
if let Some(message) = json.get("message") {
if let Some(content) = message.get("content").and_then(|c| c.as_array())
{
for content_item in content {
if let Some(content_type) =
content_item.get("type").and_then(|t| t.as_str())
{
match content_type {
"text" => {
if let Some(text) = content_item
.get("text")
.and_then(|t| t.as_str())
{
entries.push(NormalizedEntry {
timestamp: None,
entry_type:
NormalizedEntryType::AssistantMessage,
content: text.to_string(),
metadata: Some(content_item.clone()),
});
}
}
"tool_use" => {
if let Some(tool_name) = content_item
.get("name")
.and_then(|n| n.as_str())
{
let input = content_item
.get("input")
.unwrap_or(&Value::Null);
let action_type = self.extract_action_type(
tool_name,
input,
worktree_path,
);
let content = self.generate_concise_content(
tool_name,
input,
&action_type,
worktree_path,
);
entries.push(NormalizedEntry {
timestamp: None,
entry_type: NormalizedEntryType::ToolUse {
tool_name: tool_name.to_string(),
action_type,
},
content,
metadata: Some(content_item.clone()),
});
}
}
_ => {}
}
}
}
}
}
true
}
"user" => {
if let Some(message) = json.get("message") {
if let Some(content) = message.get("content").and_then(|c| c.as_array())
{
for content_item in content {
if let Some(content_type) =
content_item.get("type").and_then(|t| t.as_str())
{
if content_type == "text" {
if let Some(text) =
content_item.get("text").and_then(|t| t.as_str())
{
entries.push(NormalizedEntry {
timestamp: None,
entry_type: NormalizedEntryType::UserMessage,
content: text.to_string(),
metadata: Some(content_item.clone()),
});
}
}
}
}
}
}
true
}
"system" => {
if let Some(subtype) = json.get("subtype").and_then(|s| s.as_str()) {
if subtype == "init" {
entries.push(NormalizedEntry {
timestamp: None,
entry_type: NormalizedEntryType::SystemMessage,
content: format!(
"System initialized with model: {}",
json.get("model")
.and_then(|m| m.as_str())
.unwrap_or("unknown")
),
metadata: Some(json.clone()),
});
}
}
true
}
_ => false,
}
} else {
false
};
// If JSON didn't match expected patterns, add it as unrecognized JSON
// Skip JSON with type "result" as requested
if !processed {
if let Some(msg_type) = json.get("type").and_then(|t| t.as_str()) {
if msg_type == "result" {
// Skip result entries
continue;
}
}
entries.push(NormalizedEntry {
timestamp: None,
entry_type: NormalizedEntryType::SystemMessage,
content: format!("Unrecognized JSON: {}", trimmed),
metadata: Some(json),
});
}
}
Ok(NormalizedConversation {
entries,
session_id,
executor_type: self.executor_type.clone(),
prompt: None,
summary: None,
})
}
}
impl ClaudeExecutor {
/// Convert absolute paths to relative paths based on worktree path
fn make_path_relative(&self, path: &str, worktree_path: &str) -> String {
let path_obj = Path::new(path);
let worktree_path_obj = Path::new(worktree_path);
tracing::debug!("Making path relative: {} -> {}", path, worktree_path);
// If path is already relative, return as is
if path_obj.is_relative() {
return path.to_string();
}
// Try to make path relative to the worktree path
match path_obj.strip_prefix(worktree_path_obj) {
Ok(relative_path) => {
let result = relative_path.to_string_lossy().to_string();
tracing::debug!("Successfully made relative: '{}' -> '{}'", path, result);
result
}
Err(_) => {
// Handle symlinks by resolving canonical paths
let canonical_path = std::fs::canonicalize(path);
let canonical_worktree = std::fs::canonicalize(worktree_path);
match (canonical_path, canonical_worktree) {
(Ok(canon_path), Ok(canon_worktree)) => {
tracing::debug!(
"Trying canonical path resolution: '{}' -> '{}', '{}' -> '{}'",
path,
canon_path.display(),
worktree_path,
canon_worktree.display()
);
match canon_path.strip_prefix(&canon_worktree) {
Ok(relative_path) => {
let result = relative_path.to_string_lossy().to_string();
tracing::debug!(
"Successfully made relative with canonical paths: '{}' -> '{}'",
path,
result
);
result
}
Err(e) => {
tracing::warn!(
"Failed to make canonical path relative: '{}' relative to '{}', error: {}, returning original",
canon_path.display(),
canon_worktree.display(),
e
);
path.to_string()
}
}
}
_ => {
tracing::debug!(
"Could not canonicalize paths (paths may not exist): '{}', '{}', returning original",
path,
worktree_path
);
path.to_string()
}
}
}
}
}
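/// Build a short, human-readable summary of a tool invocation for display in normalized logs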
fn generate_concise_content(
&self,
tool_name: &str,
input: &serde_json::Value,
action_type: &ActionType,
worktree_path: &str,
) -> String {
match action_type {
ActionType::FileRead { path } => format!("`{}`", path),
ActionType::FileWrite { path } => format!("`{}`", path),
ActionType::CommandRun { command } => format!("`{}`", command),
ActionType::Search { query } => format!("`{}`", query),
ActionType::WebFetch { url } => format!("`{}`", url),
ActionType::TaskCreate { description } => description.clone(),
ActionType::PlanPresentation { plan } => plan.clone(),
ActionType::Other { description: _ } => {
// For other tools, try to extract key information or fall back to tool name
match tool_name.to_lowercase().as_str() {
"todoread" | "todowrite" => {
// Extract todo list from input to show actual todos
if let Some(todos) = input.get("todos").and_then(|t| t.as_array()) {
let mut todo_items = Vec::new();
for todo in todos {
if let Some(content) = todo.get("content").and_then(|c| c.as_str())
{
let status = todo
.get("status")
.and_then(|s| s.as_str())
.unwrap_or("pending");
let status_emoji = match status {
"completed" => "",
"in_progress" => "🔄",
"pending" | "todo" => "",
_ => "📝",
};
let priority = todo
.get("priority")
.and_then(|p| p.as_str())
.unwrap_or("medium");
todo_items.push(format!(
"{} {} ({})",
status_emoji, content, priority
));
}
}
if !todo_items.is_empty() {
format!("TODO List:\n{}", todo_items.join("\n"))
} else {
"Managing TODO list".to_string()
}
} else {
"Managing TODO list".to_string()
}
}
"ls" => {
if let Some(path) = input.get("path").and_then(|p| p.as_str()) {
let relative_path = self.make_path_relative(path, worktree_path);
if relative_path.is_empty() {
"List directory".to_string()
} else {
format!("List directory: `{}`", relative_path)
}
} else {
"List directory".to_string()
}
}
"glob" => {
let pattern = input.get("pattern").and_then(|p| p.as_str()).unwrap_or("*");
let path = input.get("path").and_then(|p| p.as_str());
if let Some(search_path) = path {
format!(
"Find files: `{}` in `{}`",
pattern,
self.make_path_relative(search_path, worktree_path)
)
} else {
format!("Find files: `{}`", pattern)
}
}
"codebase_search_agent" => {
if let Some(query) = input.get("query").and_then(|q| q.as_str()) {
format!("Search: {}", query)
} else {
"Codebase search".to_string()
}
}
_ => tool_name.to_string(),
}
}
}
}
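/// Map a tool name and its input JSON to a normalized ActionType, making any
/// file paths relative to the worktree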
fn extract_action_type(
&self,
tool_name: &str,
input: &serde_json::Value,
worktree_path: &str,
) -> ActionType {
match tool_name.to_lowercase().as_str() {
"read" => {
if let Some(file_path) = input.get("file_path").and_then(|p| p.as_str()) {
ActionType::FileRead {
path: self.make_path_relative(file_path, worktree_path),
}
} else {
ActionType::Other {
description: "File read operation".to_string(),
}
}
}
"edit" | "write" | "multiedit" => {
if let Some(file_path) = input.get("file_path").and_then(|p| p.as_str()) {
ActionType::FileWrite {
path: self.make_path_relative(file_path, worktree_path),
}
} else if let Some(path) = input.get("path").and_then(|p| p.as_str()) {
ActionType::FileWrite {
path: self.make_path_relative(path, worktree_path),
}
} else {
ActionType::Other {
description: "File write operation".to_string(),
}
}
}
"bash" => {
if let Some(command) = input.get("command").and_then(|c| c.as_str()) {
ActionType::CommandRun {
command: command.to_string(),
}
} else {
ActionType::Other {
description: "Command execution".to_string(),
}
}
}
"grep" => {
if let Some(pattern) = input.get("pattern").and_then(|p| p.as_str()) {
ActionType::Search {
query: pattern.to_string(),
}
} else {
ActionType::Other {
description: "Search operation".to_string(),
}
}
}
"glob" => {
if let Some(pattern) = input.get("pattern").and_then(|p| p.as_str()) {
ActionType::Other {
description: format!("Find files: {}", pattern),
}
} else {
ActionType::Other {
description: "File pattern search".to_string(),
}
}
}
"webfetch" => {
if let Some(url) = input.get("url").and_then(|u| u.as_str()) {
ActionType::WebFetch {
url: url.to_string(),
}
} else {
ActionType::Other {
description: "Web fetch operation".to_string(),
}
}
}
"task" => {
if let Some(description) = input.get("description").and_then(|d| d.as_str()) {
ActionType::TaskCreate {
description: description.to_string(),
}
} else if let Some(prompt) = input.get("prompt").and_then(|p| p.as_str()) {
ActionType::TaskCreate {
description: prompt.to_string(),
}
} else {
ActionType::Other {
description: "Task creation".to_string(),
}
}
}
"exit_plan_mode" | "exitplanmode" | "exit-plan-mode" => {
if let Some(plan) = input.get("plan").and_then(|p| p.as_str()) {
ActionType::PlanPresentation {
plan: plan.to_string(),
}
} else {
ActionType::Other {
description: "Plan presentation".to_string(),
}
}
}
_ => ActionType::Other {
description: format!("Tool: {}", tool_name),
},
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_normalize_logs_ignores_result_type() {
let executor = ClaudeExecutor::new();
let logs = r#"{"type":"system","subtype":"init","cwd":"/private/tmp","session_id":"e988eeea-3712-46a1-82d4-84fbfaa69114","tools":[],"model":"claude-sonnet-4-20250514"}
{"type":"assistant","message":{"id":"msg_123","type":"message","role":"assistant","model":"claude-sonnet-4-20250514","content":[{"type":"text","text":"Hello world"}],"stop_reason":null},"session_id":"e988eeea-3712-46a1-82d4-84fbfaa69114"}
{"type":"result","subtype":"success","is_error":false,"duration_ms":6059,"result":"Final result"}
{"type":"unknown","data":"some data"}"#;
let result = executor.normalize_logs(logs, "/tmp/test-worktree").unwrap();
// Should have system message, assistant message, and unknown message
// but NOT the result message
assert_eq!(result.entries.len(), 3);
// Check that no entry contains "result"
for entry in &result.entries {
assert!(!entry.content.contains("result"));
}
// Check that unknown JSON is still processed
assert!(result
.entries
.iter()
.any(|e| e.content.contains("Unrecognized JSON")));
}
#[test]
fn test_make_path_relative() {
let executor = ClaudeExecutor::new();
// Test with relative path (should remain unchanged)
assert_eq!(
executor.make_path_relative("src/main.rs", "/tmp/test-worktree"),
"src/main.rs"
);
// Test with absolute path (should become relative if possible)
let test_worktree = "/tmp/test-worktree";
let absolute_path = format!("{}/src/main.rs", test_worktree);
let result = executor.make_path_relative(&absolute_path, test_worktree);
assert_eq!(result, "src/main.rs");
}
#[test]
fn test_todo_tool_content_extraction() {
let executor = ClaudeExecutor::new();
// Test TodoWrite with actual todo list
let todo_input = serde_json::json!({
"todos": [
{
"id": "1",
"content": "Fix the navigation bug",
"status": "completed",
"priority": "high"
},
{
"id": "2",
"content": "Add user authentication",
"status": "in_progress",
"priority": "medium"
},
{
"id": "3",
"content": "Write documentation",
"status": "pending",
"priority": "low"
}
]
});
let result = executor.generate_concise_content(
"TodoWrite",
&todo_input,
&ActionType::Other {
description: "Tool: TodoWrite".to_string(),
},
"/tmp/test-worktree",
);
assert!(result.contains("TODO List:"));
assert!(result.contains("✅ Fix the navigation bug (high)"));
assert!(result.contains("🔄 Add user authentication (medium)"));
assert!(result.contains("⏳ Write documentation (low)"));
}
#[test]
fn test_todo_tool_empty_list() {
let executor = ClaudeExecutor::new();
// Test TodoWrite with empty todo list
let empty_input = serde_json::json!({
"todos": []
});
let result = executor.generate_concise_content(
"TodoWrite",
&empty_input,
&ActionType::Other {
description: "Tool: TodoWrite".to_string(),
},
"/tmp/test-worktree",
);
assert_eq!(result, "Managing TODO list");
}
#[test]
fn test_todo_tool_no_todos_field() {
let executor = ClaudeExecutor::new();
// Test TodoWrite with no todos field
let no_todos_input = serde_json::json!({
"other_field": "value"
});
let result = executor.generate_concise_content(
"TodoWrite",
&no_todos_input,
&ActionType::Other {
description: "Tool: TodoWrite".to_string(),
},
"/tmp/test-worktree",
);
assert_eq!(result, "Managing TODO list");
}
#[test]
fn test_glob_tool_content_extraction() {
let executor = ClaudeExecutor::new();
// Test Glob with pattern and path
let glob_input = serde_json::json!({
"pattern": "**/*.ts",
"path": "/tmp/test-worktree/src"
});
let result = executor.generate_concise_content(
"Glob",
&glob_input,
&ActionType::Other {
description: "Find files: **/*.ts".to_string(),
},
"/tmp/test-worktree",
);
assert_eq!(result, "Find files: `**/*.ts` in `src`");
}
#[test]
fn test_glob_tool_pattern_only() {
let executor = ClaudeExecutor::new();
// Test Glob with pattern only
let glob_input = serde_json::json!({
"pattern": "*.js"
});
let result = executor.generate_concise_content(
"Glob",
&glob_input,
&ActionType::Other {
description: "Find files: *.js".to_string(),
},
"/tmp/test-worktree",
);
assert_eq!(result, "Find files: `*.js`");
}
#[test]
fn test_ls_tool_content_extraction() {
let executor = ClaudeExecutor::new();
// Test LS with path
let ls_input = serde_json::json!({
"path": "/tmp/test-worktree/components"
});
let result = executor.generate_concise_content(
"LS",
&ls_input,
&ActionType::Other {
description: "Tool: LS".to_string(),
},
"/tmp/test-worktree",
);
assert_eq!(result, "List directory: `components`");
}
}


@@ -1,121 +0,0 @@
use async_trait::async_trait;
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor::{Executor, ExecutorError},
models::{project::Project, task::Task},
utils::shell::get_shell_command,
};
/// Executor for running project cleanup scripts
pub struct CleanupScriptExecutor {
pub script: String,
}
#[async_trait]
impl Executor for CleanupScriptExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Validate the task and project exist
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let _project = Project::find_by_id(pool, task.project_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?; // Reuse TaskNotFound for simplicity
let (shell_cmd, shell_arg) = get_shell_command();
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(&self.script)
.working_dir(worktree_path);
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, "CleanupScript")
.with_task(task_id, Some(task.title.clone()))
.with_context("Cleanup script execution")
.spawn_error(e)
})?;
Ok(proc)
}
/// Normalize cleanup script logs into a readable format
fn normalize_logs(
&self,
logs: &str,
_worktree_path: &str,
) -> Result<crate::executor::NormalizedConversation, String> {
let mut entries = Vec::new();
// Add script command as first entry
entries.push(crate::executor::NormalizedEntry {
timestamp: None,
entry_type: crate::executor::NormalizedEntryType::SystemMessage,
content: format!("Executing cleanup script:\n{}", self.script),
metadata: None,
});
// Process the logs - split by lines and create entries
if !logs.trim().is_empty() {
let lines: Vec<&str> = logs.lines().collect();
let mut current_chunk = String::new();
for line in lines {
current_chunk.push_str(line);
current_chunk.push('\n');
// Create entry for every 10 lines or when we encounter an error-like line
if current_chunk.lines().count() >= 10
|| line.to_lowercase().contains("error")
|| line.to_lowercase().contains("failed")
|| line.to_lowercase().contains("exception")
{
let entry_type = if line.to_lowercase().contains("error")
|| line.to_lowercase().contains("failed")
|| line.to_lowercase().contains("exception")
{
crate::executor::NormalizedEntryType::ErrorMessage
} else {
crate::executor::NormalizedEntryType::SystemMessage
};
entries.push(crate::executor::NormalizedEntry {
timestamp: Some(chrono::Utc::now().to_rfc3339()),
entry_type,
content: current_chunk.trim().to_string(),
metadata: None,
});
current_chunk.clear();
}
}
// Add any remaining content
if !current_chunk.trim().is_empty() {
entries.push(crate::executor::NormalizedEntry {
timestamp: Some(chrono::Utc::now().to_rfc3339()),
entry_type: crate::executor::NormalizedEntryType::SystemMessage,
content: current_chunk.trim().to_string(),
metadata: None,
});
}
}
Ok(crate::executor::NormalizedConversation {
entries,
session_id: None,
executor_type: "cleanup-script".to_string(),
prompt: Some(self.script.clone()),
summary: None,
})
}
}

File diff suppressed because it is too large


@@ -1,50 +0,0 @@
use async_trait::async_trait;
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor::{Executor, ExecutorError},
models::{project::Project, task::Task},
utils::shell::get_shell_command,
};
/// Executor for running project dev server scripts
pub struct DevServerExecutor {
pub script: String,
}
#[async_trait]
impl Executor for DevServerExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Validate the task and project exist
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let _project = Project::find_by_id(pool, task.project_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?; // Reuse TaskNotFound for simplicity
let (shell_cmd, shell_arg) = get_shell_command();
let mut runner = CommandRunner::new();
runner
.command(shell_cmd)
.arg(shell_arg)
.arg(&self.script)
.working_dir(worktree_path);
let process = runner.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&runner, "DevServer")
.with_task(task_id, Some(task.title.clone()))
.with_context("Development server execution")
.spawn_error(e)
})?;
Ok(process)
}
}


@@ -1,74 +0,0 @@
use async_trait::async_trait;
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor::{Executor, ExecutorError, SpawnContext},
models::task::Task,
utils::shell::get_shell_command,
};
/// A dummy executor that echoes the task title and description
pub struct EchoExecutor;
#[async_trait]
impl Executor for EchoExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
_worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Get the task to fetch its description
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let _message = format!(
"Executing task: {} - {}",
task.title,
task.description.as_deref().unwrap_or("No description")
);
// For demonstration of streaming, we can use a shell command that outputs multiple lines
let (shell_cmd, shell_arg) = get_shell_command();
let script = if shell_cmd == "cmd" {
// Windows batch script
format!(
r#"echo Starting task: {}
for /l %%i in (1,1,50) do (
echo Progress line %%i
timeout /t 1 /nobreak > nul
)
echo Task completed: {}"#,
task.title, task.title
)
} else {
// Unix shell script (bash/sh)
format!(
r#"echo "Starting task: {}"
for i in {{1..50}}; do
echo "Progress line $i"
sleep 1
done
echo "Task completed: {}""#,
task.title, task.title
)
};
let mut command_runner = CommandRunner::new();
command_runner
.command(shell_cmd)
.arg(shell_arg)
.arg(&script);
let child = command_runner.start().await.map_err(|e| {
SpawnContext::from_command(&command_runner, "Echo")
.with_task(task_id, Some(task.title.clone()))
.with_context("Shell script execution for echo demo")
.spawn_error(e)
})?;
Ok(child)
}
}


@@ -1,697 +0,0 @@
//! Gemini executor implementation
//!
//! This module provides Gemini CLI-based task execution with streaming support.
mod config;
mod streaming;
use std::time::Instant;
use async_trait::async_trait;
use config::{
max_chunk_size, max_display_size, max_latency_ms, max_message_size, GeminiStreamConfig,
};
// Re-export for external use
use serde_json::Value;
pub use streaming::GeminiPatchBatch;
use streaming::GeminiStreaming;
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor::{
Executor, ExecutorError, NormalizedConversation, NormalizedEntry, NormalizedEntryType,
},
models::task::Task,
utils::shell::get_shell_command,
};
/// An executor that uses Gemini CLI to process tasks
pub struct GeminiExecutor;
#[async_trait]
impl Executor for GeminiExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Get the task to fetch its description
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let prompt = if let Some(task_description) = task.description {
format!(
r#"project_id: {}
Task title: {}
Task description: {}"#,
task.project_id, task.title, task_description
)
} else {
format!(
r#"project_id: {}
Task title: {}"#,
task.project_id, task.title
)
};
let mut command = Self::create_gemini_command(worktree_path);
command.stdin(&prompt);
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, "Gemini")
.with_task(task_id, Some(task.title.clone()))
.with_context("Gemini CLI execution for new task")
.spawn_error(e)
})?;
tracing::info!("Successfully started Gemini process for task {}", task_id);
Ok(proc)
}
async fn execute_streaming(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
attempt_id: Uuid,
execution_process_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
tracing::info!(
"Starting Gemini execution for task {} attempt {}",
task_id,
attempt_id
);
Self::update_session_id(pool, execution_process_id, &attempt_id.to_string()).await;
let mut proc = self.spawn(pool, task_id, worktree_path).await?;
tracing::info!(
"Gemini process spawned successfully for attempt {}",
attempt_id
);
Self::setup_streaming(pool, &mut proc, attempt_id, execution_process_id).await;
Ok(proc)
}
async fn spawn_followup(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// For Gemini, session_id is the attempt_id
let attempt_id = Uuid::parse_str(session_id)
.map_err(|_| ExecutorError::InvalidSessionId(session_id.to_string()))?;
let task = self.load_task(pool, task_id).await?;
let resume_context = self.collect_resume_context(pool, &task, attempt_id).await?;
let comprehensive_prompt = self.build_comprehensive_prompt(&task, &resume_context, prompt);
self.spawn_process(worktree_path, &comprehensive_prompt, attempt_id)
.await
}
async fn execute_followup_streaming(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
attempt_id: Uuid,
execution_process_id: Uuid,
session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
tracing::info!(
"Starting Gemini follow-up execution for attempt {} (session {})",
attempt_id,
session_id
);
// For Gemini, session_id is the attempt_id - update it in the database
Self::update_session_id(pool, execution_process_id, session_id).await;
let mut proc = self
.spawn_followup(pool, task_id, session_id, prompt, worktree_path)
.await?;
tracing::info!(
"Gemini follow-up process spawned successfully for attempt {}",
attempt_id
);
Self::setup_streaming(pool, &mut proc, attempt_id, execution_process_id).await;
Ok(proc)
}
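/// Parse Gemini logs: JSON lines are deserialized as stored NormalizedEntry
/// records (with a raw-output fallback on parse errors), all other lines are
/// kept as plain assistant text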
fn normalize_logs(
&self,
logs: &str,
_worktree_path: &str,
) -> Result<NormalizedConversation, String> {
let mut entries: Vec<NormalizedEntry> = Vec::new();
let mut parse_errors = Vec::new();
for (line_num, line) in logs.lines().enumerate() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
// Try to parse as JSON first (for NormalizedEntry format)
if trimmed.starts_with('{') {
match serde_json::from_str::<NormalizedEntry>(trimmed) {
Ok(entry) => {
entries.push(entry);
}
Err(e) => {
tracing::warn!(
"Failed to parse JSONL line {} in Gemini logs: {} - Line: {}",
line_num + 1,
e,
trimmed
);
parse_errors.push(format!("Line {}: {}", line_num + 1, e));
// Create a fallback entry for unrecognized JSON
let fallback_entry = NormalizedEntry {
timestamp: Some(chrono::Utc::now().to_rfc3339()),
entry_type: NormalizedEntryType::SystemMessage,
content: format!("Raw output: {}", trimmed),
metadata: None,
};
entries.push(fallback_entry);
}
}
} else {
// For non-JSON lines, treat as plain text content
let text_entry = NormalizedEntry {
timestamp: Some(chrono::Utc::now().to_rfc3339()),
entry_type: NormalizedEntryType::AssistantMessage,
content: trimmed.to_string(),
metadata: None,
};
entries.push(text_entry);
}
}
if !parse_errors.is_empty() {
tracing::warn!(
"Gemini normalize_logs encountered {} parse errors: {}",
parse_errors.len(),
parse_errors.join("; ")
);
}
tracing::debug!(
"Gemini normalize_logs processed {} lines, created {} entries",
logs.lines().count(),
entries.len()
);
Ok(NormalizedConversation {
entries,
session_id: None, // Session ID is managed directly via database, not extracted from logs
executor_type: "gemini".to_string(),
prompt: None,
summary: None,
})
}
// Note: Gemini streaming is handled by the Gemini-specific WAL system.
// See emit_message_patch(), which calls GeminiExecutor::push_patch().
}
impl GeminiExecutor {
/// Create a standardized Gemini CLI command
fn create_gemini_command(worktree_path: &str) -> CommandRunner {
let (shell_cmd, shell_arg) = get_shell_command();
let gemini_command = "npx @google/gemini-cli@latest --yolo";
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(gemini_command)
.working_dir(worktree_path)
.env("NODE_NO_WARNINGS", "1");
command
}
/// Update executor session ID with error handling
async fn update_session_id(
pool: &sqlx::SqlitePool,
execution_process_id: Uuid,
session_id: &str,
) {
if let Err(e) = crate::models::executor_session::ExecutorSession::update_session_id(
pool,
execution_process_id,
session_id,
)
.await
{
tracing::error!(
"Failed to update session ID for Gemini execution process {}: {}",
execution_process_id,
e
);
} else {
tracing::info!(
"Updated session ID {} for Gemini execution process {}",
session_id,
execution_process_id
);
}
}
/// Setup streaming for both stdout and stderr
async fn setup_streaming(
pool: &sqlx::SqlitePool,
proc: &mut CommandProcess,
attempt_id: Uuid,
execution_process_id: Uuid,
) {
// Get stdout and stderr streams from CommandProcess
let mut stream = proc
.stream()
.await
.expect("Failed to get streams from command process");
let stdout = stream
.stdout
.take()
.expect("Failed to get stdout from command stream");
let stderr = stream
.stderr
.take()
.expect("Failed to get stderr from command stream");
// Start streaming tasks with Gemini-specific line-based message updates
let pool_clone1 = pool.clone();
let pool_clone2 = pool.clone();
tokio::spawn(Self::stream_gemini_chunked(
stdout,
pool_clone1,
attempt_id,
execution_process_id,
));
// Use default stderr streaming (no custom parsing)
tokio::spawn(crate::executor::stream_output_to_db(
stderr,
pool_clone2,
attempt_id,
execution_process_id,
false,
));
}
/// Push patches to the Gemini WAL system
pub fn push_patch(execution_process_id: Uuid, patches: Vec<Value>, content_length: usize) {
GeminiStreaming::push_patch(execution_process_id, patches, content_length);
}
/// Get WAL batches for an execution process, optionally filtering by cursor
pub fn get_wal_batches(
execution_process_id: Uuid,
after_batch_id: Option<u64>,
) -> Option<Vec<GeminiPatchBatch>> {
GeminiStreaming::get_wal_batches(execution_process_id, after_batch_id)
}
/// Clean up WAL when execution process finishes
pub async fn finalize_execution(
pool: &sqlx::SqlitePool,
execution_process_id: Uuid,
final_buffer: &str,
) {
GeminiStreaming::finalize_execution(pool, execution_process_id, final_buffer).await;
}
/// Find the best boundary to split a chunk (newline preferred, sentence fallback)
pub fn find_chunk_boundary(buffer: &str, max_size: usize) -> usize {
GeminiStreaming::find_chunk_boundary(buffer, max_size)
}
/// Conditionally flush accumulated content to database in chunks
pub async fn maybe_flush_chunk(
pool: &sqlx::SqlitePool,
execution_process_id: Uuid,
buffer: &mut String,
config: &GeminiStreamConfig,
) {
GeminiStreaming::maybe_flush_chunk(pool, execution_process_id, buffer, config).await;
}
/// Emit JSON patch for current message state - either "replace" for growing message or "add" for new message.
fn emit_message_patch(
execution_process_id: Uuid,
current_message: &str,
entry_count: &mut usize,
force_new_message: bool,
) {
if current_message.is_empty() {
return;
}
if force_new_message && *entry_count > 0 {
// Start new message: add new entry to array
*entry_count += 1;
let patch_vec = vec![serde_json::json!({
"op": "add",
"path": format!("/entries/{}", *entry_count - 1),
"value": {
"timestamp": chrono::Utc::now().to_rfc3339(),
"entry_type": {"type": "assistant_message"},
"content": current_message,
"metadata": null,
}
})];
Self::push_patch(execution_process_id, patch_vec, current_message.len());
} else {
// Growing message: replace current entry
if *entry_count == 0 {
*entry_count = 1; // Initialize first message
}
let patch_vec = vec![serde_json::json!({
"op": "replace",
"path": format!("/entries/{}", *entry_count - 1),
"value": {
"timestamp": chrono::Utc::now().to_rfc3339(),
"entry_type": {"type": "assistant_message"},
"content": current_message,
"metadata": null,
}
})];
Self::push_patch(execution_process_id, patch_vec, current_message.len());
}
}
/// Emit final content when stream ends
async fn emit_final_content(
execution_process_id: Uuid,
remaining_content: &str,
entry_count: &mut usize,
) {
if !remaining_content.trim().is_empty() {
Self::emit_message_patch(
execution_process_id,
remaining_content,
entry_count,
false, // Don't force new message for final content
);
}
}
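/// Load the task for this execution, returning TaskNotFound if it no longer exists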
async fn load_task(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
) -> Result<Task, ExecutorError> {
Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)
}
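/// Gather the execution history and cumulative diff for a previous attempt so a
/// follow-up run can resume with full context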
async fn collect_resume_context(
&self,
pool: &sqlx::SqlitePool,
task: &Task,
attempt_id: Uuid,
) -> Result<crate::models::task_attempt::AttemptResumeContext, ExecutorError> {
crate::models::task_attempt::TaskAttempt::get_attempt_resume_context(
pool,
attempt_id,
task.id,
task.project_id,
)
.await
.map_err(ExecutorError::from)
}
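/// Assemble the follow-up prompt from task metadata, prior execution history,
/// the cumulative diff, and the new user request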
fn build_comprehensive_prompt(
&self,
task: &Task,
resume_context: &crate::models::task_attempt::AttemptResumeContext,
prompt: &str,
) -> String {
format!(
r#"RESUME CONTEXT FOR CONTINUING TASK
=== TASK INFORMATION ===
Project ID: {}
Task ID: {}
Task Title: {}
Task Description: {}
=== EXECUTION HISTORY ===
The following is the execution history from this task attempt:
{}
=== CURRENT CHANGES ===
The following git diff shows changes made from the base branch to the current state:
```diff
{}
```
=== CURRENT REQUEST ===
{}
=== INSTRUCTIONS ===
You are continuing work on the above task. The execution history shows what has been done previously, and the git diff shows the current state of all changes. Please continue from where the previous execution left off, taking into account all the context provided above.
"#,
task.project_id,
task.id,
task.title,
task.description
.as_deref()
.unwrap_or("No description provided"),
if resume_context.execution_history.trim().is_empty() {
"(No previous execution history)"
} else {
&resume_context.execution_history
},
if resume_context.cumulative_diffs.trim().is_empty() {
"(No changes detected)"
} else {
&resume_context.cumulative_diffs
},
prompt
)
}
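/// Spawn the Gemini CLI in the worktree, piping the comprehensive resume prompt via stdin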
async fn spawn_process(
&self,
worktree_path: &str,
comprehensive_prompt: &str,
attempt_id: Uuid,
) -> Result<CommandProcess, ExecutorError> {
tracing::info!(
"Spawning Gemini followup execution for attempt {} with resume context ({} chars)",
attempt_id,
comprehensive_prompt.len()
);
let mut command = GeminiExecutor::create_gemini_command(worktree_path);
command.stdin(comprehensive_prompt);
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, "Gemini")
.with_context(format!(
"Gemini CLI followup execution with context for attempt {}",
attempt_id
))
.spawn_error(e)
})?;
tracing::info!(
"Successfully started Gemini followup process for attempt {}",
attempt_id
);
Ok(proc)
}
/// Format Gemini CLI output by inserting line breaks where periods are directly
/// followed by capital letters (common Gemini CLI formatting issue).
/// Handles both intra-chunk and cross-chunk period-to-capital transitions.
fn format_gemini_output(content: &str, accumulated_message: &str) -> String {
let mut result = String::with_capacity(content.len() + 100); // Reserve some extra space for potential newlines
let chars: Vec<char> = content.chars().collect();
// Check for cross-chunk boundary: previous chunk ended with period, current starts with capital
if !accumulated_message.is_empty() && !content.is_empty() {
let ends_with_period = accumulated_message.ends_with('.');
let starts_with_capital = chars
.first()
.map(|&c| c.is_uppercase() && c.is_alphabetic())
.unwrap_or(false);
if ends_with_period && starts_with_capital {
result.push('\n');
}
}
// Handle intra-chunk period-to-capital transitions
for i in 0..chars.len() {
result.push(chars[i]);
// Check if current char is '.' and next char is uppercase letter (no space between)
if chars[i] == '.' && i + 1 < chars.len() {
let next_char = chars[i + 1];
if next_char.is_uppercase() && next_char.is_alphabetic() {
result.push('\n');
}
}
}
result
}
/// Stream Gemini output with dual-buffer approach: chunks for UI updates, messages for storage.
///
/// **Chunks** (~2KB): Frequent UI updates using "replace" patches for smooth streaming
/// **Messages** (~8KB): Logical boundaries using "add" patches for new entries
/// **Consistent WAL/DB**: Both systems see same message structure via JSON patches
pub async fn stream_gemini_chunked(
mut output: impl tokio::io::AsyncRead + Unpin,
pool: sqlx::SqlitePool,
attempt_id: Uuid,
execution_process_id: Uuid,
) {
use tokio::io::{AsyncReadExt, BufReader};
let chunk_limit = max_chunk_size();
let display_chunk_size = max_display_size(); // ~2KB for UI updates
let message_boundary_size = max_message_size(); // ~8KB for new message boundaries
let max_latency = std::time::Duration::from_millis(max_latency_ms());
let mut reader = BufReader::new(&mut output);
// Dual buffers: chunk buffer for UI, message buffer for DB
let mut current_message = String::new(); // Current assistant message content
let mut db_buffer = String::new(); // Buffer for database storage (using ChunkStore)
let mut entry_count = 0usize; // Track assistant message entries
let mut read_buf = vec![0u8; chunk_limit]; // Read buffer sized by the configurable chunk limit
let mut last_chunk_emit = Instant::now();
// Configuration for WAL and DB management
let config = GeminiStreamConfig::default();
tracing::info!(
"Starting dual-buffer Gemini streaming for attempt {} (chunks: {}B, messages: {}B)",
attempt_id,
display_chunk_size,
message_boundary_size
);
loop {
match reader.read(&mut read_buf).await {
Ok(0) => {
// EOF: emit final content and flush to database
Self::emit_final_content(
execution_process_id,
&current_message,
&mut entry_count,
)
.await;
// Flush any remaining database buffer
Self::finalize_execution(&pool, execution_process_id, &db_buffer).await;
break;
}
Ok(n) => {
// Convert bytes to string and apply Gemini-specific formatting
let raw_chunk = String::from_utf8_lossy(&read_buf[..n]);
let formatted_chunk = Self::format_gemini_output(&raw_chunk, &current_message);
// Add to both buffers
current_message.push_str(&formatted_chunk);
db_buffer.push_str(&formatted_chunk);
// 1. Check for chunk emission (frequent UI updates ~2KB)
let should_emit_chunk = current_message.len() >= display_chunk_size
|| (last_chunk_emit.elapsed() >= max_latency
&& !current_message.is_empty());
if should_emit_chunk {
// Emit "replace" patch for growing message (smooth UI)
Self::emit_message_patch(
execution_process_id,
&current_message,
&mut entry_count,
false, // Not forcing new message
);
last_chunk_emit = Instant::now();
}
// 2. Check for message boundary (new assistant message ~8KB)
let should_start_new_message = current_message.len() >= message_boundary_size;
if should_start_new_message {
// Find optimal boundary for new message
let boundary =
Self::find_chunk_boundary(&current_message, message_boundary_size);
if boundary > 0 && boundary < current_message.len() {
// Split at boundary: complete current message, start new one
let completed_message = current_message[..boundary].to_string();
let remaining_content = current_message[boundary..].to_string();
// CRITICAL FIX: Only emit "replace" patch to complete current message
// Do NOT emit "add" patch as it shifts existing database entries
Self::emit_message_patch(
execution_process_id,
&completed_message,
&mut entry_count,
false, // Complete current message
);
// Store the completed message to database
// This ensures the database gets the completed content at the boundary
Self::maybe_flush_chunk(
&pool,
execution_process_id,
&mut db_buffer,
&config,
)
.await;
// Start fresh message with remaining content (no WAL patch yet)
// Next chunk emission will create "replace" patch for entry_count + 1
current_message = remaining_content;
entry_count += 1; // Move to next entry index for future patches
}
}
// 3. Flush to database (same boundary detection)
Self::maybe_flush_chunk(&pool, execution_process_id, &mut db_buffer, &config)
.await;
}
Err(e) => {
tracing::error!(
"Error reading stdout for Gemini attempt {}: {}",
attempt_id,
e
);
break;
}
}
}
tracing::info!(
"Dual-buffer Gemini streaming completed for attempt {} ({} messages)",
attempt_id,
entry_count
);
}
}


@@ -1,67 +0,0 @@
//! Gemini executor configuration and environment variable resolution
//!
//! This module contains configuration structures and functions for the Gemini executor,
//! including environment variable resolution for runtime parameters.
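//!
//! For example (value illustrative), the UI emission threshold can be raised at
//! runtime by setting `GEMINI_CLI_MAX_DISPLAY_SIZE=4096` in the environment before
//! starting the process; unset variables fall back to the defaults below.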
/// Configuration for Gemini WAL compaction and DB chunking
#[derive(Debug, Clone)]
pub struct GeminiStreamConfig {
pub max_db_chunk_size: usize,
pub wal_compaction_threshold: usize,
pub wal_compaction_size: usize,
pub wal_compaction_interval_ms: u64,
pub max_wal_batches: usize,
pub max_wal_total_size: usize,
}
impl Default for GeminiStreamConfig {
fn default() -> Self {
Self {
max_db_chunk_size: max_message_size(),
wal_compaction_threshold: 40,
wal_compaction_size: max_message_size() * 2,
wal_compaction_interval_ms: 30000,
max_wal_batches: 100,
max_wal_total_size: 1024 * 1024, // 1MB per process
}
}
}
// Constants for configuration
/// Size-based streaming configuration
pub const DEFAULT_MAX_CHUNK_SIZE: usize = 5120; // bytes (read buffer size)
pub const DEFAULT_MAX_DISPLAY_SIZE: usize = 2000; // bytes (SSE emission threshold for smooth UI)
pub const DEFAULT_MAX_MESSAGE_SIZE: usize = 8000; // bytes (message boundary for new assistant entries)
pub const DEFAULT_MAX_LATENCY_MS: u64 = 50; // milliseconds
/// Resolve MAX_CHUNK_SIZE from env or fallback
pub fn max_chunk_size() -> usize {
std::env::var("GEMINI_CLI_MAX_CHUNK_SIZE")
.ok()
.and_then(|v| v.parse::<usize>().ok())
.unwrap_or(DEFAULT_MAX_CHUNK_SIZE)
}
/// Resolve MAX_DISPLAY_SIZE from env or fallback
pub fn max_display_size() -> usize {
std::env::var("GEMINI_CLI_MAX_DISPLAY_SIZE")
.ok()
.and_then(|v| v.parse::<usize>().ok())
.unwrap_or(DEFAULT_MAX_DISPLAY_SIZE)
}
/// Resolve MAX_MESSAGE_SIZE from env or fallback
pub fn max_message_size() -> usize {
std::env::var("GEMINI_CLI_MAX_MESSAGE_SIZE")
.ok()
.and_then(|v| v.parse::<usize>().ok())
.unwrap_or(DEFAULT_MAX_MESSAGE_SIZE)
}
/// Resolve MAX_LATENCY_MS from env or fallback
pub fn max_latency_ms() -> u64 {
std::env::var("GEMINI_CLI_MAX_LATENCY_MS")
.ok()
.and_then(|v| v.parse::<u64>().ok())
.unwrap_or(DEFAULT_MAX_LATENCY_MS)
}


@@ -1,363 +0,0 @@
//! Gemini streaming functionality with WAL and chunked storage
//!
//! This module provides real-time streaming support for Gemini execution processes
//! with Write-Ahead Log (WAL) capabilities for resumable streaming.
use std::{collections::HashMap, sync::Mutex, time::Instant};
use json_patch::{patch, Patch, PatchOperation};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use uuid::Uuid;
use super::config::GeminiStreamConfig;
use crate::{
executor::{NormalizedEntry, NormalizedEntryType},
models::execution_process::ExecutionProcess,
};
lazy_static::lazy_static! {
/// Write-Ahead Log: Maps execution_process_id → WAL state (Gemini-specific)
static ref GEMINI_WAL_MAP: Mutex<HashMap<Uuid, GeminiWalState>> = Mutex::new(HashMap::new());
}
/// A batch of JSON patches for Gemini streaming
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct GeminiPatchBatch {
/// Monotonic batch identifier for cursor-based streaming
pub batch_id: u64,
/// Array of JSON Patch operations (RFC 6902 format)
pub patches: Vec<Value>,
/// ISO 8601 timestamp when this batch was created
pub timestamp: String,
/// Total content length after applying all patches in this batch
pub content_length: usize,
}
/// WAL state for a single Gemini execution process
#[derive(Debug)]
pub struct GeminiWalState {
pub batches: Vec<GeminiPatchBatch>,
pub total_content_length: usize,
pub next_batch_id: u64,
pub last_compaction: Instant,
pub last_db_flush: Instant,
pub last_access: Instant,
}
impl Default for GeminiWalState {
fn default() -> Self {
Self::new()
}
}
impl GeminiWalState {
pub fn new() -> Self {
let now = Instant::now();
Self {
batches: Vec::new(),
total_content_length: 0,
next_batch_id: 1,
last_compaction: now,
last_db_flush: now,
last_access: now,
}
}
}
/// Gemini streaming utilities
pub struct GeminiStreaming;
impl GeminiStreaming {
/// Push patches to the Gemini WAL system
pub fn push_patch(execution_process_id: Uuid, patches: Vec<Value>, content_length: usize) {
let mut wal_map = GEMINI_WAL_MAP.lock().unwrap();
let wal_state = wal_map.entry(execution_process_id).or_default();
let config = GeminiStreamConfig::default();
// Update access time for orphan cleanup
wal_state.last_access = Instant::now();
// Enforce size limits - force compaction instead of clearing to prevent data loss
if wal_state.batches.len() >= config.max_wal_batches
|| wal_state.total_content_length >= config.max_wal_total_size
{
tracing::warn!(
"WAL size limits exceeded for process {} (batches: {}, size: {}), forcing compaction",
execution_process_id,
wal_state.batches.len(),
wal_state.total_content_length
);
// Force compaction to preserve data instead of losing it
Self::compact_wal(wal_state);
// If still over limits after compaction, keep only the most recent batches
if wal_state.batches.len() >= config.max_wal_batches {
let keep_count = config.max_wal_batches / 2; // Keep half
let remove_count = wal_state.batches.len() - keep_count;
wal_state.batches.drain(..remove_count);
tracing::warn!(
"After compaction still over limit, kept {} most recent batches",
keep_count
);
}
}
let batch = GeminiPatchBatch {
batch_id: wal_state.next_batch_id,
patches,
timestamp: chrono::Utc::now().to_rfc3339(),
content_length,
};
wal_state.next_batch_id += 1;
wal_state.batches.push(batch);
wal_state.total_content_length = content_length;
// Check if compaction is needed
if Self::should_compact(wal_state, &config) {
Self::compact_wal(wal_state);
}
}
/// Get WAL batches for an execution process, optionally filtering by cursor
pub fn get_wal_batches(
execution_process_id: Uuid,
after_batch_id: Option<u64>,
) -> Option<Vec<GeminiPatchBatch>> {
GEMINI_WAL_MAP.lock().ok().and_then(|mut wal_map| {
wal_map.get_mut(&execution_process_id).map(|wal_state| {
// Update access time when WAL is retrieved
wal_state.last_access = Instant::now();
match after_batch_id {
Some(cursor) => {
// Return only batches with batch_id > cursor
wal_state
.batches
.iter()
.filter(|batch| batch.batch_id > cursor)
.cloned()
.collect()
}
None => {
// Return all batches
wal_state.batches.clone()
}
}
})
})
}
/// Clean up WAL when execution process finishes
pub async fn finalize_execution(
pool: &sqlx::SqlitePool,
execution_process_id: Uuid,
final_buffer: &str,
) {
// Flush any remaining content to database
if !final_buffer.trim().is_empty() {
Self::store_chunk_to_db(pool, execution_process_id, final_buffer).await;
}
// Remove WAL entry
Self::purge_wal(execution_process_id);
}
/// Remove WAL entry for a specific execution process
pub fn purge_wal(execution_process_id: Uuid) {
if let Ok(mut wal_map) = GEMINI_WAL_MAP.lock() {
wal_map.remove(&execution_process_id);
tracing::debug!(
"Cleaned up WAL for execution process {}",
execution_process_id
);
}
}
/// Find the best boundary to split a chunk (newline preferred, sentence fallback)
pub fn find_chunk_boundary(buffer: &str, max_size: usize) -> usize {
if buffer.len() <= max_size {
return buffer.len();
}
let search_window = &buffer[..max_size];
// First preference: newline boundary
if let Some(pos) = search_window.rfind('\n') {
return pos + 1; // Include the newline
}
// Second preference: sentence boundary (., !, ?)
if let Some(pos) = search_window.rfind(&['.', '!', '?'][..]) {
if pos + 1 < search_window.len() {
return pos + 1;
}
}
// Fallback: word boundary
if let Some(pos) = search_window.rfind(' ') {
return pos + 1;
}
// Last resort: split at max_size
max_size
}
/// Store a chunk to the database
async fn store_chunk_to_db(pool: &sqlx::SqlitePool, execution_process_id: Uuid, content: &str) {
if content.trim().is_empty() {
return;
}
let entry = NormalizedEntry {
timestamp: Some(chrono::Utc::now().to_rfc3339()),
entry_type: NormalizedEntryType::AssistantMessage,
content: content.to_string(),
metadata: None,
};
match serde_json::to_string(&entry) {
Ok(jsonl_line) => {
let formatted_line = format!("{}\n", jsonl_line);
if let Err(e) =
ExecutionProcess::append_stdout(pool, execution_process_id, &formatted_line)
.await
{
tracing::error!("Failed to store chunk to database: {}", e);
} else {
tracing::debug!("Stored {}B chunk to database", content.len());
}
}
Err(e) => {
tracing::error!("Failed to serialize chunk: {}", e);
}
}
}
/// Conditionally flush accumulated content to database in chunks
pub async fn maybe_flush_chunk(
pool: &sqlx::SqlitePool,
execution_process_id: Uuid,
buffer: &mut String,
config: &GeminiStreamConfig,
) {
if buffer.len() < config.max_db_chunk_size {
return;
}
// Find the best split point (newline preferred, sentence boundary fallback)
let split_point = Self::find_chunk_boundary(buffer, config.max_db_chunk_size);
if split_point > 0 {
let chunk = buffer[..split_point].to_string();
buffer.drain(..split_point);
// Store chunk to database
Self::store_chunk_to_db(pool, execution_process_id, &chunk).await;
// Update WAL flush time
if let Ok(mut wal_map) = GEMINI_WAL_MAP.lock() {
if let Some(wal_state) = wal_map.get_mut(&execution_process_id) {
wal_state.last_db_flush = Instant::now();
}
}
}
}
/// Check if WAL compaction is needed based on configured thresholds
fn should_compact(wal_state: &GeminiWalState, config: &GeminiStreamConfig) -> bool {
wal_state.batches.len() >= config.wal_compaction_threshold
|| wal_state.total_content_length >= config.wal_compaction_size
|| wal_state.last_compaction.elapsed().as_millis() as u64
>= config.wal_compaction_interval_ms
}
/// Compact WAL by losslessly merging older patches into a snapshot
fn compact_wal(wal_state: &mut GeminiWalState) {
// Need at least a few batches to make compaction worthwhile
if wal_state.batches.len() <= 5 {
return;
}
// Keep the most recent 3 batches for smooth incremental updates
let recent_count = 3;
let compact_count = wal_state.batches.len() - recent_count;
if compact_count <= 1 {
return; // Not enough to compact
}
// Start with an empty conversation and apply all patches sequentially
let mut conversation_value = serde_json::json!({
"entries": [],
"session_id": null,
"executor_type": "gemini",
"prompt": null,
"summary": null
});
let mut total_content_length = 0;
let oldest_batch_id = wal_state.batches[0].batch_id;
let compact_timestamp = chrono::Utc::now().to_rfc3339();
// Apply patches from oldest to newest (excluding recent ones) using json-patch crate
for batch in &wal_state.batches[..compact_count] {
// Convert Vec<Value> to json_patch::Patch
let patch_operations: Result<Vec<PatchOperation>, _> = batch
.patches
.iter()
.map(|p| serde_json::from_value(p.clone()))
.collect();
match patch_operations {
Ok(ops) => {
let patch_obj = Patch(ops);
if let Err(e) = patch(&mut conversation_value, &patch_obj) {
tracing::warn!("Failed to apply patch during compaction: {}, skipping", e);
continue;
}
}
Err(e) => {
tracing::warn!("Failed to deserialize patch operations: {}, skipping", e);
continue;
}
}
total_content_length = batch.content_length; // Use the final length
}
// Extract the final entries array for the snapshot
let final_entries = conversation_value
.get("entries")
.and_then(|v| v.as_array())
.cloned()
.unwrap_or_default();
// Create a single snapshot patch that replaces the entire entries array
let snapshot_patch = GeminiPatchBatch {
batch_id: oldest_batch_id, // Use the oldest batch_id to maintain cursor compatibility
patches: vec![serde_json::json!({
"op": "replace",
"path": "/entries",
"value": final_entries
})],
timestamp: compact_timestamp,
content_length: total_content_length,
};
// Replace old batches with snapshot + keep recent batches
let mut new_batches = vec![snapshot_patch];
new_batches.extend_from_slice(&wal_state.batches[compact_count..]);
wal_state.batches = new_batches;
wal_state.last_compaction = Instant::now();
tracing::info!(
"Losslessly compacted WAL: {} batches → {} (1 snapshot + {} recent), preserving all content",
compact_count + recent_count,
wal_state.batches.len(),
recent_count
);
}
}


@@ -1,25 +0,0 @@
pub mod aider;
pub mod amp;
pub mod ccr;
pub mod charm_opencode;
pub mod claude;
pub mod cleanup_script;
pub mod codex;
pub mod dev_server;
pub mod echo;
pub mod gemini;
pub mod setup_script;
pub mod sst_opencode;
pub use aider::AiderExecutor;
pub use amp::AmpExecutor;
pub use ccr::CCRExecutor;
pub use charm_opencode::CharmOpencodeExecutor;
pub use claude::ClaudeExecutor;
pub use cleanup_script::CleanupScriptExecutor;
pub use codex::CodexExecutor;
pub use dev_server::DevServerExecutor;
pub use echo::EchoExecutor;
pub use gemini::GeminiExecutor;
pub use setup_script::SetupScriptExecutor;
pub use sst_opencode::SstOpencodeExecutor;


@@ -1,127 +0,0 @@
use async_trait::async_trait;
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor::{Executor, ExecutorError},
models::{project::Project, task::Task},
utils::shell::get_shell_command,
};
/// Executor for running project setup scripts
pub struct SetupScriptExecutor {
pub script: String,
}
impl SetupScriptExecutor {
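/// Create a new SetupScriptExecutor for the given setup script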
pub fn new(script: String) -> Self {
Self { script }
}
}
#[async_trait]
impl Executor for SetupScriptExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Validate the task and project exist
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let _project = Project::find_by_id(pool, task.project_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?; // Reuse TaskNotFound for simplicity
let (shell_cmd, shell_arg) = get_shell_command();
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(&self.script)
.working_dir(worktree_path);
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, "SetupScript")
.with_task(task_id, Some(task.title.clone()))
.with_context("Setup script execution")
.spawn_error(e)
})?;
Ok(proc)
}
/// Normalize setup script logs into a readable format
fn normalize_logs(
&self,
logs: &str,
_worktree_path: &str,
) -> Result<crate::executor::NormalizedConversation, String> {
let mut entries = Vec::new();
// Add script command as first entry
entries.push(crate::executor::NormalizedEntry {
timestamp: None,
entry_type: crate::executor::NormalizedEntryType::SystemMessage,
content: format!("Executing setup script:\n{}", self.script),
metadata: None,
});
// Process the logs - split by lines and create entries
if !logs.trim().is_empty() {
let lines: Vec<&str> = logs.lines().collect();
let mut current_chunk = String::new();
for line in lines {
current_chunk.push_str(line);
current_chunk.push('\n');
// Create entry for every 10 lines or when we encounter an error-like line
if current_chunk.lines().count() >= 10
|| line.to_lowercase().contains("error")
|| line.to_lowercase().contains("failed")
|| line.to_lowercase().contains("exception")
{
let entry_type = if line.to_lowercase().contains("error")
|| line.to_lowercase().contains("failed")
|| line.to_lowercase().contains("exception")
{
crate::executor::NormalizedEntryType::ErrorMessage
} else {
crate::executor::NormalizedEntryType::SystemMessage
};
entries.push(crate::executor::NormalizedEntry {
timestamp: Some(chrono::Utc::now().to_rfc3339()),
entry_type,
content: current_chunk.trim().to_string(),
metadata: None,
});
current_chunk.clear();
}
}
// Add any remaining content
if !current_chunk.trim().is_empty() {
entries.push(crate::executor::NormalizedEntry {
timestamp: Some(chrono::Utc::now().to_rfc3339()),
entry_type: crate::executor::NormalizedEntryType::SystemMessage,
content: current_chunk.trim().to_string(),
metadata: None,
});
}
}
Ok(crate::executor::NormalizedConversation {
entries,
session_id: None,
executor_type: "setup-script".to_string(),
prompt: Some(self.script.clone()),
summary: None,
})
}
}


@@ -1,694 +0,0 @@
use async_trait::async_trait;
use serde_json::{json, Value};
use tokio::io::{AsyncBufReadExt, BufReader};
use uuid::Uuid;
use crate::{
command_runner::{CommandProcess, CommandRunner},
executor::{Executor, ExecutorError, NormalizedConversation, NormalizedEntry},
models::{execution_process::ExecutionProcess, executor_session::ExecutorSession, task::Task},
utils::shell::get_shell_command,
};
// Sub-modules for utilities
pub mod filter;
pub mod tools;
use self::{
filter::{parse_session_id_from_line, tool_usage_regex, OpenCodeFilter},
tools::{determine_action_type, generate_tool_content, normalize_tool_name},
};
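/// Stdout/stderr payload extracted from a single line of OpenCode output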
struct Content {
pub stdout: Option<String>,
pub stderr: Option<String>,
}
/// Process a single line for session extraction and content formatting
async fn process_line_for_content(
line: &str,
session_extracted: &mut bool,
worktree_path: &str,
pool: &sqlx::SqlitePool,
execution_process_id: uuid::Uuid,
) -> Option<Content> {
if !*session_extracted {
if let Some(session_id) = parse_session_id_from_line(line) {
if let Err(e) =
ExecutorSession::update_session_id(pool, execution_process_id, &session_id).await
{
tracing::error!(
"Failed to update session ID for execution process {}: {}",
execution_process_id,
e
);
} else {
tracing::info!(
"Updated session ID {} for execution process {}",
session_id,
execution_process_id
);
*session_extracted = true;
}
// Don't return any content for session lines
return None;
}
}
// Check if line is noise - if so, discard it
if OpenCodeFilter::is_noise(line) {
return None;
}
if OpenCodeFilter::is_stderr(line) {
// If it's stderr, we don't need to process it further
return Some(Content {
stdout: None,
stderr: Some(line.to_string()),
});
}
// Format clean content as normalized JSON
let formatted = format_opencode_content_as_normalized_json(line, worktree_path);
Some(Content {
stdout: Some(formatted),
stderr: None,
})
}
/// Stream stderr from OpenCode process with filtering to separate clean output from noise
pub async fn stream_opencode_stderr_to_db(
output: impl tokio::io::AsyncRead + Unpin,
pool: sqlx::SqlitePool,
attempt_id: Uuid,
execution_process_id: Uuid,
worktree_path: String,
) {
let mut reader = BufReader::new(output);
let mut line = String::new();
let mut session_extracted = false;
loop {
line.clear();
match reader.read_line(&mut line).await {
Ok(0) => break, // EOF
Ok(_) => {
line = line.trim_end_matches(['\r', '\n']).to_string();
let content = process_line_for_content(
&line,
&mut session_extracted,
&worktree_path,
&pool,
execution_process_id,
)
.await;
if let Some(Content { stdout, stderr }) = content {
tracing::debug!(
"Processed OpenCode content for attempt {}: stdout={:?} stderr={:?}",
attempt_id,
stdout,
stderr,
);
if let Err(e) = ExecutionProcess::append_output(
&pool,
execution_process_id,
stdout.as_deref(),
stderr.as_deref(),
)
.await
{
tracing::error!(
"Failed to write OpenCode line for attempt {}: {}",
attempt_id,
e
);
}
}
}
Err(e) => {
tracing::error!("Error reading stderr for attempt {}: {}", attempt_id, e);
break;
}
}
}
}
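// Note: callers do not invoke this directly; execute_streaming and execute_followup_streaming
// below hand the child's stderr to a tokio::spawn of this function, so filtered output is
// appended to the database concurrently with the running CLI process.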
/// Format OpenCode clean content as normalized JSON entries for direct database storage
fn format_opencode_content_as_normalized_json(content: &str, worktree_path: &str) -> String {
let mut results = Vec::new();
let base_timestamp = chrono::Utc::now();
let mut entry_counter = 0u32;
for line in content.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
// Generate unique timestamp for each entry by adding microseconds
let unique_timestamp =
base_timestamp + chrono::Duration::microseconds(entry_counter as i64);
let timestamp_str = unique_timestamp.to_rfc3339_opts(chrono::SecondsFormat::Micros, true);
entry_counter += 1;
// Try to parse as existing JSON first
if let Ok(parsed_json) = serde_json::from_str::<Value>(trimmed) {
results.push(parsed_json.to_string());
continue;
}
// Strip ANSI codes before processing
let cleaned = OpenCodeFilter::strip_ansi_codes(trimmed);
let cleaned_trim = cleaned.trim();
if cleaned_trim.is_empty() {
continue;
}
// Check for tool usage patterns after ANSI stripping: | ToolName {...}
if let Some(captures) = tool_usage_regex().captures(cleaned_trim) {
if let (Some(tool_name), Some(tool_input)) = (captures.get(1), captures.get(2)) {
// Parse tool input
let input: serde_json::Value =
serde_json::from_str(tool_input.as_str()).unwrap_or(serde_json::Value::Null);
// Normalize tool name for frontend compatibility (e.g., "Todo" → "todowrite")
let normalized_tool_name = normalize_tool_name(tool_name.as_str());
let normalized_entry = json!({
"timestamp": timestamp_str,
"entry_type": {
"type": "tool_use",
"tool_name": normalized_tool_name,
"action_type": determine_action_type(&normalized_tool_name, &input, worktree_path)
},
"content": generate_tool_content(&normalized_tool_name, &input, worktree_path),
"metadata": input
});
results.push(normalized_entry.to_string());
continue;
}
}
// Regular assistant message
let normalized_entry = json!({
"timestamp": timestamp_str,
"entry_type": {
"type": "assistant_message"
},
"content": cleaned_trim,
"metadata": null
});
results.push(normalized_entry.to_string());
}
// Ensure each JSON entry is on its own line
results.join("\n") + "\n"
}
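// Illustrative mapping (mirrored by the tests at the bottom of this file): a clean line such as
//   | Read {"filePath":"/path/to/repo/hello.js"}
// becomes a single-line JSON entry like
//   {"timestamp":"...","entry_type":{"type":"tool_use","tool_name":"read","action_type":{"action":"file_read","path":"hello.js"}},"content":"`hello.js`","metadata":{...}}
// while any other non-noise, non-JSON line is emitted as an "assistant_message" entry.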
/// An executor that uses SST Opencode CLI to process tasks
pub struct SstOpencodeExecutor {
executor_type: String,
command: String,
}
impl Default for SstOpencodeExecutor {
fn default() -> Self {
Self::new()
}
}
impl SstOpencodeExecutor {
/// Create a new SstOpencodeExecutor with default settings
pub fn new() -> Self {
Self {
executor_type: "SST Opencode".to_string(),
command: "npx -y opencode-ai@latest run --print-logs".to_string(),
}
}
}
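// The default command runs the OpenCode CLI through npx with `--print-logs`; presumably this is
// what surfaces the session-ID and tool-usage lines that the stderr filtering above relies on.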
/// Executor implementation for SST Opencode: spawns new task runs and resumes existing sessions for follow-ups
#[async_trait]
impl Executor for SstOpencodeExecutor {
async fn spawn(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Get the task to fetch its description
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(ExecutorError::TaskNotFound)?;
let prompt = if let Some(task_description) = task.description {
format!(
r#"project_id: {}
Task title: {}
Task description: {}"#,
task.project_id, task.title, task_description
)
} else {
format!(
r#"project_id: {}
Task title: {}"#,
task.project_id, task.title
)
};
// Use shell command for cross-platform compatibility
let (shell_cmd, shell_arg) = get_shell_command();
let opencode_command = &self.command;
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(opencode_command)
.stdin(&prompt)
.working_dir(worktree_path)
.env("NODE_NO_WARNINGS", "1");
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, &self.executor_type)
.with_task(task_id, Some(task.title.clone()))
.with_context(format!("{} CLI execution for new task", self.executor_type))
.spawn_error(e)
})?;
Ok(proc)
}
/// Execute with OpenCode filtering for stderr
async fn execute_streaming(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
attempt_id: Uuid,
execution_process_id: Uuid,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
let mut proc = self.spawn(pool, task_id, worktree_path).await?;
// Get stderr stream from CommandProcess for OpenCode filtering
let mut stream = proc
.stream()
.await
.expect("Failed to get streams from command process");
let stderr = stream
.stderr
.take()
.expect("Failed to get stderr from command stream");
// Start OpenCode stderr filtering task
let pool_clone = pool.clone();
let worktree_path_clone = worktree_path.to_string();
tokio::spawn(stream_opencode_stderr_to_db(
stderr,
pool_clone,
attempt_id,
execution_process_id,
worktree_path_clone,
));
Ok(proc)
}
fn normalize_logs(
&self,
logs: &str,
_worktree_path: &str,
) -> Result<NormalizedConversation, String> {
let mut entries = Vec::new();
for line in logs.lines() {
let trimmed = line.trim();
if trimmed.is_empty() {
continue;
}
// Simple passthrough: directly deserialize normalized JSON entries
if let Ok(entry) = serde_json::from_str::<NormalizedEntry>(trimmed) {
entries.push(entry);
}
}
Ok(NormalizedConversation {
entries,
session_id: None, // Session ID is stored directly in the database
executor_type: "sst-opencode".to_string(),
prompt: None,
summary: None,
})
}
/// Execute follow-up with OpenCode filtering for stderr
async fn execute_followup_streaming(
&self,
pool: &sqlx::SqlitePool,
task_id: Uuid,
attempt_id: Uuid,
execution_process_id: Uuid,
session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
let mut proc = self
.spawn_followup(pool, task_id, session_id, prompt, worktree_path)
.await?;
// Get stderr stream from CommandProcess for OpenCode filtering
let mut stream = proc
.stream()
.await
.expect("Failed to get streams from command process");
let stderr = stream
.stderr
.take()
.expect("Failed to get stderr from command stream");
// Start OpenCode stderr filtering task
let pool_clone = pool.clone();
let worktree_path_clone = worktree_path.to_string();
tokio::spawn(stream_opencode_stderr_to_db(
stderr,
pool_clone,
attempt_id,
execution_process_id,
worktree_path_clone,
));
Ok(proc)
}
async fn spawn_followup(
&self,
_pool: &sqlx::SqlitePool,
_task_id: Uuid,
session_id: &str,
prompt: &str,
worktree_path: &str,
) -> Result<CommandProcess, ExecutorError> {
// Use shell command for cross-platform compatibility
let (shell_cmd, shell_arg) = get_shell_command();
let opencode_command = format!("{} --session {}", self.command, session_id);
let mut command = CommandRunner::new();
command
.command(shell_cmd)
.arg(shell_arg)
.arg(&opencode_command)
.stdin(prompt)
.working_dir(worktree_path)
.env("NODE_NO_WARNINGS", "1");
let proc = command.start().await.map_err(|e| {
crate::executor::SpawnContext::from_command(&command, &self.executor_type)
.with_context(format!(
"{} CLI followup execution for session {}",
self.executor_type, session_id
))
.spawn_error(e)
})?;
Ok(proc)
}
}
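// Rough call flow (a sketch of how the pieces above fit together, not a tested contract):
// spawn/spawn_followup start the CLI via CommandRunner, execute_streaming wires its stderr into
// stream_opencode_stderr_to_db, and normalize_logs later re-reads the stored JSON lines verbatim.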
#[cfg(test)]
mod tests {
use super::*;
use crate::{
executor::ActionType,
executors::sst_opencode::{
format_opencode_content_as_normalized_json, SstOpencodeExecutor,
},
};
// Test the actual format that comes from the database (normalized JSON entries)
#[test]
fn test_normalize_logs_with_database_format() {
let executor = SstOpencodeExecutor::new();
// This is what the database should contain after our streaming function processes it
let logs = r#"{"timestamp":"2025-07-16T18:04:00Z","entry_type":{"type":"tool_use","tool_name":"read","action_type":{"action":"file_read","path":"hello.js"}},"content":"`hello.js`","metadata":{"filePath":"/path/to/repo/hello.js"}}
{"timestamp":"2025-07-16T18:04:01Z","entry_type":{"type":"assistant_message"},"content":"I'll read the hello.js file to see its current contents.","metadata":null}
{"timestamp":"2025-07-16T18:04:02Z","entry_type":{"type":"tool_use","tool_name":"bash","action_type":{"action":"command_run","command":"ls -la"}},"content":"`ls -la`","metadata":{"command":"ls -la"}}
{"timestamp":"2025-07-16T18:04:03Z","entry_type":{"type":"assistant_message"},"content":"The file exists and contains a hello world function.","metadata":null}"#;
let result = executor.normalize_logs(logs, "/path/to/repo").unwrap();
assert_eq!(result.entries.len(), 4);
// First entry: file read tool use
assert!(matches!(
result.entries[0].entry_type,
crate::executor::NormalizedEntryType::ToolUse { .. }
));
if let crate::executor::NormalizedEntryType::ToolUse {
tool_name,
action_type,
} = &result.entries[0].entry_type
{
assert_eq!(tool_name, "read");
assert!(matches!(action_type, ActionType::FileRead { .. }));
}
assert_eq!(result.entries[0].content, "`hello.js`");
assert!(result.entries[0].timestamp.is_some());
// Second entry: assistant message
assert!(matches!(
result.entries[1].entry_type,
crate::executor::NormalizedEntryType::AssistantMessage
));
assert!(result.entries[1].content.contains("read the hello.js file"));
// Third entry: bash tool use
assert!(matches!(
result.entries[2].entry_type,
crate::executor::NormalizedEntryType::ToolUse { .. }
));
if let crate::executor::NormalizedEntryType::ToolUse {
tool_name,
action_type,
} = &result.entries[2].entry_type
{
assert_eq!(tool_name, "bash");
assert!(matches!(action_type, ActionType::CommandRun { .. }));
}
// Fourth entry: assistant message
assert!(matches!(
result.entries[3].entry_type,
crate::executor::NormalizedEntryType::AssistantMessage
));
assert!(result.entries[3].content.contains("The file exists"));
}
#[test]
fn test_normalize_logs_with_session_id() {
let executor = SstOpencodeExecutor::new();
// Test session ID in JSON metadata - current implementation always returns None for session_id
let logs = r#"{"timestamp":"2025-07-16T18:04:00Z","entry_type":{"type":"assistant_message"},"content":"Session started","metadata":null,"session_id":"ses_abc123"}
{"timestamp":"2025-07-16T18:04:01Z","entry_type":{"type":"assistant_message"},"content":"Hello world","metadata":null}"#;
let result = executor.normalize_logs(logs, "/tmp").unwrap();
assert_eq!(result.session_id, None); // Session ID is stored directly in the database
assert_eq!(result.entries.len(), 2);
}
#[test]
fn test_normalize_logs_legacy_fallback() {
let executor = SstOpencodeExecutor::new();
// Current implementation doesn't handle legacy format - it only parses JSON entries
let logs = r#"INFO session=ses_legacy123 starting
| Read {"filePath":"/path/to/file.js"}
This is a plain assistant message"#;
let result = executor.normalize_logs(logs, "/tmp").unwrap();
// Session ID is always None in current implementation
assert_eq!(result.session_id, None);
// Current implementation skips non-JSON lines, so no entries will be parsed
assert_eq!(result.entries.len(), 0);
}
#[test]
fn test_format_opencode_content_as_normalized_json() {
let content = r#"| Read {"filePath":"/path/to/repo/hello.js"}
I'll read this file to understand its contents.
| bash {"command":"ls -la"}
The file listing shows several items."#;
let result = format_opencode_content_as_normalized_json(content, "/path/to/repo");
let lines: Vec<&str> = result
.split('\n')
.filter(|line| !line.trim().is_empty())
.collect();
// Should have 4 entries (2 tool uses + 2 assistant messages)
assert_eq!(lines.len(), 4);
// Parse all entries and verify unique timestamps
let mut timestamps = Vec::new();
for line in &lines {
let json: serde_json::Value = serde_json::from_str(line).unwrap();
let timestamp = json["timestamp"].as_str().unwrap().to_string();
timestamps.push(timestamp);
}
// Verify all timestamps are unique (no duplicates)
let mut unique_timestamps = timestamps.clone();
unique_timestamps.sort();
unique_timestamps.dedup();
assert_eq!(
timestamps.len(),
unique_timestamps.len(),
"All timestamps should be unique"
);
// Parse the first line (should be read tool use - normalized to lowercase)
let first_json: serde_json::Value = serde_json::from_str(lines[0]).unwrap();
assert_eq!(first_json["entry_type"]["type"], "tool_use");
assert_eq!(first_json["entry_type"]["tool_name"], "read");
assert_eq!(first_json["content"], "`hello.js`");
// Parse the second line (should be assistant message)
let second_json: serde_json::Value = serde_json::from_str(lines[1]).unwrap();
assert_eq!(second_json["entry_type"]["type"], "assistant_message");
assert!(second_json["content"]
.as_str()
.unwrap()
.contains("read this file"));
// Parse the third line (should be bash tool use)
let third_json: serde_json::Value = serde_json::from_str(lines[2]).unwrap();
assert_eq!(third_json["entry_type"]["type"], "tool_use");
assert_eq!(third_json["entry_type"]["tool_name"], "bash");
assert_eq!(third_json["content"], "`ls -la`");
// Verify timestamps include microseconds for uniqueness
for timestamp in timestamps {
assert!(
timestamp.contains('.'),
"Timestamp should include microseconds: {}",
timestamp
);
}
}
#[test]
fn test_format_opencode_content_todo_operations() {
let content = r#"| TodoWrite {"todos":[{"id":"1","content":"Fix bug","status":"completed","priority":"high"},{"id":"2","content":"Add feature","status":"in_progress","priority":"medium"}]}"#;
let result = format_opencode_content_as_normalized_json(content, "/tmp");
let json: serde_json::Value = serde_json::from_str(&result).unwrap();
assert_eq!(json["entry_type"]["type"], "tool_use");
assert_eq!(json["entry_type"]["tool_name"], "todowrite"); // Normalized from "TodoWrite"
assert_eq!(json["entry_type"]["action_type"]["action"], "other"); // Changed from task_create to other
// Should contain formatted todo list
let content_str = json["content"].as_str().unwrap();
assert!(content_str.contains("TODO List:"));
assert!(content_str.contains("✅ Fix bug (high)"));
assert!(content_str.contains("🔄 Add feature (medium)"));
}
#[test]
fn test_format_opencode_content_todo_tool() {
// Test the "Todo" tool (case-sensitive, different from todowrite/todoread)
let content = r#"| Todo {"todos":[{"id":"1","content":"Review code","status":"pending","priority":"high"},{"id":"2","content":"Write tests","status":"in_progress","priority":"low"}]}"#;
let result = format_opencode_content_as_normalized_json(content, "/tmp");
let json: serde_json::Value = serde_json::from_str(&result).unwrap();
assert_eq!(json["entry_type"]["type"], "tool_use");
assert_eq!(json["entry_type"]["tool_name"], "todowrite"); // Normalized from "Todo"
assert_eq!(json["entry_type"]["action_type"]["action"], "other"); // Changed from task_create to other
// Should contain formatted todo list with proper emojis
let content_str = json["content"].as_str().unwrap();
assert!(content_str.contains("TODO List:"));
assert!(content_str.contains("⏳ Review code (high)"));
assert!(content_str.contains("🔄 Write tests (low)"));
}
#[test]
fn test_opencode_filter_noise_detection() {
use crate::executors::sst_opencode::filter::OpenCodeFilter;
// Test noise detection
assert!(OpenCodeFilter::is_noise(""));
assert!(OpenCodeFilter::is_noise(" "));
assert!(OpenCodeFilter::is_noise("█▀▀█ █▀▀█ Banner"));
assert!(OpenCodeFilter::is_noise("@ anthropic/claude-sonnet-4"));
assert!(OpenCodeFilter::is_noise("~ https://opencode.ai/s/abc123"));
assert!(OpenCodeFilter::is_noise("DEBUG some debug info"));
assert!(OpenCodeFilter::is_noise("INFO session info"));
assert!(OpenCodeFilter::is_noise("┌─────────────────┐"));
// Test clean content detection (not noise)
assert!(!OpenCodeFilter::is_noise("| Read {\"file\":\"test.js\"}"));
assert!(!OpenCodeFilter::is_noise("Assistant response text"));
assert!(!OpenCodeFilter::is_noise("{\"type\":\"content\"}"));
assert!(!OpenCodeFilter::is_noise("session=abc123 started"));
assert!(!OpenCodeFilter::is_noise("Normal conversation text"));
}
#[test]
fn test_normalize_logs_edge_cases() {
let executor = SstOpencodeExecutor::new();
// Empty content
let result = executor.normalize_logs("", "/tmp").unwrap();
assert_eq!(result.entries.len(), 0);
// Only whitespace
let result = executor.normalize_logs(" \n\t\n ", "/tmp").unwrap();
assert_eq!(result.entries.len(), 0);
// Malformed JSON (current implementation skips invalid JSON)
let malformed = r#"{"timestamp":"2025-01-16T18:04:00Z","content":"incomplete"#;
let result = executor.normalize_logs(malformed, "/tmp").unwrap();
assert_eq!(result.entries.len(), 0); // Current implementation skips invalid JSON
// Mixed valid and invalid JSON
let mixed = r#"{"timestamp":"2025-01-16T18:04:00Z","entry_type":{"type":"assistant_message"},"content":"Valid entry","metadata":null}
Invalid line that's not JSON
{"timestamp":"2025-01-16T18:04:01Z","entry_type":{"type":"assistant_message"},"content":"Another valid entry","metadata":null}"#;
let result = executor.normalize_logs(mixed, "/tmp").unwrap();
assert_eq!(result.entries.len(), 2); // Only valid JSON entries are parsed
}
#[test]
fn test_ansi_code_stripping() {
use crate::executors::sst_opencode::filter::OpenCodeFilter;
// Test ANSI escape sequence removal
let ansi_text = "\x1b[31mRed text\x1b[0m normal text";
let cleaned = OpenCodeFilter::strip_ansi_codes(ansi_text);
assert_eq!(cleaned, "Red text normal text");
// Test unicode escape sequences
let unicode_ansi = "Text with \\u001b[32mgreen\\u001b[0m color";
let cleaned = OpenCodeFilter::strip_ansi_codes(unicode_ansi);
assert_eq!(cleaned, "Text with green color");
// Test text without ANSI codes (unchanged)
let plain_text = "Regular text without codes";
let cleaned = OpenCodeFilter::strip_ansi_codes(plain_text);
assert_eq!(cleaned, plain_text);
}
}

View File

@@ -1,184 +0,0 @@
use lazy_static::lazy_static;
use regex::Regex;
lazy_static! {
static ref OPENCODE_LOG_REGEX: Regex = Regex::new(r"^(INFO|DEBUG|WARN|ERROR)\s+.*").unwrap();
static ref SESSION_ID_REGEX: Regex = Regex::new(r".*\b(id|session|sessionID)=([^ ]+)").unwrap();
static ref TOOL_USAGE_REGEX: Regex = Regex::new(r"^\|\s*([a-zA-Z]+)\s*(.*)").unwrap();
static ref NPM_WARN_REGEX: Regex = Regex::new(r"^npm warn .*").unwrap();
}
/// Filter for OpenCode stderr output
pub struct OpenCodeFilter;
impl OpenCodeFilter {
/// Check if a line should be skipped as noise
pub fn is_noise(line: &str) -> bool {
let trimmed = line.trim();
// Empty lines are noise
if trimmed.is_empty() {
return true;
}
// Strip ANSI escape codes for analysis
let cleaned = Self::strip_ansi_codes(trimmed);
let cleaned_trim = cleaned.trim();
// Skip tool calls - they are NOT noise
if TOOL_USAGE_REGEX.is_match(cleaned_trim) {
return false;
}
// OpenCode log lines are noise (includes session logs)
if is_opencode_log_line(cleaned_trim) {
return true;
}
if NPM_WARN_REGEX.is_match(cleaned_trim) {
return true;
}
// Spinner glyphs (a single braille spinner character; compare char count, not byte length)
if cleaned_trim.chars().count() == 1 && "⠋⠙⠹⠸⠼⠴⠦⠧⠇⠏".contains(cleaned_trim) {
return true;
}
// Banner lines containing block glyphs (Unicode Block Elements range)
if cleaned_trim
.chars()
.any(|c| ('\u{2580}'..='\u{259F}').contains(&c))
{
return true;
}
// UI/stats frames using Box Drawing glyphs (U+2500-257F)
if cleaned_trim
.chars()
.any(|c| ('\u{2500}'..='\u{257F}').contains(&c))
{
return true;
}
// Model banner (@ with spaces)
if cleaned_trim.starts_with("@ ") {
return true;
}
// Share link
if cleaned_trim.starts_with("~") && cleaned_trim.contains("https://opencode.ai/s/") {
return true;
}
// Everything else (assistant messages) is NOT noise
false
}
pub fn is_stderr(_line: &str) -> bool {
false
}
/// Strip ANSI escape codes from text (conservative)
pub fn strip_ansi_codes(text: &str) -> String {
// Handle both unicode escape sequences and raw ANSI codes
let result = text.replace("\\u001b", "\x1b");
let mut cleaned = String::new();
let mut chars = result.chars().peekable();
while let Some(ch) = chars.next() {
if ch == '\x1b' {
// Skip ANSI escape sequence
if chars.peek() == Some(&'[') {
chars.next(); // consume '['
// Skip until we find a letter (end of ANSI sequence)
for next_ch in chars.by_ref() {
if next_ch.is_ascii_alphabetic() {
break;
}
}
}
} else {
cleaned.push(ch);
}
}
cleaned
}
}
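// strip_ansi_codes examples (mirrored in the tests below): "\x1b[31mRed\x1b[0m text" -> "Red text",
// and literal "\\u001b[32m..." sequences are first rewritten to real escape bytes and then removed.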
/// Detect if a line is an OpenCode log line format using regex
pub fn is_opencode_log_line(line: &str) -> bool {
OPENCODE_LOG_REGEX.is_match(line)
}
/// Parse session_id from OpenCode log lines
pub fn parse_session_id_from_line(line: &str) -> Option<String> {
// Only apply to OpenCode log lines
if !is_opencode_log_line(line) {
return None;
}
// Try regex for session ID extraction from service=session logs
if let Some(captures) = SESSION_ID_REGEX.captures(line) {
if let Some(id) = captures.get(2) {
return Some(id.as_str().to_string());
}
}
None
}
/// Get the tool usage regex for parsing tool patterns
pub fn tool_usage_regex() -> &'static Regex {
&TOOL_USAGE_REGEX
}
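// TOOL_USAGE_REGEX matches lines of the form `| ToolName {...}`: capture group 1 is the tool
// name and group 2 is the rest of the line, which callers treat as a JSON payload.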
#[cfg(test)]
mod tests {
#[test]
fn test_session_id_extraction() {
use crate::executors::sst_opencode::filter::parse_session_id_from_line;
// Test session ID extraction from session= format (only works on OpenCode log lines)
assert_eq!(
parse_session_id_from_line("INFO session=ses_abc123 starting"),
Some("ses_abc123".to_string())
);
assert_eq!(
parse_session_id_from_line("DEBUG id=debug_id process"),
Some("debug_id".to_string())
);
// Test lines without log prefix (should return None)
assert_eq!(
parse_session_id_from_line("session=simple_id chatting"),
None
);
// Test no session ID
assert_eq!(parse_session_id_from_line("No session here"), None);
assert_eq!(parse_session_id_from_line(""), None);
}
#[test]
fn test_ansi_code_stripping() {
use crate::executors::sst_opencode::filter::OpenCodeFilter;
// Test ANSI escape sequence removal
let ansi_text = "\x1b[31mRed text\x1b[0m normal text";
let cleaned = OpenCodeFilter::strip_ansi_codes(ansi_text);
assert_eq!(cleaned, "Red text normal text");
// Test unicode escape sequences
let unicode_ansi = "Text with \\u001b[32mgreen\\u001b[0m color";
let cleaned = OpenCodeFilter::strip_ansi_codes(unicode_ansi);
assert_eq!(cleaned, "Text with green color");
// Test text without ANSI codes (unchanged)
let plain_text = "Regular text without codes";
let cleaned = OpenCodeFilter::strip_ansi_codes(plain_text);
assert_eq!(cleaned, plain_text);
}
}

View File

@@ -1,166 +0,0 @@
use serde_json::{json, Value};
use crate::utils::path::make_path_relative;
/// Normalize tool names to match frontend expectations for purple box styling
pub fn normalize_tool_name(tool_name: &str) -> String {
match tool_name {
"Todo" => "todowrite".to_string(), // Generic TODO tool → todowrite
"TodoWrite" => "todowrite".to_string(),
"TodoRead" => "todoread".to_string(),
"ExitPlanMode" => "exitplanmode".to_string(), // Normalize ExitPlanMode to lowercase
_ => tool_name.to_lowercase(), // Convert all tool names to lowercase for consistency
}
}
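// Examples (see the tests below): "Todo" and "TodoWrite" -> "todowrite", "TodoRead" -> "todoread",
// "Read" -> "read", and any unrecognised name is simply lower-cased.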
/// Helper function to determine action type for tool usage
pub fn determine_action_type(tool_name: &str, input: &Value, worktree_path: &str) -> Value {
match tool_name.to_lowercase().as_str() {
"read" => {
if let Some(file_path) = input.get("filePath").and_then(|p| p.as_str()) {
json!({
"action": "file_read",
"path": make_path_relative(file_path, worktree_path)
})
} else {
json!({"action": "other", "description": "File read operation"})
}
}
"write" | "edit" => {
if let Some(file_path) = input.get("filePath").and_then(|p| p.as_str()) {
json!({
"action": "file_write",
"path": make_path_relative(file_path, worktree_path)
})
} else {
json!({"action": "other", "description": "File write operation"})
}
}
"bash" => {
if let Some(command) = input.get("command").and_then(|c| c.as_str()) {
json!({"action": "command_run", "command": command})
} else {
json!({"action": "other", "description": "Command execution"})
}
}
"grep" => {
if let Some(pattern) = input.get("pattern").and_then(|p| p.as_str()) {
json!({"action": "search", "query": pattern})
} else {
json!({"action": "other", "description": "Search operation"})
}
}
"todowrite" | "todoread" => {
json!({"action": "other", "description": "TODO list management"})
}
"exitplanmode" => {
// Extract the plan from the input
let plan_content = if let Some(plan) = input.get("plan").and_then(|p| p.as_str()) {
plan.to_string()
} else {
// Fallback - use the full input as plan if no specific plan field
serde_json::to_string_pretty(input).unwrap_or_default()
};
json!({
"action": "plan_presentation",
"plan": plan_content
})
}
_ => json!({"action": "other", "description": format!("Tool: {}", tool_name)}),
}
}
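// Example: tool "read" with input {"filePath":"/repo/hello.js"} and worktree "/repo" yields
// {"action":"file_read","path":"hello.js"}; unrecognised tools fall back to
// {"action":"other","description":"Tool: <name>"}.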
/// Helper function to generate concise content for tool usage
pub fn generate_tool_content(tool_name: &str, input: &Value, worktree_path: &str) -> String {
match tool_name.to_lowercase().as_str() {
"read" => {
if let Some(file_path) = input.get("filePath").and_then(|p| p.as_str()) {
format!("`{}`", make_path_relative(file_path, worktree_path))
} else {
"Read file".to_string()
}
}
"write" | "edit" => {
if let Some(file_path) = input.get("filePath").and_then(|p| p.as_str()) {
format!("`{}`", make_path_relative(file_path, worktree_path))
} else {
"Write file".to_string()
}
}
"bash" => {
if let Some(command) = input.get("command").and_then(|c| c.as_str()) {
format!("`{}`", command)
} else {
"Execute command".to_string()
}
}
"todowrite" | "todoread" => generate_todo_content(input),
"exitplanmode" => {
// Show the plan content or a summary
if let Some(plan) = input.get("plan").and_then(|p| p.as_str()) {
// Truncate long plans for display (character-based to avoid slicing inside a UTF-8 codepoint)
if plan.chars().count() > 100 {
let truncated: String = plan.chars().take(97).collect();
format!("{}...", truncated)
} else {
plan.to_string()
}
} else {
"Plan presentation".to_string()
}
}
_ => format!("`{}`", tool_name),
}
}
/// Generate formatted content for TODO tools
fn generate_todo_content(input: &Value) -> String {
// Extract todo list from input to show actual todos
if let Some(todos) = input.get("todos").and_then(|t| t.as_array()) {
let mut todo_items = Vec::new();
for todo in todos {
if let Some(content) = todo.get("content").and_then(|c| c.as_str()) {
let status = todo
.get("status")
.and_then(|s| s.as_str())
.unwrap_or("pending");
let status_emoji = match status {
"completed" => "",
"in_progress" => "🔄",
"pending" | "todo" => "",
_ => "📝",
};
let priority = todo
.get("priority")
.and_then(|p| p.as_str())
.unwrap_or("medium");
todo_items.push(format!("{} {} ({})", status_emoji, content, priority));
}
}
if !todo_items.is_empty() {
format!("TODO List:\n{}", todo_items.join("\n"))
} else {
"Managing TODO list".to_string()
}
} else {
"Managing TODO list".to_string()
}
}
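// Example output for a todos array with one completed and one in-progress item
// (matching test_format_opencode_content_todo_operations):
//   TODO List:
//   ✅ Fix bug (high)
//   🔄 Add feature (medium)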
#[cfg(test)]
mod tests {
#[test]
fn test_normalize_tool_name() {
use crate::executors::sst_opencode::tools::normalize_tool_name;
// Test TODO tool normalization
assert_eq!(normalize_tool_name("Todo"), "todowrite");
assert_eq!(normalize_tool_name("TodoWrite"), "todowrite");
assert_eq!(normalize_tool_name("TodoRead"), "todoread");
// Test other tools are converted to lowercase
assert_eq!(normalize_tool_name("Read"), "read");
assert_eq!(normalize_tool_name("Write"), "write");
assert_eq!(normalize_tool_name("bash"), "bash");
assert_eq!(normalize_tool_name("SomeOtherTool"), "someothertool");
}
}

View File

@@ -1,317 +0,0 @@
use std::{str::FromStr, sync::Arc};
use axum::{
body::Body,
http::{header, HeaderValue, StatusCode},
middleware::from_fn_with_state,
response::{IntoResponse, Json as ResponseJson, Response},
routing::{get, post},
Json, Router,
};
use sentry_tower::NewSentryLayer;
use sqlx::{sqlite::SqliteConnectOptions, SqlitePool};
use strip_ansi_escapes::strip;
use tokio::sync::RwLock;
use tower_http::cors::CorsLayer;
use tracing_subscriber::{filter::LevelFilter, prelude::*};
use vibe_kanban::{sentry_layer, Assets, ScriptAssets, SoundAssets};
mod app_state;
mod command_runner;
mod execution_monitor;
mod executor;
mod executors;
mod mcp;
mod middleware;
mod models;
mod routes;
mod services;
mod utils;
use app_state::AppState;
use execution_monitor::execution_monitor;
use middleware::{
load_execution_process_simple_middleware, load_project_middleware,
load_task_attempt_middleware, load_task_middleware, load_task_template_middleware,
};
use models::{ApiResponse, Config, Environment};
use routes::{
auth, config, filesystem, github, health, projects, stream, task_attempts, task_templates,
tasks,
};
use services::PrMonitorService;
async fn echo_handler(
Json(payload): Json<serde_json::Value>,
) -> ResponseJson<ApiResponse<serde_json::Value>> {
ResponseJson(ApiResponse::success(payload))
}
async fn static_handler(uri: axum::extract::Path<String>) -> impl IntoResponse {
let path = uri.trim_start_matches('/');
serve_file(path).await
}
async fn index_handler() -> impl IntoResponse {
serve_file("index.html").await
}
async fn serve_file(path: &str) -> impl IntoResponse {
let file = Assets::get(path);
match file {
Some(content) => {
let mime = mime_guess::from_path(path).first_or_octet_stream();
Response::builder()
.status(StatusCode::OK)
.header(
header::CONTENT_TYPE,
HeaderValue::from_str(mime.as_ref()).unwrap(),
)
.body(Body::from(content.data.into_owned()))
.unwrap()
}
None => {
// For SPA routing, serve index.html for unknown routes
if let Some(index) = Assets::get("index.html") {
Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, HeaderValue::from_static("text/html"))
.body(Body::from(index.data.into_owned()))
.unwrap()
} else {
Response::builder()
.status(StatusCode::NOT_FOUND)
.body(Body::from("404 Not Found"))
.unwrap()
}
}
}
}
async fn serve_sound_file(
axum::extract::Path(filename): axum::extract::Path<String>,
) -> impl IntoResponse {
// Validate filename contains only expected sound files
let valid_sounds = [
"abstract-sound1.wav",
"abstract-sound2.wav",
"abstract-sound3.wav",
"abstract-sound4.wav",
"cow-mooing.wav",
"phone-vibration.wav",
"rooster.wav",
];
if !valid_sounds.contains(&filename.as_str()) {
return Response::builder()
.status(StatusCode::NOT_FOUND)
.body(Body::from("Sound file not found"))
.unwrap();
}
match SoundAssets::get(&filename) {
Some(content) => Response::builder()
.status(StatusCode::OK)
.header(header::CONTENT_TYPE, HeaderValue::from_static("audio/wav"))
.body(Body::from(content.data.into_owned()))
.unwrap(),
None => Response::builder()
.status(StatusCode::NOT_FOUND)
.body(Body::from("Sound file not found"))
.unwrap(),
}
}
fn main() -> anyhow::Result<()> {
let environment = if cfg!(debug_assertions) {
"dev"
} else {
"production"
};
let _guard = sentry::init(("https://1065a1d276a581316999a07d5dffee26@o4509603705192449.ingest.de.sentry.io/4509605576441937", sentry::ClientOptions {
release: sentry::release_name!(),
environment: Some(environment.into()),
attach_stacktrace: true,
..Default::default()
}));
sentry::configure_scope(|scope| {
scope.set_tag("source", "server");
});
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
.block_on(async {
tracing_subscriber::registry()
.with(tracing_subscriber::fmt::layer().with_filter(LevelFilter::INFO))
.with(sentry_layer())
.init();
// Create asset directory if it doesn't exist
if !utils::asset_dir().exists() {
std::fs::create_dir_all(utils::asset_dir())?;
}
// Database connection
let database_url = format!(
"sqlite://{}",
utils::asset_dir().join("db.sqlite").to_string_lossy()
);
let options = SqliteConnectOptions::from_str(&database_url)?.create_if_missing(true);
let pool = SqlitePool::connect_with(options).await?;
sqlx::migrate!("./migrations").run(&pool).await?;
// Load configuration
let config_path = utils::config_path();
let config = Config::load(&config_path)?;
let config_arc = Arc::new(RwLock::new(config));
let env = std::env::var("ENVIRONMENT")
.unwrap_or_else(|_| "local".to_string());
let mode = env.parse().unwrap_or(Environment::Local);
tracing::info!("Running in {mode} mode" );
// Create app state
let app_state = AppState::new(pool.clone(), config_arc.clone(), mode).await;
app_state.update_sentry_scope().await;
// Track session start event
app_state.track_analytics_event("session_start", None).await;
// Start background task to check for init status and spawn processes
let state_clone = app_state.clone();
tokio::spawn(async move {
execution_monitor(state_clone).await;
});
// Start PR monitoring service
let pr_monitor = PrMonitorService::new(pool.clone());
let config_for_monitor = config_arc.clone();
tokio::spawn(async move {
pr_monitor.start_with_config(config_for_monitor).await;
});
// Public routes (no auth required)
let public_routes = Router::new()
.route("/api/health", get(health::health_check))
.route("/api/echo", post(echo_handler));
// Create routers with different middleware layers
let base_routes = Router::new()
.merge(stream::stream_router())
.merge(filesystem::filesystem_router())
.merge(config::config_router())
.merge(auth::auth_router())
.route("/sounds/:filename", get(serve_sound_file))
.merge(
Router::new()
.route("/execution-processes/:process_id", get(task_attempts::get_execution_process))
.route_layer(from_fn_with_state(app_state.clone(), load_execution_process_simple_middleware))
);
// Template routes with task template middleware applied selectively
let template_routes = Router::new()
.route("/templates", get(task_templates::list_templates).post(task_templates::create_template))
.route("/templates/global", get(task_templates::list_global_templates))
.route(
"/projects/:project_id/templates",
get(task_templates::list_project_templates),
)
.merge(
Router::new()
.route(
"/templates/:template_id",
get(task_templates::get_template)
.put(task_templates::update_template)
.delete(task_templates::delete_template),
)
.route_layer(from_fn_with_state(app_state.clone(), load_task_template_middleware))
);
// Project routes with project middleware
let project_routes = Router::new()
.merge(projects::projects_base_router())
.merge(projects::projects_with_id_router()
.layer(from_fn_with_state(app_state.clone(), load_project_middleware)));
// Task routes with appropriate middleware
let task_routes = Router::new()
.merge(tasks::tasks_project_router()
.layer(from_fn_with_state(app_state.clone(), load_project_middleware)))
.merge(tasks::tasks_with_id_router()
.layer(from_fn_with_state(app_state.clone(), load_task_middleware)));
// Task attempt routes with appropriate middleware
let task_attempt_routes = Router::new()
.merge(task_attempts::task_attempts_list_router(app_state.clone())
.layer(from_fn_with_state(app_state.clone(), load_task_middleware)))
.merge(task_attempts::task_attempts_with_id_router(app_state.clone())
.layer(from_fn_with_state(app_state.clone(), load_task_attempt_middleware)));
// Conditionally add GitHub routes for cloud mode
let mut api_routes = Router::new()
.merge(base_routes)
.merge(template_routes)
.merge(project_routes)
.merge(task_routes)
.merge(task_attempt_routes);
if mode.is_cloud() {
api_routes = api_routes.merge(github::github_router());
tracing::info!("GitHub repository routes enabled (cloud mode)");
}
// All routes (no auth required)
let app_routes = Router::new()
.nest(
"/api",
api_routes
.layer(from_fn_with_state(app_state.clone(), auth::sentry_user_context_middleware)),
);
let app = Router::new()
.merge(public_routes)
.merge(app_routes)
// Static file serving routes
.route("/", get(index_handler))
.route("/*path", get(static_handler))
.with_state(app_state)
.layer(CorsLayer::permissive())
.layer(NewSentryLayer::new_from_top());
let port = std::env::var("BACKEND_PORT")
.or_else(|_| std::env::var("PORT"))
.ok()
.and_then(|s| {
// remove any ANSI codes, then turn into String
let cleaned = String::from_utf8(strip(s.as_bytes()))
.expect("UTF-8 after stripping ANSI");
cleaned.trim().parse::<u16>().ok()
})
.unwrap_or_else(|| {
tracing::info!("No PORT environment variable set, using port 0 for auto-assignment");
0
}); // Use 0 to find free port if no specific port provided
let host = std::env::var("HOST").unwrap_or_else(|_| "127.0.0.1".to_string());
let listener = tokio::net::TcpListener::bind(format!("{host}:{port}")).await?;
let actual_port = listener.local_addr()?.port(); // e.g. 53427 when port 0 lets the OS pick a free port
tracing::info!("Server running on http://{host}:{actual_port}");
if !cfg!(debug_assertions) {
tracing::info!("Opening browser...");
if let Err(e) = utils::open_browser(&format!("http://127.0.0.1:{actual_port}")).await {
tracing::warn!("Failed to open browser automatically: {}. Please open http://127.0.0.1:{} manually.", e, actual_port);
}
}
axum::serve(listener, app).await?;
Ok(())
})
}

View File

@@ -1,242 +0,0 @@
use axum::{
extract::{Path, State},
http::StatusCode,
middleware::Next,
response::Response,
};
use uuid::Uuid;
use crate::{
app_state::AppState,
models::{
execution_process::ExecutionProcess, project::Project, task::Task,
task_attempt::TaskAttempt, task_template::TaskTemplate,
},
};
/// Middleware that loads and injects a Project based on the project_id path parameter
pub async fn load_project_middleware(
State(app_state): State<AppState>,
Path(project_id): Path<Uuid>,
request: axum::extract::Request,
next: Next,
) -> Result<Response, StatusCode> {
// Load the project from the database
let project = match Project::find_by_id(&app_state.db_pool, project_id).await {
Ok(Some(project)) => project,
Ok(None) => {
tracing::warn!("Project {} not found", project_id);
return Err(StatusCode::NOT_FOUND);
}
Err(e) => {
tracing::error!("Failed to fetch project {}: {}", project_id, e);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
// Insert the project as an extension
let mut request = request;
request.extensions_mut().insert(project);
// Continue with the next middleware/handler
Ok(next.run(request).await)
}
/// Middleware that loads and injects both Project and Task based on project_id and task_id path parameters
pub async fn load_task_middleware(
State(app_state): State<AppState>,
Path((project_id, task_id)): Path<(Uuid, Uuid)>,
request: axum::extract::Request,
next: Next,
) -> Result<Response, StatusCode> {
// Load the project first
let project = match Project::find_by_id(&app_state.db_pool, project_id).await {
Ok(Some(project)) => project,
Ok(None) => {
tracing::warn!("Project {} not found", project_id);
return Err(StatusCode::NOT_FOUND);
}
Err(e) => {
tracing::error!("Failed to fetch project {}: {}", project_id, e);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
// Load the task and validate it belongs to the project
let task = match Task::find_by_id_and_project_id(&app_state.db_pool, task_id, project_id).await
{
Ok(Some(task)) => task,
Ok(None) => {
tracing::warn!("Task {} not found in project {}", task_id, project_id);
return Err(StatusCode::NOT_FOUND);
}
Err(e) => {
tracing::error!(
"Failed to fetch task {} in project {}: {}",
task_id,
project_id,
e
);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
// Insert both models as extensions
let mut request = request;
request.extensions_mut().insert(project);
request.extensions_mut().insert(task);
// Continue with the next middleware/handler
Ok(next.run(request).await)
}
/// Middleware that loads and injects Project, Task, and TaskAttempt based on project_id, task_id, and attempt_id path parameters
pub async fn load_task_attempt_middleware(
State(app_state): State<AppState>,
Path((project_id, task_id, attempt_id)): Path<(Uuid, Uuid, Uuid)>,
request: axum::extract::Request,
next: Next,
) -> Result<Response, StatusCode> {
// Load the full context in one call using the existing method
let context = match TaskAttempt::load_context(
&app_state.db_pool,
attempt_id,
task_id,
project_id,
)
.await
{
Ok(context) => context,
Err(e) => {
tracing::error!(
"Failed to load context for attempt {} in task {} in project {}: {}",
attempt_id,
task_id,
project_id,
e
);
return Err(StatusCode::NOT_FOUND);
}
};
// Insert all models as extensions
let mut request = request;
request.extensions_mut().insert(context.project);
request.extensions_mut().insert(context.task);
request.extensions_mut().insert(context.task_attempt);
// Continue with the next middleware/handler
Ok(next.run(request).await)
}
/// Simple middleware that loads and injects ExecutionProcess based on the process_id path parameter
/// without any additional validation
pub async fn load_execution_process_simple_middleware(
State(app_state): State<AppState>,
Path(process_id): Path<Uuid>,
mut request: axum::extract::Request,
next: Next,
) -> Result<Response, StatusCode> {
// Load the execution process from the database
let execution_process = match ExecutionProcess::find_by_id(&app_state.db_pool, process_id).await
{
Ok(Some(process)) => process,
Ok(None) => {
tracing::warn!("ExecutionProcess {} not found", process_id);
return Err(StatusCode::NOT_FOUND);
}
Err(e) => {
tracing::error!("Failed to fetch execution process {}: {}", process_id, e);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
// Inject the execution process into the request
request.extensions_mut().insert(execution_process);
// Continue to the next middleware/handler
Ok(next.run(request).await)
}
/// Middleware that loads and injects Project, Task, TaskAttempt, and ExecutionProcess
/// based on the path parameters: project_id, task_id, attempt_id, process_id
pub async fn load_execution_process_with_context_middleware(
State(app_state): State<AppState>,
Path((project_id, task_id, attempt_id, process_id)): Path<(Uuid, Uuid, Uuid, Uuid)>,
request: axum::extract::Request,
next: Next,
) -> Result<Response, StatusCode> {
// Load the task attempt context first
let context = match TaskAttempt::load_context(
&app_state.db_pool,
attempt_id,
task_id,
project_id,
)
.await
{
Ok(context) => context,
Err(e) => {
tracing::error!(
"Failed to load context for attempt {} in task {} in project {}: {}",
attempt_id,
task_id,
project_id,
e
);
return Err(StatusCode::NOT_FOUND);
}
};
// Load the execution process
let execution_process = match ExecutionProcess::find_by_id(&app_state.db_pool, process_id).await
{
Ok(Some(process)) => process,
Ok(None) => {
tracing::warn!("ExecutionProcess {} not found", process_id);
return Err(StatusCode::NOT_FOUND);
}
Err(e) => {
tracing::error!("Failed to fetch execution process {}: {}", process_id, e);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
// Insert all models as extensions
let mut request = request;
request.extensions_mut().insert(context.project);
request.extensions_mut().insert(context.task);
request.extensions_mut().insert(context.task_attempt);
request.extensions_mut().insert(execution_process);
// Continue with the next middleware/handler
Ok(next.run(request).await)
}
/// Middleware that loads and injects TaskTemplate based on the template_id path parameter
pub async fn load_task_template_middleware(
State(app_state): State<AppState>,
Path(template_id): Path<Uuid>,
request: axum::extract::Request,
next: Next,
) -> Result<Response, StatusCode> {
// Load the task template from the database
let task_template = match TaskTemplate::find_by_id(&app_state.db_pool, template_id).await {
Ok(Some(template)) => template,
Ok(None) => {
tracing::warn!("TaskTemplate {} not found", template_id);
return Err(StatusCode::NOT_FOUND);
}
Err(e) => {
tracing::error!("Failed to fetch task template {}: {}", template_id, e);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
// Insert the task template as an extension
let mut request = request;
request.extensions_mut().insert(task_template);
// Continue with the next middleware/handler
Ok(next.run(request).await)
}

View File

@@ -1,35 +0,0 @@
mod response {
use serde::Serialize;
use ts_rs::TS;
#[derive(Debug, Serialize, TS)]
#[ts(export)]
pub struct ApiResponse<T> {
success: bool,
data: Option<T>,
message: Option<String>,
}
impl<T> ApiResponse<T> {
/// Creates a successful response, with `data` and no message.
pub fn success(data: T) -> Self {
ApiResponse {
success: true,
data: Some(data),
message: None,
}
}
/// Creates an error response, with `message` and no data.
pub fn error(message: &str) -> Self {
ApiResponse {
success: false,
data: None,
message: Some(message.to_string()),
}
}
}
}
// Re-export the type, but its fields remain private
pub use response::ApiResponse;
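// For reference, the derived Serialize impl produces {"success":true,"data":<T>,"message":null}
// for ApiResponse::success(data) and {"success":false,"data":null,"message":"..."} for errors.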

View File

@@ -1,433 +0,0 @@
use std::{path::PathBuf, str::FromStr};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::executor::ExecutorConfig;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "lowercase")]
pub enum Environment {
Local,
Cloud,
}
impl FromStr for Environment {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
match s.to_lowercase().as_str() {
"local" => Ok(Environment::Local),
"cloud" => Ok(Environment::Cloud),
_ => Err(format!("Invalid environment: {}", s)),
}
}
}
impl Environment {
pub fn is_cloud(&self) -> bool {
matches!(self, Environment::Cloud)
}
pub fn is_local(&self) -> bool {
matches!(self, Environment::Local)
}
}
impl std::fmt::Display for Environment {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Environment::Local => write!(f, "local"),
Environment::Cloud => write!(f, "cloud"),
}
}
}
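// Example: the ENVIRONMENT variable is parsed case-insensitively ("local"/"cloud") via FromStr;
// main.rs falls back to Environment::Local when the variable is missing or invalid.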
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct EnvironmentInfo {
pub os_type: String,
pub os_version: String,
pub architecture: String,
pub bitness: String,
}
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct Config {
pub theme: ThemeMode,
pub executor: ExecutorConfig,
pub disclaimer_acknowledged: bool,
pub onboarding_acknowledged: bool,
pub github_login_acknowledged: bool,
pub telemetry_acknowledged: bool,
pub sound_alerts: bool,
pub sound_file: SoundFile,
pub push_notifications: bool,
pub editor: EditorConfig,
pub github: GitHubConfig,
pub analytics_enabled: Option<bool>,
pub environment: EnvironmentInfo,
pub workspace_dir: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "lowercase")]
pub enum ThemeMode {
Light,
Dark,
System,
Purple,
Green,
Blue,
Orange,
Red,
}
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct EditorConfig {
pub editor_type: EditorType,
pub custom_command: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct GitHubConfig {
pub pat: Option<String>,
pub token: Option<String>,
pub username: Option<String>,
pub primary_email: Option<String>,
pub default_pr_base: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "lowercase")]
pub enum EditorType {
VSCode,
Cursor,
Windsurf,
IntelliJ,
Zed,
Custom,
}
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
#[serde(rename_all = "kebab-case")]
pub enum SoundFile {
AbstractSound1,
AbstractSound2,
AbstractSound3,
AbstractSound4,
CowMooing,
PhoneVibration,
Rooster,
}
// Constants for frontend
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct EditorConstants {
pub editor_types: Vec<EditorType>,
pub editor_labels: Vec<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct SoundConstants {
pub sound_files: Vec<SoundFile>,
pub sound_labels: Vec<String>,
}
impl EditorConstants {
pub fn new() -> Self {
Self {
editor_types: vec![
EditorType::VSCode,
EditorType::Cursor,
EditorType::Windsurf,
EditorType::IntelliJ,
EditorType::Zed,
EditorType::Custom,
],
editor_labels: vec![
"VS Code".to_string(),
"Cursor".to_string(),
"Windsurf".to_string(),
"IntelliJ IDEA".to_string(),
"Zed".to_string(),
"Custom".to_string(),
],
}
}
}
impl Default for EditorConstants {
fn default() -> Self {
Self::new()
}
}
impl SoundConstants {
pub fn new() -> Self {
Self {
sound_files: vec![
SoundFile::AbstractSound1,
SoundFile::AbstractSound2,
SoundFile::AbstractSound3,
SoundFile::AbstractSound4,
SoundFile::CowMooing,
SoundFile::PhoneVibration,
SoundFile::Rooster,
],
sound_labels: vec![
"Gentle Chime".to_string(),
"Soft Bell".to_string(),
"Digital Tone".to_string(),
"Subtle Alert".to_string(),
"Cow Mooing".to_string(),
"Phone Vibration".to_string(),
"Rooster Call".to_string(),
],
}
}
}
impl Default for SoundConstants {
fn default() -> Self {
Self::new()
}
}
impl Default for Config {
fn default() -> Self {
let info = os_info::get();
Self {
theme: ThemeMode::System,
executor: ExecutorConfig::Claude,
disclaimer_acknowledged: false,
onboarding_acknowledged: false,
github_login_acknowledged: false,
telemetry_acknowledged: false,
sound_alerts: true,
sound_file: SoundFile::AbstractSound4,
push_notifications: true,
editor: EditorConfig::default(),
github: GitHubConfig::default(),
analytics_enabled: None,
environment: EnvironmentInfo {
os_type: info.os_type().to_string(),
os_version: info.version().to_string(),
architecture: info.architecture().unwrap_or("unknown").to_string(),
bitness: info.bitness().to_string(),
},
workspace_dir: None,
}
}
}
impl Default for EditorConfig {
fn default() -> Self {
Self {
editor_type: EditorType::VSCode,
custom_command: None,
}
}
}
impl Default for GitHubConfig {
fn default() -> Self {
Self {
pat: None,
token: None,
username: None,
primary_email: None,
default_pr_base: Some("main".to_string()),
}
}
}
impl EditorConfig {
pub fn get_command(&self) -> Vec<String> {
match &self.editor_type {
EditorType::VSCode => vec!["code".to_string()],
EditorType::Cursor => vec!["cursor".to_string()],
EditorType::Windsurf => vec!["windsurf".to_string()],
EditorType::IntelliJ => vec!["idea".to_string()],
EditorType::Zed => vec!["zed".to_string()],
EditorType::Custom => {
if let Some(custom) = &self.custom_command {
custom.split_whitespace().map(|s| s.to_string()).collect()
} else {
vec!["code".to_string()] // fallback to VSCode
}
}
}
}
}
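// Example: EditorType::Cursor yields ["cursor"]; Custom splits custom_command on whitespace
// (e.g. a hypothetical "code --wait" becomes ["code", "--wait"]) and falls back to ["code"] if unset.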
impl SoundFile {
pub fn to_filename(&self) -> &'static str {
match self {
SoundFile::AbstractSound1 => "abstract-sound1.wav",
SoundFile::AbstractSound2 => "abstract-sound2.wav",
SoundFile::AbstractSound3 => "abstract-sound3.wav",
SoundFile::AbstractSound4 => "abstract-sound4.wav",
SoundFile::CowMooing => "cow-mooing.wav",
SoundFile::PhoneVibration => "phone-vibration.wav",
SoundFile::Rooster => "rooster.wav",
}
}
/// Get or create a cached sound file with the embedded sound data
pub async fn get_path(&self) -> Result<PathBuf, Box<dyn std::error::Error + Send + Sync>> {
use std::io::Write;
let filename = self.to_filename();
let cache_dir = crate::utils::cache_dir();
let cached_path = cache_dir.join(format!("sound-{}", filename));
// Check if cached file already exists and is valid
if cached_path.exists() {
// Verify file has content (basic validation)
if let Ok(metadata) = std::fs::metadata(&cached_path) {
if metadata.len() > 0 {
return Ok(cached_path);
}
}
}
// File doesn't exist or is invalid, create it
let sound_data = crate::SoundAssets::get(filename)
.ok_or_else(|| format!("Embedded sound file not found: {}", filename))?
.data;
// Ensure cache directory exists
std::fs::create_dir_all(&cache_dir)
.map_err(|e| format!("Failed to create cache directory: {}", e))?;
let mut file = std::fs::File::create(&cached_path)
.map_err(|e| format!("Failed to create cached sound file: {}", e))?;
file.write_all(&sound_data)
.map_err(|e| format!("Failed to write sound data to cached file: {}", e))?;
drop(file); // Ensure file is closed
Ok(cached_path)
}
}
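// get_path writes the embedded WAV to cache_dir() as "sound-<filename>" on first use and reuses
// the cached copy on later calls as long as the file exists and is non-empty.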
impl Config {
pub fn load(config_path: &PathBuf) -> anyhow::Result<Self> {
if config_path.exists() {
let content = std::fs::read_to_string(config_path)?;
// Try to deserialize as is first
match serde_json::from_str::<Config>(&content) {
Ok(mut config) => {
if config.analytics_enabled.is_none() {
config.analytics_enabled = Some(true);
}
// Always save back to ensure new fields are written to disk
config.save(config_path)?;
Ok(config)
}
Err(_) => {
// If full deserialization fails, try to merge with defaults
match Self::load_with_defaults(&content, config_path) {
Ok(config) => Ok(config),
Err(_) => {
// Even partial loading failed - backup the corrupted file
if let Err(e) = Self::backup_corrupted_config(config_path) {
tracing::error!("Failed to backup corrupted config: {}", e);
}
// Remove corrupted file and create a default config
if let Err(e) = std::fs::remove_file(config_path) {
tracing::error!("Failed to remove corrupted config file: {}", e);
}
// Create and save default config
let config = Config::default();
config.save(config_path)?;
Ok(config)
}
}
}
}
} else {
let config = Config::default();
config.save(config_path)?;
Ok(config)
}
}
fn load_with_defaults(content: &str, config_path: &PathBuf) -> anyhow::Result<Self> {
// Parse as generic JSON value
let existing_value: serde_json::Value = serde_json::from_str(content)?;
// Get default config as JSON value
let default_config = Config::default();
let default_value = serde_json::to_value(&default_config)?;
// Merge existing config with defaults
let merged_value = Self::merge_json_values(default_value, existing_value);
// Deserialize merged value back to Config
let config: Config = serde_json::from_value(merged_value)?;
// Save the updated config with any missing defaults
config.save(config_path)?;
Ok(config)
}
fn merge_json_values(
mut base: serde_json::Value,
overlay: serde_json::Value,
) -> serde_json::Value {
match (&mut base, overlay) {
(serde_json::Value::Object(base_map), serde_json::Value::Object(overlay_map)) => {
for (key, value) in overlay_map {
base_map
.entry(key)
.and_modify(|base_value| {
*base_value =
Self::merge_json_values(base_value.clone(), value.clone());
})
.or_insert(value);
}
base
}
(_, overlay) => overlay, // Use overlay value for non-objects
}
}
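// Merge semantics: objects are merged key-by-key recursively, values present in the on-disk
// config (the overlay) override the defaults, and non-object values are taken from the overlay as-is.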
/// Create a backup of the corrupted config file
fn backup_corrupted_config(config_path: &PathBuf) -> anyhow::Result<()> {
let timestamp = chrono::Utc::now().format("%Y%m%d_%H%M%S");
let backup_filename = format!("config_backup_{}.json", timestamp);
let backup_path = config_path
.parent()
.unwrap_or_else(|| std::path::Path::new("."))
.join(backup_filename);
std::fs::copy(config_path, &backup_path)?;
tracing::info!("Corrupted config backed up to: {}", backup_path.display());
Ok(())
}
pub fn save(&self, config_path: &PathBuf) -> anyhow::Result<()> {
let content = serde_json::to_string_pretty(self)?;
std::fs::write(config_path, content)?;
Ok(())
}
}

View File

@@ -1,362 +0,0 @@
use chrono::{DateTime, Utc};
use git2::{BranchType, Repository};
use serde::{Deserialize, Serialize};
use sqlx::{FromRow, SqlitePool};
use ts_rs::TS;
use uuid::Uuid;
#[derive(Debug, Clone, FromRow, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct Project {
pub id: Uuid,
pub name: String,
pub git_repo_path: String,
pub setup_script: Option<String>,
pub dev_script: Option<String>,
pub cleanup_script: Option<String>,
#[ts(type = "Date")]
pub created_at: DateTime<Utc>,
#[ts(type = "Date")]
pub updated_at: DateTime<Utc>,
}
#[derive(Debug, Deserialize, TS)]
#[ts(export)]
pub struct CreateProject {
pub name: String,
pub git_repo_path: String,
pub use_existing_repo: bool,
pub setup_script: Option<String>,
pub dev_script: Option<String>,
pub cleanup_script: Option<String>,
}
#[derive(Debug, Deserialize, TS)]
#[ts(export)]
pub struct UpdateProject {
pub name: Option<String>,
pub git_repo_path: Option<String>,
pub setup_script: Option<String>,
pub dev_script: Option<String>,
pub cleanup_script: Option<String>,
}
#[derive(Debug, Deserialize, TS)]
#[ts(export)]
pub struct CreateProjectFromGitHub {
pub repository_id: i64,
pub name: String,
pub clone_url: String,
pub setup_script: Option<String>,
pub dev_script: Option<String>,
pub cleanup_script: Option<String>,
}
#[derive(Debug, Serialize, TS)]
#[ts(export)]
pub struct ProjectWithBranch {
pub id: Uuid,
pub name: String,
pub git_repo_path: String,
pub setup_script: Option<String>,
pub dev_script: Option<String>,
pub cleanup_script: Option<String>,
pub current_branch: Option<String>,
#[ts(type = "Date")]
pub created_at: DateTime<Utc>,
#[ts(type = "Date")]
pub updated_at: DateTime<Utc>,
}
#[derive(Debug, Serialize, TS)]
#[ts(export)]
pub struct SearchResult {
pub path: String,
pub is_file: bool,
pub match_type: SearchMatchType,
}
#[derive(Debug, Serialize, TS)]
#[ts(export)]
pub enum SearchMatchType {
FileName,
DirectoryName,
FullPath,
}
#[derive(Debug, Serialize, TS)]
#[ts(export)]
pub struct GitBranch {
pub name: String,
pub is_current: bool,
pub is_remote: bool,
#[ts(type = "Date")]
pub last_commit_date: DateTime<Utc>,
}
#[derive(Debug, Deserialize, TS)]
#[ts(export)]
pub struct CreateBranch {
pub name: String,
pub base_branch: Option<String>,
}
impl Project {
pub async fn find_all(pool: &SqlitePool) -> Result<Vec<Self>, sqlx::Error> {
sqlx::query_as!(
Project,
r#"SELECT id as "id!: Uuid", name, git_repo_path, setup_script, dev_script, cleanup_script, created_at as "created_at!: DateTime<Utc>", updated_at as "updated_at!: DateTime<Utc>" FROM projects ORDER BY created_at DESC"#
)
.fetch_all(pool)
.await
}
pub async fn find_by_id(pool: &SqlitePool, id: Uuid) -> Result<Option<Self>, sqlx::Error> {
sqlx::query_as!(
Project,
r#"SELECT id as "id!: Uuid", name, git_repo_path, setup_script, dev_script, cleanup_script, created_at as "created_at!: DateTime<Utc>", updated_at as "updated_at!: DateTime<Utc>" FROM projects WHERE id = $1"#,
id
)
.fetch_optional(pool)
.await
}
pub async fn find_by_git_repo_path(
pool: &SqlitePool,
git_repo_path: &str,
) -> Result<Option<Self>, sqlx::Error> {
sqlx::query_as!(
Project,
r#"SELECT id as "id!: Uuid", name, git_repo_path, setup_script, dev_script, cleanup_script, created_at as "created_at!: DateTime<Utc>", updated_at as "updated_at!: DateTime<Utc>" FROM projects WHERE git_repo_path = $1"#,
git_repo_path
)
.fetch_optional(pool)
.await
}
pub async fn find_by_git_repo_path_excluding_id(
pool: &SqlitePool,
git_repo_path: &str,
exclude_id: Uuid,
) -> Result<Option<Self>, sqlx::Error> {
sqlx::query_as!(
Project,
r#"SELECT id as "id!: Uuid", name, git_repo_path, setup_script, dev_script, cleanup_script, created_at as "created_at!: DateTime<Utc>", updated_at as "updated_at!: DateTime<Utc>" FROM projects WHERE git_repo_path = $1 AND id != $2"#,
git_repo_path,
exclude_id
)
.fetch_optional(pool)
.await
}
pub async fn create(
pool: &SqlitePool,
data: &CreateProject,
project_id: Uuid,
) -> Result<Self, sqlx::Error> {
sqlx::query_as!(
Project,
r#"INSERT INTO projects (id, name, git_repo_path, setup_script, dev_script, cleanup_script) VALUES ($1, $2, $3, $4, $5, $6) RETURNING id as "id!: Uuid", name, git_repo_path, setup_script, dev_script, cleanup_script, created_at as "created_at!: DateTime<Utc>", updated_at as "updated_at!: DateTime<Utc>""#,
project_id,
data.name,
data.git_repo_path,
data.setup_script,
data.dev_script,
data.cleanup_script
)
.fetch_one(pool)
.await
}
pub async fn update(
pool: &SqlitePool,
id: Uuid,
name: String,
git_repo_path: String,
setup_script: Option<String>,
dev_script: Option<String>,
cleanup_script: Option<String>,
) -> Result<Self, sqlx::Error> {
sqlx::query_as!(
Project,
r#"UPDATE projects SET name = $2, git_repo_path = $3, setup_script = $4, dev_script = $5, cleanup_script = $6 WHERE id = $1 RETURNING id as "id!: Uuid", name, git_repo_path, setup_script, dev_script, cleanup_script, created_at as "created_at!: DateTime<Utc>", updated_at as "updated_at!: DateTime<Utc>""#,
id,
name,
git_repo_path,
setup_script,
dev_script,
cleanup_script
)
.fetch_one(pool)
.await
}
pub async fn delete(pool: &SqlitePool, id: Uuid) -> Result<u64, sqlx::Error> {
let result = sqlx::query!("DELETE FROM projects WHERE id = $1", id)
.execute(pool)
.await?;
Ok(result.rows_affected())
}
pub async fn exists(pool: &SqlitePool, id: Uuid) -> Result<bool, sqlx::Error> {
let result = sqlx::query!(
r#"
SELECT COUNT(*) as "count!: i64"
FROM projects
WHERE id = $1
"#,
id
)
.fetch_one(pool)
.await?;
Ok(result.count > 0)
}
pub fn get_current_branch(&self) -> Result<String, git2::Error> {
let repo = Repository::open(&self.git_repo_path)?;
let head = repo.head()?;
if let Some(branch_name) = head.shorthand() {
Ok(branch_name.to_string())
} else {
Ok("HEAD".to_string())
}
}
pub fn with_branch_info(self) -> ProjectWithBranch {
let current_branch = self.get_current_branch().ok();
ProjectWithBranch {
id: self.id,
name: self.name,
git_repo_path: self.git_repo_path,
setup_script: self.setup_script,
dev_script: self.dev_script,
cleanup_script: self.cleanup_script,
current_branch,
created_at: self.created_at,
updated_at: self.updated_at,
}
}
pub fn get_all_branches(&self) -> Result<Vec<GitBranch>, git2::Error> {
let repo = Repository::open(&self.git_repo_path)?;
let current_branch = self.get_current_branch().unwrap_or_default();
let mut branches = Vec::new();
// Helper function to get last commit date for a branch
let get_last_commit_date = |branch: &git2::Branch| -> Result<DateTime<Utc>, git2::Error> {
if let Some(target) = branch.get().target() {
if let Ok(commit) = repo.find_commit(target) {
let timestamp = commit.time().seconds();
return Ok(DateTime::from_timestamp(timestamp, 0).unwrap_or_else(Utc::now));
}
}
Ok(Utc::now()) // Default to now if we can't get the commit date
};
// Get local branches
let local_branches = repo.branches(Some(BranchType::Local))?;
for branch_result in local_branches {
let (branch, _) = branch_result?;
if let Some(name) = branch.name()? {
let last_commit_date = get_last_commit_date(&branch)?;
branches.push(GitBranch {
name: name.to_string(),
is_current: name == current_branch,
is_remote: false,
last_commit_date,
});
}
}
// Get remote branches
let remote_branches = repo.branches(Some(BranchType::Remote))?;
for branch_result in remote_branches {
let (branch, _) = branch_result?;
if let Some(name) = branch.name()? {
// Skip remote HEAD references
if !name.ends_with("/HEAD") {
let last_commit_date = get_last_commit_date(&branch)?;
branches.push(GitBranch {
name: name.to_string(),
is_current: false,
is_remote: true,
last_commit_date,
});
}
}
}
// Sort branches: current first, then by most recent commit date
branches.sort_by(|a, b| {
if a.is_current && !b.is_current {
std::cmp::Ordering::Less
} else if !a.is_current && b.is_current {
std::cmp::Ordering::Greater
} else {
// Sort by most recent commit date (newest first)
b.last_commit_date.cmp(&a.last_commit_date)
}
});
Ok(branches)
}
pub fn create_branch(
&self,
branch_name: &str,
base_branch: Option<&str>,
) -> Result<GitBranch, git2::Error> {
let repo = Repository::open(&self.git_repo_path)?;
// Get the base branch reference - default to current branch if not specified
let base_branch_name = match base_branch {
Some(name) => name.to_string(),
None => self
.get_current_branch()
.unwrap_or_else(|_| "HEAD".to_string()),
};
// Find the base commit
let base_commit = if base_branch_name == "HEAD" {
repo.head()?.peel_to_commit()?
} else {
// Try to find the branch as local first, then remote
let base_ref = if let Ok(local_ref) =
repo.find_reference(&format!("refs/heads/{}", base_branch_name))
{
local_ref
} else if let Ok(remote_ref) =
repo.find_reference(&format!("refs/remotes/{}", base_branch_name))
{
remote_ref
} else {
return Err(git2::Error::from_str(&format!(
"Base branch '{}' not found",
base_branch_name
)));
};
base_ref.peel_to_commit()?
};
// Create the new branch
let _new_branch = repo.branch(branch_name, &base_commit, false)?;
// Get the commit date for the new branch (same as base commit)
let last_commit_date = {
let timestamp = base_commit.time().seconds();
DateTime::from_timestamp(timestamp, 0).unwrap_or_else(Utc::now)
};
Ok(GitBranch {
name: branch_name.to_string(),
is_current: false,
is_remote: false,
last_commit_date,
})
}
}
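The sort rule in get_all_branches (current branch first, then newest commit first) is easy to get backwards, so here is an illustrative test of the same comparator. The branch names and timestamps are made up and the test itself is not part of this diff.

#[cfg(test)]
mod branch_ordering_sketch {
    use chrono::{TimeZone, Utc};

    use super::*;

    #[test]
    fn current_branch_first_then_newest_commit() {
        let mut branches = vec![
            GitBranch { name: "old-feature".to_string(), is_current: false, is_remote: false,
                last_commit_date: Utc.timestamp_opt(1_600_000_000, 0).unwrap() },
            GitBranch { name: "main".to_string(), is_current: true, is_remote: false,
                last_commit_date: Utc.timestamp_opt(1_500_000_000, 0).unwrap() },
            GitBranch { name: "new-feature".to_string(), is_current: false, is_remote: false,
                last_commit_date: Utc.timestamp_opt(1_700_000_000, 0).unwrap() },
        ];
        // Same comparator as get_all_branches above.
        branches.sort_by(|a, b| {
            if a.is_current && !b.is_current {
                std::cmp::Ordering::Less
            } else if !a.is_current && b.is_current {
                std::cmp::Ordering::Greater
            } else {
                b.last_commit_date.cmp(&a.last_commit_date)
            }
        });
        let names: Vec<_> = branches.iter().map(|b| b.name.as_str()).collect();
        assert_eq!(names, ["main", "new-feature", "old-feature"]);
    }
}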

File diff suppressed because it is too large


@@ -1,262 +0,0 @@
use axum::{
extract::{Request, State},
middleware::Next,
response::{Json as ResponseJson, Response},
routing::{get, post},
Json, Router,
};
use ts_rs::TS;
use crate::{app_state::AppState, models::ApiResponse};
pub fn auth_router() -> Router<AppState> {
Router::new()
.route("/auth/github/device/start", post(device_start))
.route("/auth/github/device/poll", post(device_poll))
.route("/auth/github/check", get(github_check_token))
}
#[derive(serde::Deserialize)]
struct DeviceStartRequest {}
#[derive(serde::Serialize, TS)]
#[ts(export)]
pub struct DeviceStartResponse {
pub device_code: String,
pub user_code: String,
pub verification_uri: String,
pub expires_in: u32,
pub interval: u32,
}
#[derive(serde::Deserialize)]
struct DevicePollRequest {
device_code: String,
}
/// POST /auth/github/device/start
async fn device_start() -> ResponseJson<ApiResponse<DeviceStartResponse>> {
let client_id = option_env!("GITHUB_CLIENT_ID").unwrap_or("Ov23li9bxz3kKfPOIsGm");
let params = [("client_id", client_id), ("scope", "user:email,repo")];
let client = reqwest::Client::new();
let res = client
.post("https://github.com/login/device/code")
.header("Accept", "application/json")
.form(&params)
.send()
.await;
let res = match res {
Ok(r) => r,
Err(e) => {
return ResponseJson(ApiResponse::error(&format!(
"Failed to contact GitHub: {e}"
)));
}
};
let json: serde_json::Value = match res.json().await {
Ok(j) => j,
Err(e) => {
return ResponseJson(ApiResponse::error(&format!(
"Failed to parse GitHub response: {e}"
)));
}
};
if let (
Some(device_code),
Some(user_code),
Some(verification_uri),
Some(expires_in),
Some(interval),
) = (
json.get("device_code").and_then(|v| v.as_str()),
json.get("user_code").and_then(|v| v.as_str()),
json.get("verification_uri").and_then(|v| v.as_str()),
json.get("expires_in").and_then(|v| v.as_u64()),
json.get("interval").and_then(|v| v.as_u64()),
) {
ResponseJson(ApiResponse::success(DeviceStartResponse {
device_code: device_code.to_string(),
user_code: user_code.to_string(),
verification_uri: verification_uri.to_string(),
expires_in: expires_in.try_into().unwrap_or(600),
interval: interval.try_into().unwrap_or(5),
}))
} else {
ResponseJson(ApiResponse::error(&format!("GitHub error: {}", json)))
}
}
/// POST /auth/github/device/poll
async fn device_poll(
State(app_state): State<AppState>,
Json(payload): Json<DevicePollRequest>,
) -> ResponseJson<ApiResponse<String>> {
let client_id = option_env!("GITHUB_CLIENT_ID").unwrap_or("Ov23li9bxz3kKfPOIsGm");
let params = [
("client_id", client_id),
("device_code", payload.device_code.as_str()),
("grant_type", "urn:ietf:params:oauth:grant-type:device_code"),
];
let client = reqwest::Client::new();
let res = client
.post("https://github.com/login/oauth/access_token")
.header("Accept", "application/json")
.form(&params)
.send()
.await;
let res = match res {
Ok(r) => r,
Err(e) => {
return ResponseJson(ApiResponse::error(&format!(
"Failed to contact GitHub: {e}"
)));
}
};
let json: serde_json::Value = match res.json().await {
Ok(j) => j,
Err(e) => {
return ResponseJson(ApiResponse::error(&format!(
"Failed to parse GitHub response: {e}"
)));
}
};
if let Some(error) = json.get("error").and_then(|v| v.as_str()) {
// Not authorized yet, or other error
return ResponseJson(ApiResponse::error(error));
}
let access_token = json.get("access_token").and_then(|v| v.as_str());
if let Some(access_token) = access_token {
// Fetch user info
let user_res = client
.get("https://api.github.com/user")
.bearer_auth(access_token)
.header("User-Agent", "vibe-kanban-app")
.send()
.await;
let user_json: serde_json::Value = match user_res {
Ok(res) => match res.json().await {
Ok(json) => json,
Err(e) => {
return ResponseJson(ApiResponse::error(&format!(
"Failed to parse GitHub user response: {e}"
)));
}
},
Err(e) => {
return ResponseJson(ApiResponse::error(&format!(
"Failed to fetch user info: {e}"
)));
}
};
let username = user_json
.get("login")
.and_then(|v| v.as_str())
.map(|s| s.to_string());
// Fetch user emails
let emails_res = client
.get("https://api.github.com/user/emails")
.bearer_auth(access_token)
.header("User-Agent", "vibe-kanban-app")
.send()
.await;
let emails_json: serde_json::Value = match emails_res {
Ok(res) => match res.json().await {
Ok(json) => json,
Err(e) => {
return ResponseJson(ApiResponse::error(&format!(
"Failed to parse GitHub emails response: {e}"
)));
}
},
Err(e) => {
return ResponseJson(ApiResponse::error(&format!(
"Failed to fetch user emails: {e}"
)));
}
};
let primary_email = emails_json
.as_array()
.and_then(|arr| {
arr.iter()
.find(|email| {
email
.get("primary")
.and_then(|v| v.as_bool())
.unwrap_or(false)
})
.and_then(|email| email.get("email").and_then(|v| v.as_str()))
})
.map(|s| s.to_string());
// Save to config
{
let mut config = app_state.get_config().write().await;
config.github.username = username.clone();
config.github.primary_email = primary_email.clone();
config.github.token = Some(access_token.to_string());
config.github_login_acknowledged = true; // Also acknowledge the GitHub login step
let config_path = crate::utils::config_path();
if config.save(&config_path).is_err() {
return ResponseJson(ApiResponse::error("Failed to save config"));
}
}
app_state.update_sentry_scope().await;
// Identify user in PostHog
let mut props = serde_json::Map::new();
if let Some(ref username) = username {
props.insert(
"username".to_string(),
serde_json::Value::String(username.clone()),
);
}
if let Some(ref email) = primary_email {
props.insert(
"email".to_string(),
serde_json::Value::String(email.clone()),
);
}
{
let props = serde_json::Value::Object(props);
app_state
.track_analytics_event("$identify", Some(props))
.await;
}
ResponseJson(ApiResponse::success("GitHub login successful".to_string()))
} else {
ResponseJson(ApiResponse::error("No access token yet"))
}
}
/// GET /auth/github/check
async fn github_check_token(State(app_state): State<AppState>) -> ResponseJson<ApiResponse<()>> {
let config = app_state.get_config().read().await;
let token = config.github.token.clone();
drop(config);
if let Some(token) = token {
let client = reqwest::Client::new();
let res = client
.get("https://api.github.com/user")
.bearer_auth(&token)
.header("User-Agent", "vibe-kanban-app")
.send()
.await;
match res {
Ok(r) if r.status().is_success() => ResponseJson(ApiResponse::success(())),
_ => ResponseJson(ApiResponse::error("github_token_invalid")),
}
} else {
ResponseJson(ApiResponse::error("github_token_invalid"))
}
}
/// Middleware to set Sentry user context for every request
pub async fn sentry_user_context_middleware(
State(app_state): State<AppState>,
req: Request,
next: Next,
) -> Response {
app_state.update_sentry_scope().await;
next.run(req).await
}
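For reference, a rough client-side sketch of the device flow these routes implement: start the flow, show the user the verification code, then poll until the handler stores the token server-side. The base URL, the use of reqwest, and the assumed `{ success, data }` shape of the ApiResponse JSON are assumptions, not part of this diff.

async fn github_device_login(base: &str) -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    // POST /auth/github/device/start -> device_code, user_code, verification_uri, interval
    let start: serde_json::Value = client
        .post(format!("{base}/auth/github/device/start"))
        .send()
        .await?
        .json()
        .await?;
    let data = &start["data"];
    println!("Open {} and enter code {}", data["verification_uri"], data["user_code"]);
    let interval = data["interval"].as_u64().unwrap_or(5);
    loop {
        tokio::time::sleep(std::time::Duration::from_secs(interval)).await;
        // POST /auth/github/device/poll until GitHub returns an access token.
        let poll: serde_json::Value = client
            .post(format!("{base}/auth/github/device/poll"))
            .json(&serde_json::json!({ "device_code": data["device_code"] }))
            .send()
            .await?
            .json()
            .await?;
        if poll["success"].as_bool().unwrap_or(false) {
            // The handler saves the token to the config, so there is nothing to store here.
            return Ok(());
        }
        // "authorization_pending" and similar errors simply mean: keep polling.
    }
}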


@@ -1,335 +0,0 @@
use std::collections::HashMap;
use axum::{
extract::{Query, State},
response::Json as ResponseJson,
routing::{get, post},
Json, Router,
};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use tokio::fs;
use ts_rs::TS;
use crate::{
app_state::AppState,
executor::ExecutorConfig,
models::{
config::{Config, EditorConstants, SoundConstants},
ApiResponse, Environment,
},
utils,
};
pub fn config_router() -> Router<AppState> {
Router::new()
.route("/config", get(get_config))
.route("/config", post(update_config))
.route("/config/constants", get(get_config_constants))
.route("/mcp-servers", get(get_mcp_servers))
.route("/mcp-servers", post(update_mcp_servers))
}
async fn get_config(State(app_state): State<AppState>) -> ResponseJson<ApiResponse<Config>> {
let mut config = app_state.get_config().read().await.clone();
// Update environment info dynamically
let info = os_info::get();
config.environment.os_type = info.os_type().to_string();
config.environment.os_version = info.version().to_string();
config.environment.architecture = info.architecture().unwrap_or("unknown").to_string();
config.environment.bitness = info.bitness().to_string();
ResponseJson(ApiResponse::success(config))
}
async fn update_config(
State(app_state): State<AppState>,
Json(new_config): Json<Config>,
) -> ResponseJson<ApiResponse<Config>> {
let config_path = utils::config_path();
match new_config.save(&config_path) {
Ok(_) => {
let mut config = app_state.get_config().write().await;
*config = new_config.clone();
drop(config);
app_state
.update_analytics_config(new_config.analytics_enabled.unwrap_or(true))
.await;
ResponseJson(ApiResponse::success(new_config))
}
Err(e) => ResponseJson(ApiResponse::error(&format!("Failed to save config: {}", e))),
}
}
#[derive(Debug, Serialize, Deserialize, TS)]
#[ts(export)]
pub struct ConfigConstants {
pub editor: EditorConstants,
pub sound: SoundConstants,
pub mode: Environment,
}
async fn get_config_constants(
State(app_state): State<AppState>,
) -> ResponseJson<ApiResponse<ConfigConstants>> {
let constants = ConfigConstants {
editor: EditorConstants::new(),
sound: SoundConstants::new(),
mode: app_state.mode,
};
ResponseJson(ApiResponse::success(constants))
}
#[derive(Debug, Deserialize)]
struct McpServerQuery {
executor: Option<String>,
}
/// Common logic for resolving executor configuration and validating MCP support
fn resolve_executor_config(
query_executor: Option<String>,
saved_config: &ExecutorConfig,
) -> Result<ExecutorConfig, String> {
let executor_config = match query_executor {
Some(executor_type) => executor_type
.parse::<ExecutorConfig>()
.map_err(|e| e.to_string())?,
None => saved_config.clone(),
};
if !executor_config.supports_mcp() {
return Err(format!(
"{} executor does not support MCP configuration",
executor_config.display_name()
));
}
Ok(executor_config)
}
async fn get_mcp_servers(
State(app_state): State<AppState>,
Query(query): Query<McpServerQuery>,
) -> ResponseJson<ApiResponse<Value>> {
let saved_config = {
let config = app_state.get_config().read().await;
config.executor.clone()
};
let executor_config = match resolve_executor_config(query.executor, &saved_config) {
Ok(config) => config,
Err(message) => {
return ResponseJson(ApiResponse::error(&message));
}
};
// Get the config file path for this executor
let config_path = match executor_config.config_path() {
Some(path) => path,
None => {
return ResponseJson(ApiResponse::error("Could not determine config file path"));
}
};
match read_mcp_servers_from_config(&config_path, &executor_config).await {
Ok(servers) => {
let response_data = serde_json::json!({
"servers": servers,
"config_path": config_path.to_string_lossy().to_string()
});
ResponseJson(ApiResponse::success(response_data))
}
Err(e) => ResponseJson(ApiResponse::error(&format!(
"Failed to read MCP servers: {}",
e
))),
}
}
async fn update_mcp_servers(
State(app_state): State<AppState>,
Query(query): Query<McpServerQuery>,
Json(new_servers): Json<HashMap<String, Value>>,
) -> ResponseJson<ApiResponse<String>> {
let saved_config = {
let config = app_state.get_config().read().await;
config.executor.clone()
};
let executor_config = match resolve_executor_config(query.executor, &saved_config) {
Ok(config) => config,
Err(message) => {
return ResponseJson(ApiResponse::error(&message));
}
};
// Get the config file path for this executor
let config_path = match executor_config.config_path() {
Some(path) => path,
None => {
return ResponseJson(ApiResponse::error("Could not determine config file path"));
}
};
match update_mcp_servers_in_config(&config_path, &executor_config, new_servers).await {
Ok(message) => ResponseJson(ApiResponse::success(message)),
Err(e) => ResponseJson(ApiResponse::error(&format!(
"Failed to update MCP servers: {}",
e
))),
}
}
async fn update_mcp_servers_in_config(
file_path: &std::path::Path,
executor_config: &ExecutorConfig,
new_servers: HashMap<String, Value>,
) -> Result<String, Box<dyn std::error::Error + Send + Sync>> {
// Ensure parent directory exists
if let Some(parent) = file_path.parent() {
fs::create_dir_all(parent).await?;
}
// Read existing config file or create empty object if it doesn't exist
let file_content = fs::read_to_string(file_path)
.await
.unwrap_or_else(|_| "{}".to_string());
let mut config: Value = serde_json::from_str(&file_content)?;
// Get the attribute path for MCP servers
let mcp_path = executor_config.mcp_attribute_path().unwrap();
// Get the current server count for comparison
let old_servers = get_mcp_servers_from_config_path(&config, &mcp_path).len();
// Set the MCP servers using the correct attribute path
set_mcp_servers_in_config_path(&mut config, &mcp_path, &new_servers)?;
// Write the updated config back to file
let updated_content = serde_json::to_string_pretty(&config)?;
fs::write(file_path, updated_content).await?;
let new_count = new_servers.len();
let message = match (old_servers, new_count) {
(0, 0) => "No MCP servers configured".to_string(),
(0, n) => format!("Added {} MCP server(s)", n),
(old, new) if old == new => format!("Updated MCP server configuration ({} server(s))", new),
(old, new) => format!(
"Updated MCP server configuration (was {}, now {})",
old, new
),
};
Ok(message)
}
async fn read_mcp_servers_from_config(
file_path: &std::path::Path,
executor_config: &ExecutorConfig,
) -> Result<HashMap<String, Value>, Box<dyn std::error::Error + Send + Sync>> {
// Read the config file, return empty if it doesn't exist
let file_content = fs::read_to_string(file_path)
.await
.unwrap_or_else(|_| "{}".to_string());
let config: Value = serde_json::from_str(&file_content)?;
// Get the attribute path for MCP servers
let mcp_path = executor_config.mcp_attribute_path().unwrap();
// Get the servers using the correct attribute path
let servers = get_mcp_servers_from_config_path(&config, &mcp_path);
Ok(servers)
}
/// Helper function to get MCP servers from config using a path
fn get_mcp_servers_from_config_path(config: &Value, path: &[&str]) -> HashMap<String, Value> {
// Special handling for AMP - use flat key structure
if path.len() == 2 && path[0] == "amp" && path[1] == "mcpServers" {
let flat_key = format!("{}.{}", path[0], path[1]);
let current = match config.get(&flat_key) {
Some(val) => val,
None => return HashMap::new(),
};
// Extract the servers object
match current.as_object() {
Some(servers) => servers
.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect(),
None => HashMap::new(),
}
} else {
let mut current = config;
// Navigate to the target location
for &part in path {
current = match current.get(part) {
Some(val) => val,
None => return HashMap::new(),
};
}
// Extract the servers object
match current.as_object() {
Some(servers) => servers
.iter()
.map(|(k, v)| (k.clone(), v.clone()))
.collect(),
None => HashMap::new(),
}
}
}
/// Helper function to set MCP servers in config using a path
fn set_mcp_servers_in_config_path(
config: &mut Value,
path: &[&str],
servers: &HashMap<String, Value>,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Ensure config is an object
if !config.is_object() {
*config = serde_json::json!({});
}
// Special handling for AMP - use flat key structure
if path.len() == 2 && path[0] == "amp" && path[1] == "mcpServers" {
let flat_key = format!("{}.{}", path[0], path[1]);
config
.as_object_mut()
.unwrap()
.insert(flat_key, serde_json::to_value(servers)?);
return Ok(());
}
let mut current = config;
// Navigate/create the nested structure (all parts except the last)
for &part in &path[..path.len() - 1] {
if current.get(part).is_none() {
current
.as_object_mut()
.unwrap()
.insert(part.to_string(), serde_json::json!({}));
}
current = current.get_mut(part).unwrap();
if !current.is_object() {
*current = serde_json::json!({});
}
}
// Set the final attribute
let final_attr = path.last().unwrap();
current
.as_object_mut()
.unwrap()
.insert(final_attr.to_string(), serde_json::to_value(servers)?);
Ok(())
}
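Because the AMP special case is easy to misread, here is a small illustration (not in the diff) of the JSON each path shape produces when these helpers are called directly; the server entry and executor paths shown are examples only.

fn mcp_path_shapes_example() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    use std::collections::HashMap;

    let mut servers = HashMap::new();
    servers.insert("docs".to_string(), serde_json::json!({ "command": "docs-mcp" }));

    // AMP uses a single flat key rather than nested objects:
    let mut amp_config = serde_json::json!({});
    set_mcp_servers_in_config_path(&mut amp_config, &["amp", "mcpServers"], &servers)?;
    // amp_config == { "amp.mcpServers": { "docs": { "command": "docs-mcp" } } }

    // Other executors get a nested object at the attribute path:
    let mut nested_config = serde_json::json!({});
    set_mcp_servers_in_config_path(&mut nested_config, &["mcpServers"], &servers)?;
    // nested_config == { "mcpServers": { "docs": { "command": "docs-mcp" } } }

    // Reading back goes through the same path logic.
    assert_eq!(
        get_mcp_servers_from_config_path(&amp_config, &["amp", "mcpServers"]).len(),
        1
    );
    Ok(())
}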


@@ -1,185 +0,0 @@
use std::{
fs,
path::{Path, PathBuf},
};
use axum::{
extract::Query, http::StatusCode, response::Json as ResponseJson, routing::get, Router,
};
use serde::{Deserialize, Serialize};
use ts_rs::TS;
use crate::{app_state::AppState, models::ApiResponse};
#[derive(Debug, Serialize, TS)]
#[ts(export)]
pub struct DirectoryEntry {
pub name: String,
pub path: String,
pub is_directory: bool,
pub is_git_repo: bool,
}
#[derive(Debug, Serialize, TS)]
#[ts(export)]
pub struct DirectoryListResponse {
pub entries: Vec<DirectoryEntry>,
pub current_path: String,
}
#[derive(Debug, Deserialize)]
pub struct ListDirectoryQuery {
path: Option<String>,
}
pub async fn list_directory(
Query(query): Query<ListDirectoryQuery>,
) -> Result<ResponseJson<ApiResponse<DirectoryListResponse>>, StatusCode> {
let path_str = query.path.unwrap_or_else(|| {
// Default to user's home directory
dirs::home_dir()
.or_else(dirs::desktop_dir)
.or_else(dirs::document_dir)
.unwrap_or_else(|| {
if cfg!(windows) {
std::env::var("USERPROFILE")
.map(PathBuf::from)
.unwrap_or_else(|_| PathBuf::from("C:\\"))
} else {
PathBuf::from("/")
}
})
.to_string_lossy()
.to_string()
});
let path = Path::new(&path_str);
if !path.exists() {
return Ok(ResponseJson(ApiResponse::error("Directory does not exist")));
}
if !path.is_dir() {
return Ok(ResponseJson(ApiResponse::error("Path is not a directory")));
}
match fs::read_dir(path) {
Ok(entries) => {
let mut directory_entries = Vec::new();
for entry in entries.flatten() {
let path = entry.path();
let metadata = entry.metadata().ok();
if let Some(name) = path.file_name().and_then(|n| n.to_str()) {
// Skip hidden files/directories
if name.starts_with('.') && name != ".." {
continue;
}
let is_directory = metadata.is_some_and(|m| m.is_dir());
let is_git_repo = if is_directory {
path.join(".git").exists()
} else {
false
};
directory_entries.push(DirectoryEntry {
name: name.to_string(),
path: path.to_string_lossy().to_string(),
is_directory,
is_git_repo,
});
}
}
// Sort: directories first, then files, both alphabetically
directory_entries.sort_by(|a, b| match (a.is_directory, b.is_directory) {
(true, false) => std::cmp::Ordering::Less,
(false, true) => std::cmp::Ordering::Greater,
_ => a.name.to_lowercase().cmp(&b.name.to_lowercase()),
});
Ok(ResponseJson(ApiResponse::success(DirectoryListResponse {
entries: directory_entries,
current_path: path.to_string_lossy().to_string(),
})))
}
Err(e) => {
tracing::error!("Failed to read directory: {}", e);
Ok(ResponseJson(ApiResponse::error(&format!(
"Failed to read directory: {}",
e
))))
}
}
}
pub async fn validate_git_path(
Query(query): Query<ListDirectoryQuery>,
) -> Result<ResponseJson<ApiResponse<bool>>, StatusCode> {
let path_str = query.path.ok_or(StatusCode::BAD_REQUEST)?;
let path = Path::new(&path_str);
// Check if path exists and is a git repo
let is_valid_git_repo = path.exists() && path.is_dir() && path.join(".git").exists();
Ok(ResponseJson(ApiResponse::success(is_valid_git_repo)))
}
pub async fn create_git_repo(
Query(query): Query<ListDirectoryQuery>,
) -> Result<ResponseJson<ApiResponse<()>>, StatusCode> {
let path_str = query.path.ok_or(StatusCode::BAD_REQUEST)?;
let path = Path::new(&path_str);
// Create directory if it doesn't exist
if !path.exists() {
if let Err(e) = fs::create_dir_all(path) {
tracing::error!("Failed to create directory: {}", e);
return Ok(ResponseJson(ApiResponse::error(&format!(
"Failed to create directory: {}",
e
))));
}
}
// Check if it's already a git repo
if path.join(".git").exists() {
return Ok(ResponseJson(ApiResponse::success(())));
}
// Initialize git repository
match std::process::Command::new("git")
.arg("init")
.current_dir(path)
.output()
{
Ok(output) => {
if output.status.success() {
Ok(ResponseJson(ApiResponse::success(())))
} else {
let error_msg = String::from_utf8_lossy(&output.stderr);
tracing::error!("Git init failed: {}", error_msg);
Ok(ResponseJson(ApiResponse::error(&format!(
"Git init failed: {}",
error_msg
))))
}
}
Err(e) => {
tracing::error!("Failed to run git init: {}", e);
Ok(ResponseJson(ApiResponse::error(&format!(
"Failed to run git init: {}",
e
))))
}
}
}
pub fn filesystem_router() -> Router<AppState> {
Router::new()
.route("/filesystem/list", get(list_directory))
.route("/filesystem/validate-git", get(validate_git_path))
.route("/filesystem/create-git", get(create_git_repo))
}
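As a usage sketch (not part of the diff), a client could combine the validate and create endpoints to make sure a chosen path ends up as a git repository; the base URL, the reqwest usage, and the assumed `data` field in the response JSON are assumptions.

async fn ensure_git_repo(base: &str, path: &str) -> Result<bool, reqwest::Error> {
    let client = reqwest::Client::new();
    // GET /filesystem/validate-git?path=... reports whether the path is already a repo.
    let valid: serde_json::Value = client
        .get(format!("{base}/filesystem/validate-git"))
        .query(&[("path", path)])
        .send()
        .await?
        .json()
        .await?;
    if valid["data"].as_bool() == Some(true) {
        return Ok(true);
    }
    // GET /filesystem/create-git?path=... creates the directory if needed and runs `git init`.
    client
        .get(format!("{base}/filesystem/create-git"))
        .query(&[("path", path)])
        .send()
        .await?;
    Ok(false)
}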


@@ -1,10 +0,0 @@
pub mod auth;
pub mod config;
pub mod filesystem;
pub mod github;
pub mod health;
pub mod projects;
pub mod stream;
pub mod task_attempts;
pub mod task_templates;
pub mod tasks;


@@ -1,244 +0,0 @@
use std::time::Duration;
use axum::{
extract::{Path, Query, State},
response::sse::{Event, Sse},
routing::get,
Router,
};
use futures_util::stream::Stream;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use uuid::Uuid;
use crate::{
app_state::AppState,
executors::gemini::GeminiExecutor,
models::execution_process::{ExecutionProcess, ExecutionProcessStatus},
};
/// Interval for DB tail polling (ms); kept short so fallback updates stay close to real time
const TAIL_INTERVAL_MS: u64 = 100;
/// Structured batch data for SSE streaming
#[derive(Serialize)]
struct BatchData {
batch_id: u64,
patches: Vec<Value>,
}
/// Query parameters for resumable SSE streaming
#[derive(Debug, Deserialize)]
pub struct StreamQuery {
/// Optional cursor to resume streaming from specific batch ID
since_batch_id: Option<u64>,
}
/// SSE handler for incremental normalized-logs JSON-Patch streaming
///
/// GET /api/projects/:project_id/execution-processes/:process_id/normalized-logs/stream?since_batch_id=123
pub async fn normalized_logs_stream(
Path((_project_id, process_id)): Path<(Uuid, Uuid)>,
Query(query): Query<StreamQuery>,
State(app_state): State<AppState>,
) -> Sse<impl Stream<Item = Result<Event, axum::Error>>> {
// Check if this is a Gemini executor (only executor with streaming support)
let is_gemini = match ExecutionProcess::find_by_id(&app_state.db_pool, process_id).await {
Ok(Some(process)) => process.executor_type.as_deref() == Some("gemini"),
_ => {
tracing::warn!(
"Failed to find execution process {} for SSE streaming",
process_id
);
false
}
};
// Use a shorter polling interval for Gemini, currently the only executor with WAL streaming
let poll_interval = if is_gemini { 50 } else { TAIL_INTERVAL_MS };
// Stream that yields patches from WAL (fast-path) or DB tail (fallback)
let stream = async_stream::stream! {
// Track previous stdout length and entry count for database polling fallback
let mut last_len: usize = 0;
let mut last_entry_count: usize = query.since_batch_id.unwrap_or(1) as usize;
let mut interval = tokio::time::interval(Duration::from_millis(poll_interval));
let mut last_seen_batch_id: u64 = query.since_batch_id.unwrap_or(0); // Cursor for WAL streaming
// Monotonic batch ID for fallback polling (resumes just after `since`, which defaults to 1)
let since = query.since_batch_id.unwrap_or(1);
let mut fallback_batch_id: u64 = since + 1;
// Fast catch-up phase for resumable streaming
if let Some(since_batch) = query.since_batch_id {
if !is_gemini {
// Load current process state to get all available entries
if let Ok(Some(proc)) = ExecutionProcess::find_by_id(&app_state.db_pool, process_id).await {
if let Some(stdout) = &proc.stdout {
// Create executor and normalize logs to get all entries
if let Some(executor) = proc.executor_type
.as_deref()
.unwrap_or("unknown")
.parse::<crate::executor::ExecutorConfig>()
.ok()
.map(|cfg| cfg.create_executor())
{
if let Ok(normalized) = executor.normalize_logs(stdout, &proc.working_directory) {
// Send all entries after since_batch_id immediately
let start_entry = since_batch as usize;
let catch_up_entries = normalized.entries.get(start_entry..).unwrap_or(&[]);
for (i, entry) in catch_up_entries.iter().enumerate() {
let batch_data = BatchData {
batch_id: since_batch + 1 + i as u64,
patches: vec![serde_json::json!({
"op": "add",
"path": "/entries/-",
"value": entry
})],
};
yield Ok(Event::default().event("patch").data(serde_json::to_string(&batch_data).unwrap_or_default()));
}
// Update cursors to current state
last_entry_count = normalized.entries.len();
fallback_batch_id = since_batch + 1 + catch_up_entries.len() as u64;
last_len = stdout.len();
}
}
}
}
}
}
loop {
interval.tick().await;
// Check process status first
let process_status = match ExecutionProcess::find_by_id(&app_state.db_pool, process_id).await {
Ok(Some(proc)) => proc.status,
_ => {
tracing::warn!("Execution process {} not found during SSE streaming", process_id);
break;
}
};
if is_gemini {
// Gemini streaming: Read from Gemini WAL using cursor
let cursor = if last_seen_batch_id == 0 { None } else { Some(last_seen_batch_id) };
if let Some(new_batches) = GeminiExecutor::get_wal_batches(process_id, cursor) {
// Send any new batches since last cursor
for batch in &new_batches {
// Send full batch including batch_id for cursor tracking
let batch_data = BatchData {
batch_id: batch.batch_id,
patches: batch.patches.clone(),
};
let json = serde_json::to_string(&batch_data).unwrap_or_default();
yield Ok(Event::default().event("patch").data(json));
// Update cursor to highest batch_id seen
last_seen_batch_id = batch.batch_id.max(last_seen_batch_id);
}
}
} else {
// Fallback: Database polling for non-streaming executors
// 1. Load the process
let proc = match ExecutionProcess::find_by_id(&app_state.db_pool, process_id)
.await
.ok()
.flatten()
{
Some(p) => p,
None => {
tracing::warn!("Execution process {} not found during SSE polling", process_id);
continue;
}
};
// 2. Grab the stdout and check if there's new content
let stdout = match proc.stdout {
Some(ref s) if s.len() > last_len && !s[last_len..].trim().is_empty() => s.clone(),
_ => continue, // no new output
};
// 3. Instantiate the right executor
let executor = match proc.executor_type
.as_deref()
.unwrap_or("unknown")
.parse::<crate::executor::ExecutorConfig>()
.ok()
.map(|cfg| cfg.create_executor())
{
Some(exec) => exec,
None => {
tracing::warn!(
"Unknown executor '{}' for process {}",
proc.executor_type.unwrap_or_default(),
process_id
);
continue;
}
};
// 4. Normalize logs
let normalized = match executor.normalize_logs(&stdout, &proc.working_directory) {
Ok(norm) => norm,
Err(err) => {
tracing::error!(
"Failed to normalize logs for process {}: {}",
process_id,
err
);
continue;
}
};
// 5. Compute a patch for the next unseen entry (one entry per poll tick)
if last_entry_count >= normalized.entries.len() {
continue;
}
let new_entries = [&normalized.entries[last_entry_count]];
let patches: Vec<Value> = new_entries
.iter()
.map(|entry| serde_json::json!({
"op": "add",
"path": "/entries/-",
"value": entry
}))
.collect();
// 6. Emit the batch
let batch_data = BatchData {
batch_id: fallback_batch_id - 1,
patches,
};
let json = serde_json::to_string(&batch_data).unwrap_or_default();
yield Ok(Event::default().event("patch").data(json));
// 7. Update our cursors
fallback_batch_id += 1;
last_entry_count += 1;
last_len = stdout.len();
}
// Stop streaming when process completed
if process_status != ExecutionProcessStatus::Running {
break;
}
}
};
Sse::new(stream).keep_alive(axum::response::sse::KeepAlive::default())
}
/// Router exposing `/normalized-logs/stream`
pub fn stream_router() -> Router<AppState> {
Router::new().route(
"/projects/:project_id/execution-processes/:process_id/normalized-logs/stream",
get(normalized_logs_stream),
)
}
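On the consuming side, each SSE `patch` event carries a batch id plus JSON-Patch operations that append to `/entries`. A minimal client-side sketch (not in the diff; the struct and function names are invented here) keeps the entries and the cursor that would be sent back as `since_batch_id` on reconnect:

#[derive(serde::Deserialize)]
struct IncomingBatch {
    batch_id: u64,
    patches: Vec<serde_json::Value>,
}

fn apply_patch_batch(
    entries: &mut Vec<serde_json::Value>,
    cursor: &mut u64,
    sse_data: &str,
) -> Result<(), serde_json::Error> {
    let batch: IncomingBatch = serde_json::from_str(sse_data)?;
    for patch in batch.patches {
        // The stream above only emits append patches; ignore anything unexpected.
        if patch["op"] == "add" && patch["path"] == "/entries/-" {
            entries.push(patch["value"].clone());
        }
    }
    // Remember the highest batch id; send it as ?since_batch_id=<cursor> when reconnecting.
    *cursor = (*cursor).max(batch.batch_id);
    Ok(())
}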

File diff suppressed because it is too large


@@ -1,147 +0,0 @@
use axum::{
extract::{Path, State},
http::StatusCode,
response::IntoResponse,
Extension, Json,
};
use uuid::Uuid;
use crate::{
app_state::AppState,
models::{
api_response::ApiResponse,
task_template::{CreateTaskTemplate, TaskTemplate, UpdateTaskTemplate},
},
};
pub async fn list_templates(
State(state): State<AppState>,
) -> Result<impl IntoResponse, (StatusCode, Json<ApiResponse<()>>)> {
match TaskTemplate::find_all(&state.db_pool).await {
Ok(templates) => Ok(Json(ApiResponse::success(templates))),
Err(e) => Err((
StatusCode::INTERNAL_SERVER_ERROR,
Json(ApiResponse::error(&format!(
"Failed to fetch templates: {}",
e
))),
)),
}
}
pub async fn list_project_templates(
State(state): State<AppState>,
Path(project_id): Path<Uuid>,
) -> Result<impl IntoResponse, (StatusCode, Json<ApiResponse<()>>)> {
match TaskTemplate::find_by_project_id(&state.db_pool, Some(project_id)).await {
Ok(templates) => Ok(Json(ApiResponse::success(templates))),
Err(e) => Err((
StatusCode::INTERNAL_SERVER_ERROR,
Json(ApiResponse::error(&format!(
"Failed to fetch templates: {}",
e
))),
)),
}
}
pub async fn list_global_templates(
State(state): State<AppState>,
) -> Result<impl IntoResponse, (StatusCode, Json<ApiResponse<()>>)> {
match TaskTemplate::find_by_project_id(&state.db_pool, None).await {
Ok(templates) => Ok(Json(ApiResponse::success(templates))),
Err(e) => Err((
StatusCode::INTERNAL_SERVER_ERROR,
Json(ApiResponse::error(&format!(
"Failed to fetch global templates: {}",
e
))),
)),
}
}
pub async fn get_template(
Extension(template): Extension<TaskTemplate>,
) -> Result<impl IntoResponse, (StatusCode, Json<ApiResponse<()>>)> {
Ok(Json(ApiResponse::success(template)))
}
pub async fn create_template(
State(state): State<AppState>,
Json(payload): Json<CreateTaskTemplate>,
) -> Result<impl IntoResponse, (StatusCode, Json<ApiResponse<()>>)> {
match TaskTemplate::create(&state.db_pool, &payload).await {
Ok(template) => Ok((StatusCode::CREATED, Json(ApiResponse::success(template)))),
Err(e) => {
if e.to_string().contains("UNIQUE constraint failed") {
Err((
StatusCode::CONFLICT,
Json(ApiResponse::error(
"A template with this name already exists in this scope",
)),
))
} else {
Err((
StatusCode::INTERNAL_SERVER_ERROR,
Json(ApiResponse::error(&format!(
"Failed to create template: {}",
e
))),
))
}
}
}
}
pub async fn update_template(
Extension(template): Extension<TaskTemplate>,
State(state): State<AppState>,
Json(payload): Json<UpdateTaskTemplate>,
) -> Result<impl IntoResponse, (StatusCode, Json<ApiResponse<()>>)> {
match TaskTemplate::update(&state.db_pool, template.id, &payload).await {
Ok(template) => Ok(Json(ApiResponse::success(template))),
Err(e) => {
if matches!(e, sqlx::Error::RowNotFound) {
Err((
StatusCode::NOT_FOUND,
Json(ApiResponse::error("Template not found")),
))
} else if e.to_string().contains("UNIQUE constraint failed") {
Err((
StatusCode::CONFLICT,
Json(ApiResponse::error(
"A template with this name already exists in this scope",
)),
))
} else {
Err((
StatusCode::INTERNAL_SERVER_ERROR,
Json(ApiResponse::error(&format!(
"Failed to update template: {}",
e
))),
))
}
}
}
}
pub async fn delete_template(
Extension(template): Extension<TaskTemplate>,
State(state): State<AppState>,
) -> Result<impl IntoResponse, (StatusCode, Json<ApiResponse<()>>)> {
match TaskTemplate::delete(&state.db_pool, template.id).await {
Ok(0) => Err((
StatusCode::NOT_FOUND,
Json(ApiResponse::error("Template not found")),
)),
Ok(_) => Ok(Json(ApiResponse::success(()))),
Err(e) => Err((
StatusCode::INTERNAL_SERVER_ERROR,
Json(ApiResponse::error(&format!(
"Failed to delete template: {}",
e
))),
)),
}
}
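The handlers above receive `Extension<TaskTemplate>`, so a route layer must look up the template and inject it before the handler runs. A hypothetical loader middleware could look roughly like this; `TaskTemplate::find_by_id`, the `:template_id` path segment, and the exact route layout are assumptions, not shown in this diff.

use axum::{extract::Request, middleware::Next, response::Response};

async fn load_task_template_middleware(
    State(state): State<AppState>,
    Path(template_id): Path<Uuid>,
    mut req: Request,
    next: Next,
) -> Result<Response, StatusCode> {
    // Hypothetical lookup; the real model method may differ.
    match TaskTemplate::find_by_id(&state.db_pool, template_id).await {
        Ok(Some(template)) => {
            req.extensions_mut().insert(template);
            Ok(next.run(req).await)
        }
        Ok(None) => Err(StatusCode::NOT_FOUND),
        Err(_) => Err(StatusCode::INTERNAL_SERVER_ERROR),
    }
}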


@@ -1,277 +0,0 @@
use axum::{
extract::State, http::StatusCode, response::Json as ResponseJson, routing::get, Extension,
Json, Router,
};
use uuid::Uuid;
use crate::{
app_state::AppState,
execution_monitor,
models::{
project::Project,
task::{CreateTask, CreateTaskAndStart, Task, TaskWithAttemptStatus, UpdateTask},
task_attempt::{CreateTaskAttempt, TaskAttempt},
ApiResponse,
},
};
pub async fn get_project_tasks(
Extension(project): Extension<Project>,
State(app_state): State<AppState>,
) -> Result<ResponseJson<ApiResponse<Vec<TaskWithAttemptStatus>>>, StatusCode> {
match Task::find_by_project_id_with_attempt_status(&app_state.db_pool, project.id).await {
Ok(tasks) => Ok(ResponseJson(ApiResponse::success(tasks))),
Err(e) => {
tracing::error!("Failed to fetch tasks for project {}: {}", project.id, e);
Err(StatusCode::INTERNAL_SERVER_ERROR)
}
}
}
pub async fn get_task(
Extension(task): Extension<Task>,
) -> Result<ResponseJson<ApiResponse<Task>>, StatusCode> {
Ok(ResponseJson(ApiResponse::success(task)))
}
pub async fn create_task(
Extension(project): Extension<Project>,
State(app_state): State<AppState>,
Json(mut payload): Json<CreateTask>,
) -> Result<ResponseJson<ApiResponse<Task>>, StatusCode> {
let id = Uuid::new_v4();
// Ensure the project_id in the payload matches the project from middleware
payload.project_id = project.id;
tracing::debug!(
"Creating task '{}' in project {}",
payload.title,
project.id
);
match Task::create(&app_state.db_pool, &payload, id).await {
Ok(task) => {
// Track task creation event
app_state
.track_analytics_event(
"task_created",
Some(serde_json::json!({
"task_id": task.id.to_string(),
"project_id": project.id.to_string(),
"has_description": task.description.is_some(),
})),
)
.await;
Ok(ResponseJson(ApiResponse::success(task)))
}
Err(e) => {
tracing::error!("Failed to create task: {}", e);
Err(StatusCode::INTERNAL_SERVER_ERROR)
}
}
}
pub async fn create_task_and_start(
Extension(project): Extension<Project>,
State(app_state): State<AppState>,
Json(mut payload): Json<CreateTaskAndStart>,
) -> Result<ResponseJson<ApiResponse<Task>>, StatusCode> {
let task_id = Uuid::new_v4();
// Ensure the project_id in the payload matches the project from middleware
payload.project_id = project.id;
tracing::debug!(
"Creating and starting task '{}' in project {}",
payload.title,
project.id
);
// Create the task first
let create_task_payload = CreateTask {
project_id: payload.project_id,
title: payload.title.clone(),
description: payload.description.clone(),
parent_task_attempt: payload.parent_task_attempt,
};
let task = match Task::create(&app_state.db_pool, &create_task_payload, task_id).await {
Ok(task) => task,
Err(e) => {
tracing::error!("Failed to create task: {}", e);
return Err(StatusCode::INTERNAL_SERVER_ERROR);
}
};
// Create task attempt
let executor_string = payload.executor.as_ref().map(|exec| exec.to_string());
let attempt_payload = CreateTaskAttempt {
executor: executor_string.clone(),
base_branch: None, // Not supported in task creation endpoint, only in task attempts
};
match TaskAttempt::create(&app_state.db_pool, &attempt_payload, task_id).await {
Ok(attempt) => {
app_state
.track_analytics_event(
"task_created",
Some(serde_json::json!({
"task_id": task.id.to_string(),
"project_id": project.id.to_string(),
"has_description": task.description.is_some(),
})),
)
.await;
app_state
.track_analytics_event(
"task_attempt_started",
Some(serde_json::json!({
"task_id": task.id.to_string(),
"executor_type": executor_string.as_deref().unwrap_or("default"),
"attempt_id": attempt.id.to_string(),
})),
)
.await;
// Start execution asynchronously (don't block the response)
let app_state_clone = app_state.clone();
let attempt_id = attempt.id;
tokio::spawn(async move {
if let Err(e) = TaskAttempt::start_execution(
&app_state_clone.db_pool,
&app_state_clone,
attempt_id,
task_id,
project.id,
)
.await
{
tracing::error!(
"Failed to start execution for task attempt {}: {}",
attempt_id,
e
);
}
});
Ok(ResponseJson(ApiResponse::success(task)))
}
Err(e) => {
tracing::error!("Failed to create task attempt: {}", e);
Err(StatusCode::INTERNAL_SERVER_ERROR)
}
}
}
pub async fn update_task(
Extension(project): Extension<Project>,
Extension(existing_task): Extension<Task>,
State(app_state): State<AppState>,
Json(payload): Json<UpdateTask>,
) -> Result<ResponseJson<ApiResponse<Task>>, StatusCode> {
// Use existing values if not provided in update
let title = payload.title.unwrap_or(existing_task.title);
let description = payload.description.or(existing_task.description);
let status = payload.status.unwrap_or(existing_task.status);
let parent_task_attempt = payload
.parent_task_attempt
.or(existing_task.parent_task_attempt);
match Task::update(
&app_state.db_pool,
existing_task.id,
project.id,
title,
description,
status,
parent_task_attempt,
)
.await
{
Ok(task) => Ok(ResponseJson(ApiResponse::success(task))),
Err(e) => {
tracing::error!("Failed to update task: {}", e);
Err(StatusCode::INTERNAL_SERVER_ERROR)
}
}
}
pub async fn delete_task(
Extension(project): Extension<Project>,
Extension(task): Extension<Task>,
State(app_state): State<AppState>,
) -> Result<ResponseJson<ApiResponse<()>>, StatusCode> {
// Clean up all worktrees for this task before deletion
if let Err(e) = execution_monitor::cleanup_task_worktrees(&app_state.db_pool, task.id).await {
tracing::error!("Failed to cleanup worktrees for task {}: {}", task.id, e);
// Continue with deletion even if cleanup fails
}
// Clean up all executor sessions for this task before deletion
match TaskAttempt::find_by_task_id(&app_state.db_pool, task.id).await {
Ok(task_attempts) => {
for attempt in task_attempts {
if let Err(e) =
crate::models::executor_session::ExecutorSession::delete_by_task_attempt_id(
&app_state.db_pool,
attempt.id,
)
.await
{
tracing::error!(
"Failed to cleanup executor sessions for task attempt {}: {}",
attempt.id,
e
);
// Continue with deletion even if session cleanup fails
} else {
tracing::debug!(
"Cleaned up executor sessions for task attempt {}",
attempt.id
);
}
}
}
Err(e) => {
tracing::error!("Failed to get task attempts for session cleanup: {}", e);
// Continue with deletion even if we can't get task attempts
}
}
match Task::delete(&app_state.db_pool, task.id, project.id).await {
Ok(rows_affected) => {
if rows_affected == 0 {
Err(StatusCode::NOT_FOUND)
} else {
Ok(ResponseJson(ApiResponse::success(())))
}
}
Err(e) => {
tracing::error!("Failed to delete task: {}", e);
Err(StatusCode::INTERNAL_SERVER_ERROR)
}
}
}
pub fn tasks_project_router() -> Router<AppState> {
use axum::routing::post;
Router::new()
.route(
"/projects/:project_id/tasks",
get(get_project_tasks).post(create_task),
)
.route(
"/projects/:project_id/tasks/create-and-start",
post(create_task_and_start),
)
}
pub fn tasks_with_id_router() -> Router<AppState> {
Router::new().route(
"/projects/:project_id/tasks/:task_id",
get(get_task).put(update_task).delete(delete_task),
)
}
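For completeness, a small sketch (an assumption about the call site, which is not in this diff) of how the two routers above could be merged into one Router<AppState> before being nested under the API prefix:

pub fn tasks_router() -> Router<AppState> {
    Router::new()
        .merge(tasks_project_router())
        .merge(tasks_with_id_router())
}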

File diff suppressed because it is too large


@@ -1,13 +0,0 @@
pub mod analytics;
pub mod git_service;
pub mod github_service;
pub mod notification_service;
pub mod pr_monitor;
pub mod process_service;
pub use analytics::{generate_user_id, AnalyticsConfig, AnalyticsService};
pub use git_service::{GitService, GitServiceError};
pub use github_service::{CreatePrRequest, GitHubRepoInfo, GitHubService, GitHubServiceError};
pub use notification_service::{NotificationConfig, NotificationService};
pub use pr_monitor::PrMonitorService;
pub use process_service::ProcessService;


@@ -1,214 +0,0 @@
use std::{sync::Arc, time::Duration};
use sqlx::SqlitePool;
use tokio::{sync::RwLock, time::interval};
use tracing::{debug, error, info, warn};
use uuid::Uuid;
use crate::{
models::{
config::Config,
task::{Task, TaskStatus},
task_attempt::TaskAttempt,
},
services::{GitHubRepoInfo, GitHubService, GitService},
};
/// Service to monitor GitHub PRs and update task status when they are merged
pub struct PrMonitorService {
pool: SqlitePool,
poll_interval: Duration,
}
#[derive(Debug)]
pub struct PrInfo {
pub attempt_id: Uuid,
pub task_id: Uuid,
pub project_id: Uuid,
pub pr_number: i64,
pub repo_owner: String,
pub repo_name: String,
pub github_token: String,
}
impl PrMonitorService {
pub fn new(pool: SqlitePool) -> Self {
Self {
pool,
poll_interval: Duration::from_secs(60), // Check every minute
}
}
/// Start the PR monitoring service with config
pub async fn start_with_config(&self, config: Arc<RwLock<Config>>) {
info!(
"Starting PR monitoring service with interval {:?}",
self.poll_interval
);
let mut interval = interval(self.poll_interval);
loop {
interval.tick().await;
// Get GitHub token from config
let github_token = {
let config_read = config.read().await;
if config_read.github.pat.is_some() {
config_read.github.pat.clone()
} else {
config_read.github.token.clone()
}
};
match github_token {
Some(token) => {
if let Err(e) = self.check_all_open_prs_with_token(&token).await {
error!("Error checking PRs: {}", e);
}
}
None => {
debug!("No GitHub token configured, skipping PR monitoring");
}
}
}
}
/// Check all open PRs for updates with the provided GitHub token
async fn check_all_open_prs_with_token(
&self,
github_token: &str,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let open_prs = self.get_open_prs_with_token(github_token).await?;
if open_prs.is_empty() {
debug!("No open PRs to check");
return Ok(());
}
info!("Checking {} open PRs", open_prs.len());
for pr_info in open_prs {
if let Err(e) = self.check_pr_status(&pr_info).await {
error!(
"Error checking PR #{} for attempt {}: {}",
pr_info.pr_number, pr_info.attempt_id, e
);
}
}
Ok(())
}
/// Get all task attempts with open PRs using the provided GitHub token
async fn get_open_prs_with_token(
&self,
github_token: &str,
) -> Result<Vec<PrInfo>, sqlx::Error> {
let rows = sqlx::query!(
r#"SELECT
ta.id as "attempt_id!: Uuid",
ta.task_id as "task_id!: Uuid",
ta.pr_number as "pr_number!: i64",
ta.pr_url,
t.project_id as "project_id!: Uuid",
p.git_repo_path
FROM task_attempts ta
JOIN tasks t ON ta.task_id = t.id
JOIN projects p ON t.project_id = p.id
WHERE ta.pr_status = 'open' AND ta.pr_number IS NOT NULL"#
)
.fetch_all(&self.pool)
.await?;
let mut pr_infos = Vec::new();
for row in rows {
// Get GitHub repo info from local git repository
match GitService::new(&row.git_repo_path) {
Ok(git_service) => match git_service.get_github_repo_info() {
Ok((owner, repo_name)) => {
pr_infos.push(PrInfo {
attempt_id: row.attempt_id,
task_id: row.task_id,
project_id: row.project_id,
pr_number: row.pr_number,
repo_owner: owner,
repo_name,
github_token: github_token.to_string(),
});
}
Err(e) => {
warn!(
"Could not extract repo info from git path {}: {}",
row.git_repo_path, e
);
}
},
Err(e) => {
warn!(
"Could not create git service for path {}: {}",
row.git_repo_path, e
);
}
}
}
Ok(pr_infos)
}
/// Check the status of a specific PR
async fn check_pr_status(
&self,
pr_info: &PrInfo,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let github_service = GitHubService::new(&pr_info.github_token)?;
let repo_info = GitHubRepoInfo {
owner: pr_info.repo_owner.clone(),
repo_name: pr_info.repo_name.clone(),
};
let pr_status = github_service
.update_pr_status(&repo_info, pr_info.pr_number)
.await?;
debug!(
"PR #{} status: {} (was open)",
pr_info.pr_number, pr_status.status
);
// Update the PR status in the database
if pr_status.status != "open" {
// Extract merge commit SHA if the PR was merged
let merge_commit_sha = pr_status.merge_commit_sha.as_deref();
TaskAttempt::update_pr_status(
&self.pool,
pr_info.attempt_id,
&pr_status.status,
pr_status.merged_at,
merge_commit_sha,
)
.await?;
// If the PR was merged, update the task status to done
if pr_status.merged {
info!(
"PR #{} was merged, updating task {} to done",
pr_info.pr_number, pr_info.task_id
);
Task::update_status(
&self.pool,
pr_info.task_id,
pr_info.project_id,
TaskStatus::Done,
)
.await?;
}
}
Ok(())
}
}
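A likely call site (an assumption; the actual wiring is not in this diff) simply spawns the monitor next to the HTTP server at startup:

fn spawn_pr_monitor(pool: SqlitePool, config: Arc<RwLock<Config>>) {
    tokio::spawn(async move {
        // Polls every 60 seconds; ticks are skipped when no GitHub token is configured.
        PrMonitorService::new(pool).start_with_config(config).await;
    });
}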


@@ -1,944 +0,0 @@
use std::str::FromStr;
use sqlx::SqlitePool;
use tracing::{debug, info};
use uuid::Uuid;
use crate::{
command_runner,
executor::Executor,
models::{
execution_process::{CreateExecutionProcess, ExecutionProcess, ExecutionProcessType},
executor_session::{CreateExecutorSession, ExecutorSession},
project::Project,
task::Task,
task_attempt::{TaskAttempt, TaskAttemptError},
},
utils::shell::get_shell_command,
};
/// Service responsible for managing process execution lifecycle
pub struct ProcessService;
impl ProcessService {
/// Run cleanup script if project has one configured
pub async fn run_cleanup_script_if_configured(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project_id: Uuid,
) -> Result<(), TaskAttemptError> {
// Get project to check if cleanup script exists
let project = Project::find_by_id(pool, project_id)
.await?
.ok_or(TaskAttemptError::ProjectNotFound)?;
if Self::should_run_cleanup_script(&project) {
// Get worktree path
let task_attempt = TaskAttempt::find_by_id(pool, attempt_id).await?.ok_or(
TaskAttemptError::ValidationError("Task attempt not found".to_string()),
)?;
tracing::info!(
"Running cleanup script for project {} in attempt {}",
project_id,
attempt_id
);
Self::start_cleanup_script(
pool,
app_state,
attempt_id,
task_id,
&project,
&task_attempt.worktree_path,
)
.await?;
} else {
tracing::debug!("No cleanup script configured for project {}", project_id);
}
Ok(())
}
/// Automatically run setup if needed, then continue with the specified operation
pub async fn auto_setup_and_execute(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project_id: Uuid,
operation: &str, // "dev_server", "coding_agent", or "followup"
operation_params: Option<serde_json::Value>,
) -> Result<(), TaskAttemptError> {
// Check if setup is completed for this worktree
let setup_completed = TaskAttempt::is_setup_completed(pool, attempt_id).await?;
// Get project to check if setup script exists
let project = Project::find_by_id(pool, project_id)
.await?
.ok_or(TaskAttemptError::ProjectNotFound)?;
let needs_setup = Self::should_run_setup_script(&project) && !setup_completed;
if needs_setup {
// Run setup with delegation to the original operation
Self::execute_setup_with_delegation(
pool,
app_state,
attempt_id,
task_id,
project_id,
operation,
operation_params,
)
.await
} else {
// Setup not needed or already completed, continue with original operation
match operation {
"dev_server" => {
Self::start_dev_server_direct(pool, app_state, attempt_id, task_id, project_id)
.await
}
"coding_agent" => {
Self::start_coding_agent(pool, app_state, attempt_id, task_id, project_id).await
}
"followup" => {
let prompt = operation_params
.as_ref()
.and_then(|p| p.get("prompt"))
.and_then(|p| p.as_str())
.unwrap_or("");
Self::start_followup_execution_direct(
pool, app_state, attempt_id, task_id, project_id, prompt,
)
.await
.map(|_| ())
}
_ => Err(TaskAttemptError::ValidationError(format!(
"Unknown operation: {}",
operation
))),
}
}
}
/// Execute setup script with delegation context for continuing after completion
async fn execute_setup_with_delegation(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project_id: Uuid,
delegate_to: &str,
operation_params: Option<serde_json::Value>,
) -> Result<(), TaskAttemptError> {
let (task_attempt, project) =
Self::load_execution_context(pool, attempt_id, project_id).await?;
// Create delegation context for execution monitor
let delegation_context = serde_json::json!({
"delegate_to": delegate_to,
"operation_params": {
"task_id": task_id,
"project_id": project_id,
"attempt_id": attempt_id,
"additional": operation_params
}
});
// Create modified setup script execution with delegation context in args
let setup_script = project.setup_script.as_ref().unwrap();
let process_id = Uuid::new_v4();
// Create execution process record with delegation context
let _execution_process = Self::create_execution_process_record_with_delegation(
pool,
attempt_id,
process_id,
setup_script,
&task_attempt.worktree_path,
delegation_context,
)
.await?;
// Setup script starting with delegation
tracing::info!(
"Starting setup script with delegation to {} for task attempt {}",
delegate_to,
attempt_id
);
// Execute the setup script
let child = Self::execute_setup_script_process(
setup_script,
pool,
task_id,
attempt_id,
process_id,
&task_attempt.worktree_path,
)
.await?;
// Register for monitoring
Self::register_for_monitoring(
app_state,
process_id,
attempt_id,
&ExecutionProcessType::SetupScript,
child,
)
.await;
tracing::info!(
"Started setup execution with delegation {} for task attempt {}",
process_id,
attempt_id
);
Ok(())
}
/// Start the execution flow for a task attempt (setup script + executor)
pub async fn start_execution(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project_id: Uuid,
) -> Result<(), TaskAttemptError> {
use crate::models::task::{Task, TaskStatus};
// Load required entities
let (task_attempt, project) =
Self::load_execution_context(pool, attempt_id, project_id).await?;
// Update task status to indicate execution has started
Task::update_status(pool, task_id, project_id, TaskStatus::InProgress).await?;
// Determine execution sequence based on project configuration
if Self::should_run_setup_script(&project) {
Self::start_setup_script(
pool,
app_state,
attempt_id,
task_id,
&project,
&task_attempt.worktree_path,
)
.await
} else {
Self::start_coding_agent(pool, app_state, attempt_id, task_id, project_id).await
}
}
/// Start the coding agent after setup is complete or if no setup is needed
pub async fn start_coding_agent(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
_project_id: Uuid,
) -> Result<(), TaskAttemptError> {
let task_attempt = TaskAttempt::find_by_id(pool, attempt_id)
.await?
.ok_or(TaskAttemptError::TaskNotFound)?;
let executor_config = Self::resolve_executor_config(&task_attempt.executor);
Self::start_process_execution(
pool,
app_state,
attempt_id,
task_id,
crate::executor::ExecutorType::CodingAgent {
config: executor_config,
follow_up: None,
},
"Starting executor".to_string(),
ExecutionProcessType::CodingAgent,
&task_attempt.worktree_path,
)
.await
}
/// Start a dev server for this task attempt (with automatic setup)
pub async fn start_dev_server(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project_id: Uuid,
) -> Result<(), TaskAttemptError> {
// Ensure worktree exists (recreate if needed for cold task support)
let _worktree_path =
TaskAttempt::ensure_worktree_exists(pool, attempt_id, project_id, "dev server").await?;
// Use automatic setup logic
Self::auto_setup_and_execute(
pool,
app_state,
attempt_id,
task_id,
project_id,
"dev_server",
None,
)
.await
}
/// Start a dev server directly without setup check (internal method)
pub async fn start_dev_server_direct(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project_id: Uuid,
) -> Result<(), TaskAttemptError> {
// Ensure worktree exists (recreate if needed for cold task support)
let worktree_path =
TaskAttempt::ensure_worktree_exists(pool, attempt_id, project_id, "dev server").await?;
// Get the project to access the dev_script
let project = Project::find_by_id(pool, project_id)
.await?
.ok_or(TaskAttemptError::ProjectNotFound)?;
let dev_script = project.dev_script.ok_or_else(|| {
TaskAttemptError::ValidationError(
"No dev script configured for this project".to_string(),
)
})?;
if dev_script.trim().is_empty() {
return Err(TaskAttemptError::ValidationError(
"Dev script is empty".to_string(),
));
}
let result = Self::start_process_execution(
pool,
app_state,
attempt_id,
task_id,
crate::executor::ExecutorType::DevServer(dev_script),
"Starting dev server".to_string(),
ExecutionProcessType::DevServer,
&worktree_path,
)
.await;
if result.is_ok() {
app_state
.track_analytics_event(
"dev_server_started",
Some(serde_json::json!({
"task_id": task_id.to_string(),
"project_id": project_id.to_string(),
"attempt_id": attempt_id.to_string()
})),
)
.await;
}
result
}
/// Start a follow-up execution using the same executor type as the first process (with automatic setup)
/// Returns the attempt_id that was actually used (always the original attempt_id for session continuity)
pub async fn start_followup_execution(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project_id: Uuid,
prompt: &str,
) -> Result<Uuid, TaskAttemptError> {
use crate::models::task::{Task, TaskStatus};
// Get the current task attempt to check if worktree is deleted
let current_attempt = TaskAttempt::find_by_id(pool, attempt_id)
.await?
.ok_or(TaskAttemptError::TaskNotFound)?;
let actual_attempt_id = attempt_id;
if current_attempt.worktree_deleted {
info!(
"Resurrecting deleted attempt {} (branch: {}) for followup execution - maintaining session continuity",
attempt_id, current_attempt.branch
);
} else {
info!(
"Continuing followup execution on active attempt {} (branch: {})",
attempt_id, current_attempt.branch
);
}
// Update task status to indicate follow-up execution has started
Task::update_status(pool, task_id, project_id, TaskStatus::InProgress).await?;
// Ensure worktree exists (recreate if needed for cold task support)
// This will resurrect the worktree at the exact same path for session continuity
let _worktree_path =
TaskAttempt::ensure_worktree_exists(pool, actual_attempt_id, project_id, "followup")
.await?;
// Use automatic setup logic with followup parameters
let operation_params = serde_json::json!({
"prompt": prompt
});
Self::auto_setup_and_execute(
pool,
app_state,
attempt_id,
task_id,
project_id,
"followup",
Some(operation_params),
)
.await?;
Ok(actual_attempt_id)
}
/// Start a follow-up execution directly without setup check (internal method)
pub async fn start_followup_execution_direct(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project_id: Uuid,
prompt: &str,
) -> Result<Uuid, TaskAttemptError> {
// Ensure worktree exists (recreate if needed for cold task support)
// This will resurrect the worktree at the exact same path for session continuity
let worktree_path =
TaskAttempt::ensure_worktree_exists(pool, attempt_id, project_id, "followup").await?;
// Find the most recent coding agent execution process to get the executor type
// Look up processes from the ORIGINAL attempt to find the session
let execution_processes =
ExecutionProcess::find_by_task_attempt_id(pool, attempt_id).await?;
let most_recent_coding_agent = execution_processes
.iter()
.rev() // Reverse to get most recent first (since they're ordered by created_at ASC)
.find(|p| matches!(p.process_type, ExecutionProcessType::CodingAgent))
.ok_or_else(|| {
tracing::error!(
"No previous coding agent execution found for task attempt {}. Found {} processes: {:?}",
attempt_id,
execution_processes.len(),
execution_processes.iter().map(|p| format!("{:?}", p.process_type)).collect::<Vec<_>>()
);
TaskAttemptError::ValidationError("No previous coding agent execution found for follow-up".to_string())
})?;
// Get the executor session to find the session ID
// This looks up the session from the original attempt's processes
let executor_session =
ExecutorSession::find_by_execution_process_id(pool, most_recent_coding_agent.id)
.await?
.ok_or_else(|| {
tracing::error!(
"No executor session found for execution process {} (task attempt {})",
most_recent_coding_agent.id,
attempt_id
);
TaskAttemptError::ValidationError(
"No executor session found for follow-up".to_string(),
)
})?;
let executor_config: crate::executor::ExecutorConfig = match most_recent_coding_agent
.executor_type
.as_deref()
.and_then(|executor_str| executor_str.parse::<crate::executor::ExecutorConfig>().ok())
{
Some(config) => config,
None => {
tracing::error!(
"Invalid or missing executor type '{}' for execution process {} (task attempt {})",
most_recent_coding_agent.executor_type.as_deref().unwrap_or("None"),
most_recent_coding_agent.id,
attempt_id
);
return Err(TaskAttemptError::ValidationError(format!(
"Invalid executor type for follow-up: {}",
most_recent_coding_agent
.executor_type
.as_deref()
.unwrap_or("None")
)));
}
};
// Try to use follow-up with session ID, but fall back to new session if it fails
let followup_executor = if let Some(session_id) = &executor_session.session_id {
// First try with session ID for continuation
debug!(
"SESSION_FOLLOWUP: Attempting follow-up execution with session ID: {} (attempt: {}, worktree: {})",
session_id, attempt_id, worktree_path
);
crate::executor::ExecutorType::CodingAgent {
config: executor_config.clone(),
follow_up: Some(crate::executor::FollowUpInfo {
session_id: session_id.clone(),
prompt: prompt.to_string(),
}),
}
} else {
// No session ID available, start new session
tracing::warn!(
"SESSION_FOLLOWUP: No session ID available for follow-up execution on attempt {}, starting new session (worktree: {})",
attempt_id, worktree_path
);
crate::executor::ExecutorType::CodingAgent {
config: executor_config.clone(),
follow_up: None,
}
};
// Try to start the follow-up execution
let execution_result = Self::start_process_execution(
pool,
app_state,
attempt_id,
task_id,
followup_executor,
"Starting follow-up executor".to_string(),
ExecutionProcessType::CodingAgent,
&worktree_path,
)
.await;
// If follow-up execution failed and we tried to use a session ID,
// fall back to a new session
if execution_result.is_err() && executor_session.session_id.is_some() {
tracing::warn!(
"SESSION_FOLLOWUP: Follow-up execution with session ID '{}' failed for attempt {}, falling back to new session. Error: {:?}",
executor_session.session_id.as_ref().unwrap(),
attempt_id,
execution_result.as_ref().err()
);
// Create a new session instead of trying to resume
let new_session_executor = crate::executor::ExecutorType::CodingAgent {
config: executor_config,
follow_up: None,
};
Self::start_process_execution(
pool,
app_state,
attempt_id,
task_id,
new_session_executor,
"Starting new executor session (follow-up session failed)".to_string(),
ExecutionProcessType::CodingAgent,
&worktree_path,
)
.await?;
} else {
// Either it succeeded or we already tried without session ID
execution_result?;
}
Ok(attempt_id)
}
/// Unified function to start any type of process execution
#[allow(clippy::too_many_arguments)]
pub async fn start_process_execution(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
executor_type: crate::executor::ExecutorType,
activity_note: String,
process_type: ExecutionProcessType,
worktree_path: &str,
) -> Result<(), TaskAttemptError> {
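// Generate the process id up front so the DB record, the executor session, and the monitoring registration all share the same id.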
let process_id = Uuid::new_v4();
// Create execution process record
let _execution_process = Self::create_execution_process_record(
pool,
attempt_id,
process_id,
&executor_type,
process_type.clone(),
worktree_path,
)
.await?;
// Create executor session for coding agents
if matches!(process_type, ExecutionProcessType::CodingAgent) {
// Extract follow-up prompt if this is a follow-up execution
let followup_prompt = match &executor_type {
crate::executor::ExecutorType::CodingAgent {
follow_up: Some(ref info),
..
} => Some(info.prompt.clone()),
_ => None,
};
Self::create_executor_session_record(
pool,
attempt_id,
task_id,
process_id,
followup_prompt,
)
.await?;
}
// Process started successfully
tracing::info!("Starting {} for task attempt {}", activity_note, attempt_id);
// Execute the process
let child = Self::execute_process(
&executor_type,
pool,
task_id,
attempt_id,
process_id,
worktree_path,
)
.await?;
// Register for monitoring
Self::register_for_monitoring(app_state, process_id, attempt_id, &process_type, child)
.await;
tracing::info!(
"Started execution {} for task attempt {}",
process_id,
attempt_id
);
Ok(())
}
/// Load the execution context (task attempt and project) with validation
async fn load_execution_context(
pool: &SqlitePool,
attempt_id: Uuid,
project_id: Uuid,
) -> Result<(TaskAttempt, Project), TaskAttemptError> {
let task_attempt = TaskAttempt::find_by_id(pool, attempt_id)
.await?
.ok_or(TaskAttemptError::TaskNotFound)?;
let project = Project::find_by_id(pool, project_id)
.await?
.ok_or(TaskAttemptError::ProjectNotFound)?;
Ok((task_attempt, project))
}
/// Check if setup script should be executed
fn should_run_setup_script(project: &Project) -> bool {
project
.setup_script
.as_ref()
.map(|script| !script.trim().is_empty())
.unwrap_or(false)
}
fn should_run_cleanup_script(project: &Project) -> bool {
project
.cleanup_script
.as_ref()
.map(|script| !script.trim().is_empty())
.unwrap_or(false)
}
/// Start the setup script execution
async fn start_setup_script(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project: &Project,
worktree_path: &str,
) -> Result<(), TaskAttemptError> {
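// Unwrap is safe as long as callers gate on should_run_setup_script, which requires a present, non-empty script.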
let setup_script = project.setup_script.as_ref().unwrap();
Self::start_process_execution(
pool,
app_state,
attempt_id,
task_id,
crate::executor::ExecutorType::SetupScript(setup_script.clone()),
"Starting setup script".to_string(),
ExecutionProcessType::SetupScript,
worktree_path,
)
.await
}
/// Start the cleanup script execution
async fn start_cleanup_script(
pool: &SqlitePool,
app_state: &crate::app_state::AppState,
attempt_id: Uuid,
task_id: Uuid,
project: &Project,
worktree_path: &str,
) -> Result<(), TaskAttemptError> {
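// Unwrap is safe as long as callers gate on should_run_cleanup_script, which requires a present, non-empty script.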
let cleanup_script = project.cleanup_script.as_ref().unwrap();
Self::start_process_execution(
pool,
app_state,
attempt_id,
task_id,
crate::executor::ExecutorType::CleanupScript(cleanup_script.clone()),
"Starting cleanup script".to_string(),
ExecutionProcessType::CleanupScript,
worktree_path,
)
.await
}
/// Resolve executor configuration from string name
fn resolve_executor_config(executor_name: &Option<String>) -> crate::executor::ExecutorConfig {
if let Some(name) = executor_name {
crate::executor::ExecutorConfig::from_str(name).unwrap_or_else(|_| {
tracing::warn!(
"Unknown executor type '{}', defaulting to EchoExecutor",
name
);
crate::executor::ExecutorConfig::Echo
})
} else {
tracing::warn!("No executor type specified, defaulting to EchoExecutor");
crate::executor::ExecutorConfig::Echo
}
}
/// Create execution process database record
async fn create_execution_process_record(
pool: &SqlitePool,
attempt_id: Uuid,
process_id: Uuid,
executor_type: &crate::executor::ExecutorType,
process_type: ExecutionProcessType,
worktree_path: &str,
) -> Result<ExecutionProcess, TaskAttemptError> {
let (shell_cmd, shell_arg) = get_shell_command();
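// Build the command/args to record for this process; args are serialized as a JSON array so they can be parsed back later (e.g. by the execution monitor).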
let (command, args, executor_type_string) = match executor_type {
crate::executor::ExecutorType::SetupScript(_) => (
shell_cmd.to_string(),
Some(serde_json::to_string(&[shell_arg, "setup-script"]).unwrap()),
Some("setup-script".to_string()),
),
crate::executor::ExecutorType::CleanupScript(_) => (
shell_cmd.to_string(),
Some(serde_json::to_string(&[shell_arg, "cleanup-script"]).unwrap()),
Some("cleanup-script".to_string()),
),
crate::executor::ExecutorType::DevServer(_) => (
shell_cmd.to_string(),
Some(serde_json::to_string(&[shell_arg, "dev_server"]).unwrap()),
None, // Dev servers don't have an executor type
),
crate::executor::ExecutorType::CodingAgent { config, follow_up } => {
let command = if follow_up.is_some() {
"followup_executor".to_string()
} else {
"executor".to_string()
};
(command, None, Some(format!("{}", config)))
}
};
let create_process = CreateExecutionProcess {
task_attempt_id: attempt_id,
process_type,
executor_type: executor_type_string,
command,
args,
working_directory: worktree_path.to_string(),
};
ExecutionProcess::create(pool, &create_process, process_id)
.await
.map_err(TaskAttemptError::from)
}
/// Create executor session record for coding agents
async fn create_executor_session_record(
pool: &SqlitePool,
attempt_id: Uuid,
task_id: Uuid,
process_id: Uuid,
followup_prompt: Option<String>,
) -> Result<(), TaskAttemptError> {
// Use follow-up prompt if provided, otherwise get the task to create prompt
let prompt = if let Some(followup_prompt) = followup_prompt {
followup_prompt
} else {
let task = Task::find_by_id(pool, task_id)
.await?
.ok_or(TaskAttemptError::TaskNotFound)?;
format!("{}\n\n{}", task.title, task.description.unwrap_or_default())
};
let session_id = Uuid::new_v4();
let create_session = CreateExecutorSession {
task_attempt_id: attempt_id,
execution_process_id: process_id,
prompt: Some(prompt),
};
ExecutorSession::create(pool, &create_session, session_id)
.await
.map(|_| ())
.map_err(TaskAttemptError::from)
}
/// Execute the process based on type
async fn execute_process(
executor_type: &crate::executor::ExecutorType,
pool: &SqlitePool,
task_id: Uuid,
attempt_id: Uuid,
process_id: Uuid,
worktree_path: &str,
) -> Result<command_runner::CommandProcess, TaskAttemptError> {
use crate::executors::{CleanupScriptExecutor, DevServerExecutor, SetupScriptExecutor};
let result = match executor_type {
crate::executor::ExecutorType::SetupScript(script) => {
let executor = SetupScriptExecutor {
script: script.clone(),
};
executor
.execute_streaming(pool, task_id, attempt_id, process_id, worktree_path)
.await
}
crate::executor::ExecutorType::CleanupScript(script) => {
let executor = CleanupScriptExecutor {
script: script.clone(),
};
executor
.execute_streaming(pool, task_id, attempt_id, process_id, worktree_path)
.await
}
crate::executor::ExecutorType::DevServer(script) => {
let executor = DevServerExecutor {
script: script.clone(),
};
executor
.execute_streaming(pool, task_id, attempt_id, process_id, worktree_path)
.await
}
crate::executor::ExecutorType::CodingAgent { config, follow_up } => {
let executor = config.create_executor();
if let Some(ref follow_up_info) = follow_up {
executor
.execute_followup_streaming(
pool,
task_id,
attempt_id,
process_id,
&follow_up_info.session_id,
&follow_up_info.prompt,
worktree_path,
)
.await
} else {
executor
.execute_streaming(pool, task_id, attempt_id, process_id, worktree_path)
.await
}
}
};
result.map_err(|e| TaskAttemptError::Git(git2::Error::from_str(&e.to_string())))
}
/// Register process for monitoring
async fn register_for_monitoring(
app_state: &crate::app_state::AppState,
process_id: Uuid,
attempt_id: Uuid,
process_type: &ExecutionProcessType,
child: command_runner::CommandProcess,
) {
let execution_type = match process_type {
ExecutionProcessType::SetupScript => crate::app_state::ExecutionType::SetupScript,
ExecutionProcessType::CleanupScript => crate::app_state::ExecutionType::CleanupScript,
ExecutionProcessType::CodingAgent => crate::app_state::ExecutionType::CodingAgent,
ExecutionProcessType::DevServer => crate::app_state::ExecutionType::DevServer,
};
app_state
.add_running_execution(
process_id,
crate::app_state::RunningExecution {
task_attempt_id: attempt_id,
_execution_type: execution_type,
child,
},
)
.await;
}
/// Create execution process database record with delegation context
async fn create_execution_process_record_with_delegation(
pool: &SqlitePool,
attempt_id: Uuid,
process_id: Uuid,
_setup_script: &str,
worktree_path: &str,
delegation_context: serde_json::Value,
) -> Result<ExecutionProcess, TaskAttemptError> {
let (shell_cmd, shell_arg) = get_shell_command();
// Store delegation context in args for execution monitor to read
let args_with_delegation = serde_json::json!([
shell_arg,
"setup-script",
"--delegation-context",
delegation_context.to_string()
]);
let create_process = CreateExecutionProcess {
task_attempt_id: attempt_id,
process_type: ExecutionProcessType::SetupScript,
executor_type: Some("setup-script".to_string()),
command: shell_cmd.to_string(),
args: Some(args_with_delegation.to_string()),
working_directory: worktree_path.to_string(),
};
ExecutionProcess::create(pool, &create_process, process_id)
.await
.map_err(TaskAttemptError::from)
}
/// Execute setup script process specifically
async fn execute_setup_script_process(
setup_script: &str,
pool: &SqlitePool,
task_id: Uuid,
attempt_id: Uuid,
process_id: Uuid,
worktree_path: &str,
) -> Result<command_runner::CommandProcess, TaskAttemptError> {
use crate::executors::SetupScriptExecutor;
let executor = SetupScriptExecutor {
script: setup_script.to_string(),
};
executor
.execute_streaming(pool, task_id, attempt_id, process_id, worktree_path)
.await
.map_err(|e| TaskAttemptError::Git(git2::Error::from_str(&e.to_string())))
}
}
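For orientation, a minimal usage sketch of the flow above (hypothetical call site; TaskAttempt as the receiver of these associated functions and the literal follow-up prompt are assumptions, not shown in this hunk):

// Hypothetical call site: start the attempt (setup script first if configured,
// then the coding agent), and later continue the same session with a follow-up.
async fn run_attempt(
    pool: &sqlx::SqlitePool,
    app_state: &crate::app_state::AppState,
    attempt_id: uuid::Uuid,
    task_id: uuid::Uuid,
    project_id: uuid::Uuid,
) -> Result<(), TaskAttemptError> {
    TaskAttempt::start_execution(pool, app_state, attempt_id, task_id, project_id).await?;

    // Later: follow-ups reuse the original attempt id so session continuity is kept.
    let used_attempt = TaskAttempt::start_followup_execution(
        pool,
        app_state,
        attempt_id,
        task_id,
        project_id,
        "Address the review comments",
    )
    .await?;
    debug_assert_eq!(used_attempt, attempt_id);
    Ok(())
}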


@@ -10,24 +10,24 @@ echo "🔨 Building frontend..."
(cd frontend && npm run build)
echo "🔨 Building Rust binaries..."
cargo build --release --manifest-path backend/Cargo.toml
cargo build --release --bin mcp_task_server --manifest-path backend/Cargo.toml
cargo build --release --manifest-path Cargo.toml
# cargo build --release --bin mcp_task_server --manifest-path Cargo.toml
echo "📦 Creating distribution package..."
# Copy the main binary
cp target/release/vibe-kanban vibe-kanban
cp target/release/mcp_task_server vibe-kanban-mcp
cp target/release/server vibe-kanban
# cp target/release/mcp_task_server vibe-kanban-mcp
zip vibe-kanban.zip vibe-kanban
zip vibe-kanban-mcp.zip vibe-kanban-mcp
# zip vibe-kanban-mcp.zip vibe-kanban-mcp
rm vibe-kanban vibe-kanban-mcp
rm vibe-kanban #vibe-kanban-mcp
mv vibe-kanban.zip npx-cli/dist/macos-arm64/vibe-kanban.zip
mv vibe-kanban-mcp.zip npx-cli/dist/macos-arm64/vibe-kanban-mcp.zip
# mv vibe-kanban-mcp.zip npx-cli/dist/macos-arm64/vibe-kanban-mcp.zip
echo "✅ NPM package ready!"
echo "📁 Files created:"
echo " - npx-cli/dist/macos-arm64/vibe-kanban.zip"
echo " - npx-cli/dist/macos-arm64/vibe-kanban-mcp.zip"
# echo " - npx-cli/dist/macos-arm64/vibe-kanban-mcp.zip"

check-both.sh (new executable file, 23 lines)

@@ -0,0 +1,23 @@
#!/usr/bin/env bash
# ─ load up your Rust/Cargo from ~/.cargo/env ─
if [ -f "$HOME/.cargo/env" ]; then
# this is where `cargo` typically lives
source "$HOME/.cargo/env"
fi
# now run both checks
cargo check --workspace --message-format=json "$@"
cargo check --workspace --message-format=json --features cloud "$@"
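# Extra CLI arguments ("$@") are forwarded to both cargo check invocations above.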
# Add this to .vscode/settings.json to lint both cloud and non-cloud
# {
# // rust-analyzer will still do its usual codelens, inlay, etc. based
# // on whatever "cargo.features" you pick here (can be [] for no-features,
# // or ["foo"] for a specific feature).
# "rust-analyzer.cargo.features": "all",
# // overrideCommand must emit JSON diagnostics. We're just calling our
# // script which in turn calls cargo twice.
# "rust-analyzer.check.overrideCommand": [
# "${workspaceFolder}/check-both.sh"
# ]
# }


@@ -0,0 +1,12 @@
{
"db_name": "SQLite",
"query": "UPDATE tasks SET status = $2, updated_at = CURRENT_TIMESTAMP WHERE id = $1",
"describe": {
"columns": [],
"parameters": {
"Right": 2
},
"nullable": []
},
"hash": "0bf539bafb9c27cb352b0e08722c59a1cca3b6073517c982e5c08f62bc3ef4e4"
}


@@ -1,6 +1,6 @@
{
"db_name": "SQLite",
"query": "INSERT INTO task_attempts (id, task_id, worktree_path, branch, base_branch, merge_commit, executor, pr_url, pr_number, pr_status, pr_merged_at, worktree_deleted, setup_completed_at)\n VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)\n RETURNING id as \"id!: Uuid\", task_id as \"task_id!: Uuid\", worktree_path, branch, base_branch, merge_commit, executor, pr_url, pr_number, pr_status, pr_merged_at as \"pr_merged_at: DateTime<Utc>\", worktree_deleted as \"worktree_deleted!: bool\", setup_completed_at as \"setup_completed_at: DateTime<Utc>\", created_at as \"created_at!: DateTime<Utc>\", updated_at as \"updated_at!: DateTime<Utc>\"",
"query": "INSERT INTO task_attempts (id, task_id, container_ref, branch, base_branch, merge_commit, base_coding_agent, pr_url, pr_number, pr_status, pr_merged_at, worktree_deleted, setup_completed_at)\n VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13)\n RETURNING id as \"id!: Uuid\", task_id as \"task_id!: Uuid\", container_ref, branch, base_branch, merge_commit, base_coding_agent as \"base_coding_agent!\", pr_url, pr_number, pr_status, pr_merged_at as \"pr_merged_at: DateTime<Utc>\", worktree_deleted as \"worktree_deleted!: bool\", setup_completed_at as \"setup_completed_at: DateTime<Utc>\", created_at as \"created_at!: DateTime<Utc>\", updated_at as \"updated_at!: DateTime<Utc>\"",
"describe": {
"columns": [
{
@@ -14,7 +14,7 @@
"type_info": "Blob"
},
{
"name": "worktree_path",
"name": "container_ref",
"ordinal": 2,
"type_info": "Text"
},
@@ -34,7 +34,7 @@
"type_info": "Text"
},
{
"name": "executor",
"name": "base_coding_agent!",
"ordinal": 6,
"type_info": "Text"
},
@@ -85,8 +85,8 @@
"nullable": [
true,
false,
false,
false,
true,
true,
false,
true,
true,
@@ -100,5 +100,5 @@
false
]
},
"hash": "6e8b860b14decfc2227dc57213f38442943d3fbef5c8418fd6b634c6e0f5e2ea"
"hash": "1174eecd9f26565a4f4e1e367b5d7c90b4d19b793e496c2e01593f32c5101f24"
}


@@ -0,0 +1,12 @@
{
"db_name": "SQLite",
"query": "UPDATE task_attempts SET container_ref = $1, updated_at = $2 WHERE id = $3",
"describe": {
"columns": [],
"parameters": {
"Right": 3
},
"nullable": []
},
"hash": "129f898c089030e5ce8c41ff43fd28f213b1c78fc2cf97698da877ff91d6c086"
}


@@ -0,0 +1,12 @@
{
"db_name": "SQLite",
"query": "DELETE FROM tasks WHERE id = $1",
"describe": {
"columns": [],
"parameters": {
"Right": 1
},
"nullable": []
},
"hash": "1e339e959f8d2cdac13b3e2b452d2f718c0fd6cf6202d5c9139fb1afda123d29"
}


@@ -1,6 +1,6 @@
{
"db_name": "SQLite",
"query": "SELECT id AS \"id!: Uuid\",\n task_id AS \"task_id!: Uuid\",\n worktree_path,\n branch,\n base_branch,\n merge_commit,\n executor,\n pr_url,\n pr_number,\n pr_status,\n pr_merged_at AS \"pr_merged_at: DateTime<Utc>\",\n worktree_deleted AS \"worktree_deleted!: bool\",\n setup_completed_at AS \"setup_completed_at: DateTime<Utc>\",\n created_at AS \"created_at!: DateTime<Utc>\",\n updated_at AS \"updated_at!: DateTime<Utc>\"\n FROM task_attempts\n WHERE task_id = $1\n ORDER BY created_at DESC",
"query": "SELECT id AS \"id!: Uuid\",\n task_id AS \"task_id!: Uuid\",\n container_ref,\n branch,\n merge_commit,\n base_branch,\n base_coding_agent AS \"base_coding_agent!\",\n pr_url,\n pr_number,\n pr_status,\n pr_merged_at AS \"pr_merged_at: DateTime<Utc>\",\n worktree_deleted AS \"worktree_deleted!: bool\",\n setup_completed_at AS \"setup_completed_at: DateTime<Utc>\",\n created_at AS \"created_at!: DateTime<Utc>\",\n updated_at AS \"updated_at!: DateTime<Utc>\"\n FROM task_attempts\n WHERE rowid = $1",
"describe": {
"columns": [
{
@@ -14,7 +14,7 @@
"type_info": "Blob"
},
{
"name": "worktree_path",
"name": "container_ref",
"ordinal": 2,
"type_info": "Text"
},
@@ -24,17 +24,17 @@
"type_info": "Text"
},
{
"name": "base_branch",
"name": "merge_commit",
"ordinal": 4,
"type_info": "Text"
},
{
"name": "merge_commit",
"name": "base_branch",
"ordinal": 5,
"type_info": "Text"
},
{
"name": "executor",
"name": "base_coding_agent!",
"ordinal": 6,
"type_info": "Text"
},
@@ -85,12 +85,12 @@
"nullable": [
true,
false,
false,
false,
false,
true,
true,
true,
false,
true,
true,
true,
true,
true,
@@ -100,5 +100,5 @@
false
]
},
"hash": "a9e93d5b09b29faf66e387e4d7596a792d81e75c4d3726e83c2963e8d7c9b56f"
"hash": "1f1850b240af8edf2a05ad4a250c78331f69f3637f4b8a554898b9e6ba5bba37"
}


@@ -1,6 +1,6 @@
{
"db_name": "SQLite",
"query": "\n SELECT ta.id as \"attempt_id!: Uuid\", ta.worktree_path, p.git_repo_path as \"git_repo_path!\"\n FROM task_attempts ta\n JOIN tasks t ON ta.task_id = t.id\n JOIN projects p ON t.project_id = p.id\n WHERE ta.task_id = $1\n ",
"query": "\n SELECT ta.id as \"attempt_id!: Uuid\", ta.container_ref, p.git_repo_path as \"git_repo_path!\"\n FROM task_attempts ta\n JOIN tasks t ON ta.task_id = t.id\n JOIN projects p ON t.project_id = p.id\n WHERE ta.task_id = $1\n ",
"describe": {
"columns": [
{
@@ -9,7 +9,7 @@
"type_info": "Blob"
},
{
"name": "worktree_path",
"name": "container_ref",
"ordinal": 1,
"type_info": "Text"
},
@@ -24,9 +24,9 @@
},
"nullable": [
true,
false,
true,
false
]
},
"hash": "4049ca413b285a05aca6b25385e9c8185575f01e9069e4e8581aa45d713f612f"
"hash": "216193a63f7b0fb788566b63f56d83ee3d344a5c85e1a5999247b6a44f3ae390"
}


@@ -0,0 +1,38 @@
{
"db_name": "SQLite",
"query": "SELECT \n execution_id as \"execution_id!: Uuid\",\n logs,\n byte_size,\n inserted_at as \"inserted_at!: DateTime<Utc>\"\n FROM execution_process_logs \n WHERE execution_id = $1",
"describe": {
"columns": [
{
"name": "execution_id!: Uuid",
"ordinal": 0,
"type_info": "Blob"
},
{
"name": "logs",
"ordinal": 1,
"type_info": "Text"
},
{
"name": "byte_size",
"ordinal": 2,
"type_info": "Integer"
},
{
"name": "inserted_at!: DateTime<Utc>",
"ordinal": 3,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
true,
false,
false,
false
]
},
"hash": "2ec7648202fc6f496b97d9486cf9fd3c59fdba73c168628784f0a09488b80528"
}


@@ -0,0 +1,74 @@
{
"db_name": "SQLite",
"query": "SELECT \n ep.id as \"id!: Uuid\", \n ep.task_attempt_id as \"task_attempt_id!: Uuid\", \n ep.run_reason as \"run_reason!: ExecutionProcessRunReason\",\n ep.executor_action as \"executor_action!: sqlx::types::Json<ExecutorActionField>\",\n ep.status as \"status!: ExecutionProcessStatus\",\n ep.exit_code,\n ep.started_at as \"started_at!: DateTime<Utc>\",\n ep.completed_at as \"completed_at?: DateTime<Utc>\",\n ep.created_at as \"created_at!: DateTime<Utc>\", \n ep.updated_at as \"updated_at!: DateTime<Utc>\"\n FROM execution_processes ep\n JOIN task_attempts ta ON ep.task_attempt_id = ta.id\n JOIN tasks t ON ta.task_id = t.id\n WHERE ep.status = 'running' \n AND ep.run_reason = 'devserver'\n AND t.project_id = $1\n ORDER BY ep.created_at ASC",
"describe": {
"columns": [
{
"name": "id!: Uuid",
"ordinal": 0,
"type_info": "Blob"
},
{
"name": "task_attempt_id!: Uuid",
"ordinal": 1,
"type_info": "Blob"
},
{
"name": "run_reason!: ExecutionProcessRunReason",
"ordinal": 2,
"type_info": "Text"
},
{
"name": "executor_action!: sqlx::types::Json<ExecutorActionField>",
"ordinal": 3,
"type_info": "Text"
},
{
"name": "status!: ExecutionProcessStatus",
"ordinal": 4,
"type_info": "Text"
},
{
"name": "exit_code",
"ordinal": 5,
"type_info": "Integer"
},
{
"name": "started_at!: DateTime<Utc>",
"ordinal": 6,
"type_info": "Text"
},
{
"name": "completed_at?: DateTime<Utc>",
"ordinal": 7,
"type_info": "Text"
},
{
"name": "created_at!: DateTime<Utc>",
"ordinal": 8,
"type_info": "Text"
},
{
"name": "updated_at!: DateTime<Utc>",
"ordinal": 9,
"type_info": "Text"
}
],
"parameters": {
"Right": 1
},
"nullable": [
true,
false,
false,
false,
false,
true,
false,
true,
false,
false
]
},
"hash": "3baa595eadaa8c720da7c185c5fce08f973355fd7809e2caaf966d207bcb7b4b"
}


@@ -0,0 +1,12 @@
{
"db_name": "SQLite",
"query": "UPDATE executor_sessions \n SET summary = $1, updated_at = $2 \n WHERE execution_process_id = $3",
"describe": {
"columns": [],
"parameters": {
"Right": 3
},
"nullable": []
},
"hash": "4a52af0e7eedb3662a05b23e9a0c74c08d6c255ef598bb8ec3ff9a67f2344ab1"
}


@@ -0,0 +1,12 @@
{
"db_name": "SQLite",
"query": "INSERT INTO execution_process_logs (execution_id, logs, byte_size, inserted_at)\n VALUES ($1, $2, $3, datetime('now', 'subsec'))\n ON CONFLICT (execution_id) DO UPDATE\n SET logs = logs || $2,\n byte_size = byte_size + $3,\n inserted_at = datetime('now', 'subsec')",
"describe": {
"columns": [],
"parameters": {
"Right": 3
},
"nullable": []
},
"hash": "56238751ac9cab8bd97ad787143d91f54c47089c8e732ef80c3d1e85dfba1430"
}
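The upsert above appends streamed output to a single row per execution. A minimal sketch of how it might be invoked from Rust via sqlx (illustrative only; the helper name is an assumption, and the query! macro needs sqlx's offline cache or DATABASE_URL at build time):

// Illustrative helper (assumed name): append a chunk of process output to
// execution_process_logs using the upsert recorded above.
async fn append_process_logs(
    pool: &sqlx::SqlitePool,
    execution_id: uuid::Uuid,
    chunk: &str,
) -> Result<(), sqlx::Error> {
    let byte_size = chunk.len() as i64;
    sqlx::query!(
        r#"INSERT INTO execution_process_logs (execution_id, logs, byte_size, inserted_at)
           VALUES ($1, $2, $3, datetime('now', 'subsec'))
           ON CONFLICT (execution_id) DO UPDATE
           SET logs = logs || $2,
               byte_size = byte_size + $3,
               inserted_at = datetime('now', 'subsec')"#,
        execution_id,
        chunk,
        byte_size
    )
    .execute(pool)
    .await?;
    Ok(())
}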


@@ -1,10 +1,10 @@
{
"db_name": "SQLite",
"query": "SELECT COUNT(*) as count FROM task_attempts WHERE worktree_path = ?",
"query": "SELECT EXISTS(SELECT 1 FROM task_attempts WHERE container_ref = ?) as \"exists!: bool\"",
"describe": {
"columns": [
{
"name": "count",
"name": "exists!: bool",
"ordinal": 0,
"type_info": "Integer"
}
@@ -16,5 +16,5 @@
false
]
},
"hash": "93a1605f90e9672dad29b472b6ad85fa9a55ea3ffa5abcb8724b09d61be254ca"
"hash": "62836ddbbe22ea720063ac2b8d3f5efa39bf018b01b7a1f5ff6eefc9e4c55445"
}
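Switching this query from COUNT(*) to EXISTS lets SQLite stop at the first matching row. A small sketch of how the new query might be consumed (illustrative; the function name is an assumption):

// Illustrative helper (assumed name): check whether any task attempt already
// references the given container_ref.
async fn container_ref_exists(
    pool: &sqlx::SqlitePool,
    container_ref: &str,
) -> Result<bool, sqlx::Error> {
    sqlx::query_scalar!(
        r#"SELECT EXISTS(SELECT 1 FROM task_attempts WHERE container_ref = ?) as "exists!: bool""#,
        container_ref
    )
    .fetch_one(pool)
    .await
}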

Some files were not shown because too many files have changed in this diff.