From 7533f120794d27994829becb1e38fc78a642c275 Mon Sep 17 00:00:00 2001
From: Louis Knight-Webb
Date: Wed, 9 Jul 2025 16:49:24 +0100
Subject: [PATCH] Refactor task attempt model (#109)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* ## Task 1 Complete: GitService Extraction

I've successfully completed **Task 1: Extract Git Operations to GitService**. Here's what was accomplished:

### ✅ What was implemented:

1. **Created GitService** (`backend/src/services/task_execution/git_service.rs`) with the following methods:
   - `create_worktree()` - Creates git worktrees with branches
   - `merge_changes()` - Merges worktree changes back to the main repository
   - `rebase_branch()` - Rebases the worktree branch onto a new base
   - `get_diff()` - Gets the diff between worktree and base (simplified implementation)
   - `delete_file()` - Deletes files from the repository
   - `get_branch_status()` - Gets branch status relative to the base branch
   - `get_default_branch_name()` - Gets the default branch name
   - `branch_exists()` - Checks if a branch exists
   - `remove_worktree()` - Cleans up worktrees
   - `has_uncommitted_changes()` - Checks for uncommitted changes

2. **Extracted Git Operations** from `task_attempt.rs`:
   - Moved worktree creation logic to GitService
   - Moved merge operations to GitService
   - Moved rebase operations to GitService
   - Updated TaskAttempt to use GitService instead of direct git2 calls

3. **Added Comprehensive Error Handling**:
   - Created a `GitServiceError` enum with proper error types
   - Added conversion from `GitServiceError` to `TaskAttemptError`
   - Added extensive logging throughout git operations

4.
**Maintained Code Quality**:
   - All operations pass `cargo check` ✅
   - Proper error propagation and handling
   - Clean separation of concerns
   - Maintained existing API compatibility

### 🔧 Key Refactoring Benefits:

- **Maintainability**: Git operations are now centralized in a dedicated service
- **Testability**: GitService can be easily unit tested in isolation
- **Reusability**: Other parts of the codebase can now use GitService
- **Error Handling**: Comprehensive error handling with proper logging
- **Code Organization**: Clear separation between business logic and git operations

The TaskAttempt model now uses GitService for all git operations instead of direct git2 calls, making the code much more maintainable and organized.

**Ready for your review before continuing to the next endpoint!** 🚀

* ## Migration Complete! ✅

I have successfully completed the git2 logic migration from `backend/src/models/task_attempt.rs` to `backend/src/services/task_execution/git_service.rs`. Here's a summary of what was accomplished:

### Major Changes:

#### 1. **Enhanced Diff Operations**
- Moved the complex `get_diff` method logic to GitService as `get_enhanced_diff()`
- Migrated all diff processing helpers including `generate_git_diff_chunks` and `process_unstaged_file`
- TaskAttempt now uses GitService for all diff operations

#### 2. **Worktree Recreation**
- Moved `recreate_worktree_from_branch` to GitService
- Simplified TaskAttempt to delegate to GitService for worktree recreation

#### 3. **File Operations**
- Enhanced GitService with a `delete_file_and_commit()` method
- TaskAttempt now uses GitService for file deletion operations

#### 4. **GitHub Operations**
- Moved `extract_github_repo_info` and `push_branch_to_github` to GitService
- Added `get_github_repo_info()` and `push_to_github()` methods to GitService
- Updated TaskAttempt to use GitService for GitHub operations

#### 5.
**Code Cleanup**
- Removed ~300 lines of complex git2 operations from TaskAttempt
- Cleaned up unused imports and helper methods
- Maintained the Repository import for the complex branch status method (which has different logic than GitService)

### Result:

- **`cargo check` passes successfully** ✅
- Code maintainability significantly improved through separation of concerns
- Database operations remain in TaskAttempt; Git operations are centralized in GitService
- All original functionality preserved while improving code organization

The migration separates concerns cleanly: TaskAttempt focuses on database operations and orchestration, while GitService handles all low-level Git operations. This makes the codebase much more maintainable and easier to test.

* Perfect! I have successfully completed the GitHub service extraction refactor. Here's a summary of what was accomplished:

## Task Completed: Extract GitHub Service

### ✅ What was accomplished:

1. **Created `backend/src/services/task_execution/github_service.rs`** with a comprehensive GitHubService
2. **Implemented all required methods:**
   - `create_pr()` - Creates pull requests on GitHub
   - `update_pr_status()` - Checks and updates PR status
   - `extract_repo_info()` - Parses GitHub repository URLs
   - `push_branch()` - Verifies branch existence on GitHub
3. **Added robust error handling** with a custom GitHubServiceError enum
4. **Implemented retry logic** with exponential backoff for API calls
5. **Added proper authentication handling** through the Octocrab client
6.
**Updated existing code** to use the new service:
   - Modified `TaskAttempt::create_github_pr()` to use GitHubService
   - Updated `PrMonitorService` to use GitHubService for status checks
   - Removed duplicate GitHub operations from the TaskAttempt model

### 🔧 Key Features:

- **Retry mechanism**: Configurable exponential backoff for failed API calls
- **Comprehensive error handling**: Specific error types for different failure modes
- **Repository validation**: Checks branch existence and repository access
- **Merge tracking**: Includes the merge commit SHA in PR information
- **Test coverage**: Unit tests for URL parsing functionality

### ✅ Compilation Success:

- `cargo check` passes successfully
- All GitHub operations are now centralized in the GitHubService
- Proper separation of concerns between Git operations (GitService) and GitHub API operations (GitHubService)

The refactor improves maintainability by centralizing all GitHub API operations into a dedicated service with proper error handling, retry logic, and a clean interface.

* Perfect! The fix has been applied successfully and the code compiles without errors.

## ✅ Fix Applied

The issue has been resolved by updating the `TaskAttempt::create_github_pr()` method to:

1. **Use GitService to read the remote URL** from the local git repository at `project.git_repo_path`
2. **Extract owner/repo info** using the existing `GitService::get_github_repo_info()` method
3. **Create a GitHubRepoInfo struct** from the extracted data
4. **Pass the GitHubRepoInfo** to the GitHubService methods

This maintains proper separation of concerns:

- **GitService**: Handles local git operations (reading remote URLs from `.git/config`)
- **GitHubService**: Handles GitHub API operations and URL parsing

The fix should now correctly handle local repository paths like `/Users/louisknight-webb/Documents/GitHub/mission-control` by reading the GitHub remote URL from the git configuration, then using that URL for GitHub API operations.
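As an illustration of the owner/repo extraction step, a minimal sketch follows. The function name and the exact URL forms handled are assumptions for illustration only; the real logic lives in `GitService::get_github_repo_info()`.

```rust
/// Hypothetical sketch of extracting (owner, repo) from a GitHub remote URL.
/// Handles SSH (git@github.com:owner/repo.git) and HTTPS remotes; a plain
/// filesystem path matches neither prefix and yields None, which is why the
/// fix reads the remote URL from the repository config first.
fn parse_github_remote(url: &str) -> Option<(String, String)> {
    let rest = url
        .strip_prefix("git@github.com:")
        .or_else(|| url.strip_prefix("https://github.com/"))
        .or_else(|| url.strip_prefix("http://github.com/"))?;
    // A trailing ".git" is optional on both remote forms.
    let rest = rest.strip_suffix(".git").unwrap_or(rest);
    let mut parts = rest.splitn(2, '/');
    match (parts.next(), parts.next()) {
        (Some(owner), Some(repo)) if !owner.is_empty() && !repo.is_empty() => {
            Some((owner.to_string(), repo.to_string()))
        }
        _ => None,
    }
}

fn main() {
    // A local checkout path is not a remote URL and cannot be parsed.
    assert_eq!(
        parse_github_remote("/Users/someone/Documents/GitHub/some-repo"),
        None
    );
    let parsed = parse_github_remote("git@github.com:some-org/some-repo.git");
    assert_eq!(parsed, Some(("some-org".into(), "some-repo".into())));
    println!("parsed: {:?}", parsed);
}
```

This is only a sketch of the shape of the parsing; the service's actual implementation and error types are shown in the diff below.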
You can now test creating a PR again - it should work correctly!

* Perfect! I have successfully completed the refactoring task. Here's a summary of what was accomplished:

## ✅ Task Completed: Extract Process Service

I have successfully extracted the process execution logic from the TaskAttempt model into a dedicated ProcessService. Here's what was accomplished:

### **What was refactored:**

1. **Created ProcessService** (`backend/src/services/task_execution/process_service.rs`)
   - Extracted all process execution logic from the TaskAttempt model
   - Implemented methods: `start_execution()`, `start_coding_agent()`, `start_dev_server()`, `start_followup_execution()`, `start_process_execution()`
   - Moved helper methods: process spawning, monitoring, record creation, executor resolution
   - Handles process lifecycle management and monitoring integration

2. **Updated TaskAttempt model** (`backend/src/models/task_attempt.rs`)
   - Removed ~350 lines of process execution logic
   - Replaced complex process methods with simple delegation to ProcessService
   - Made the utility methods `ensure_worktree_exists()` and `recreate_worktree_from_branch()` public for ProcessService access
   - Cleaned up unused imports

3.
**Updated service integration**
   - Added ProcessService to the task_execution module exports
   - Updated execution_monitor.rs to use ProcessService instead of TaskAttempt
   - Fixed all imports and dependencies across the codebase

### **Benefits achieved:**

- **Separation of concerns**: Process execution logic is now separated from data model logic
- **Code maintainability**: The TaskAttempt model is much cleaner and focused on data operations
- **Better organization**: Process execution functionality is centralized in ProcessService
- **Reduced complexity**: TaskAttempt went from 1800+ lines to ~1450 lines
- **Backward compatibility**: The public API remains unchanged - TaskAttempt methods delegate to ProcessService

### **Verification:**

- ✅ `cargo check` passes successfully
- ✅ `cargo build` completes without errors
- ✅ All existing functionality preserved through the delegation pattern
- ✅ Only expected warnings about unused code (normal during refactoring)

The refactoring successfully extracted the ProcessService while maintaining all existing functionality and ensuring the codebase remains in a buildable state.

* Perfect! I have successfully completed the task of extracting notification functionality from `execution_monitor.rs` into a separate `NotificationService`.
Here's a summary of what was accomplished:

## ✅ Task Completed: Extract Notification Service

### **Created NotificationService**

- **Location**: `backend/src/services/task_execution/notification_service.rs`
- **Methods**:
  - `send_push_notification()` - Cross-platform push notifications
  - `play_sound_notification()` - Cross-platform sound alerts
  - `notify()` - Unified method for both sound and push notifications

### **Cross-Platform Support**

- **macOS**: Uses `osascript` for notifications and `afplay` for sounds
- **Linux**: Uses the `notify-rust` crate for notifications, `paplay`/`aplay` for sounds
- **Windows**: Uses PowerShell toast notifications and Media.SoundPlayer
- **WSL**: Properly converts paths and uses Windows notification systems

### **Configuration & Non-Blocking Design**

- **NotificationConfig**: Configurable sound/push notification settings
- **Non-blocking**: All notifications are fire-and-forget using `spawn()`
- **Sound file support**: Integrates with the existing SoundFile enum and caching

### **Refactored execution_monitor.rs**

- **Removed duplicate code**: 195+ lines of notification logic moved to the service
- **Clean integration**: Updated to use NotificationService with config
- **Maintained functionality**: All existing notification behavior preserved

### **Code Quality**

- **Tested**: Includes unit tests for service functionality
- **Documentation**: Comprehensive documentation and error handling
- **Modular**: Clean separation of concerns with a proper service interface
- **Cargo check passes**: All code compiles successfully

The notification system is now properly encapsulated, reusable, and maintainable while supporting all the cross-platform requirements and configuration options.

* Perfect! The implementation compiles successfully.
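As an aside, the cross-platform dispatch that the notification service centralizes can be sketched as a simple mapping. The command choices below come from the cross-platform summary above; the actual fire-and-forget spawning and WSL path conversion are omitted, and the `Platform` enum is an illustrative stand-in for the `cfg!`/`is_wsl2()` checks the real code uses.

```rust
/// Illustrative stand-in for the cfg!-based platform detection.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Platform {
    MacOs,
    Linux,
    Wsl2,
    Windows,
}

/// Program used to play a sound alert on each platform.
fn sound_command(platform: Platform) -> &'static str {
    match platform {
        Platform::MacOs => "afplay",
        // Native Linux tries paplay first, then falls back to aplay.
        Platform::Linux => "paplay",
        // Windows and WSL2 both route through Media.SoundPlayer in PowerShell.
        Platform::Windows | Platform::Wsl2 => "powershell.exe",
    }
}

/// Mechanism used for a push notification on each platform.
fn push_command(platform: Platform) -> &'static str {
    match platform {
        Platform::MacOs => "osascript",
        // Native Linux goes through the notify-rust crate rather than a CLI.
        Platform::Linux => "notify-rust",
        Platform::Windows | Platform::Wsl2 => "powershell.exe",
    }
}

fn main() {
    for p in [Platform::MacOs, Platform::Linux, Platform::Wsl2, Platform::Windows] {
        println!("{:?}: sound={}, push={}", p, sound_command(p), push_command(p));
    }
}
```

Keeping this mapping in one service is what lets `execution_monitor.rs` drop its duplicated platform branches.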
The refactor helper has been implemented with:

## Summary

✅ **TaskAttemptContext struct** - Contains `task_attempt`, `task`, and `project` for consolidated access

✅ **TaskAttempt::load_context method** - Performs JOIN validation and loads all three entities in one operation

✅ **Compilation verified** - Code compiles without errors

The `load_context` method provides a single function to replace the repetitive validation pattern used throughout methods like `merge_changes`, `get_diff`, `get_branch_status`, and `rebase_attempt`. It ensures proper relationships between attempt→task→project with a single JOIN query, then loads the complete context.

* Perfect! The TaskAttempt refactor is complete and the project builds successfully.

## Summary

✅ **Successfully refactored 7 methods** to use the new `TaskAttempt::load_context` helper:

- `merge_changes` - Reduced from 47 lines to 11 lines
- `get_diff` - Reduced from 41 lines to 11 lines
- `get_branch_status` - Reduced from 45 lines to 11 lines
- `rebase_attempt` - Reduced from 41 lines to 11 lines
- `delete_file` - Reduced from 35 lines to 10 lines
- `create_github_pr` - Reduced from 41 lines to 11 lines
- `get_execution_state` - Reduced from 41 lines to 11 lines

✅ **Eliminated repetitive validation code** - Each method previously had its own complex SQL query with JOINs and separate `find_by_id` calls

✅ **Maintained exact same functionality** - All methods work identically but with much cleaner, more maintainable code

✅ **Full project builds successfully** - Both frontend and backend compile without errors

The refactor replaces **~300 lines of repetitive validation code** with a single, well-tested helper method, making the codebase significantly more maintainable and reducing the chance of bugs from inconsistent validation patterns.

* Successfully moved all services from `backend/src/services/task_execution` to `backend/src/services`.
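Shorn of sqlx, the chained validation that `load_context` consolidates can be illustrated in memory. The structs and ids below are simplified stand-ins for the real models, not the actual implementation (which is shown in the diff below); the point is that one lookup enforces the attempt→task→project chain instead of each endpoint repeating its own checks.

```rust
// Simplified stand-ins for the real sqlx models.
#[derive(Clone, Debug)]
struct Project { id: u32 }
#[derive(Clone, Debug)]
struct Task { id: u32, project_id: u32 }
#[derive(Clone, Debug)]
struct TaskAttempt { id: u32, task_id: u32 }

#[derive(Debug)]
struct TaskAttemptContext {
    task_attempt: TaskAttempt,
    task: Task,
    project: Project,
}

fn load_context(
    attempts: &[TaskAttempt],
    tasks: &[Task],
    projects: &[Project],
    attempt_id: u32,
    task_id: u32,
    project_id: u32,
) -> Option<TaskAttemptContext> {
    // Mirrors the JOIN: every link in the chain must match, or the whole
    // lookup fails, so callers can't pass a mismatched task/project pair.
    let task_attempt = attempts
        .iter()
        .find(|a| a.id == attempt_id && a.task_id == task_id)?;
    let task = tasks
        .iter()
        .find(|t| t.id == task_id && t.project_id == project_id)?;
    let project = projects.iter().find(|p| p.id == project_id)?;
    Some(TaskAttemptContext {
        task_attempt: task_attempt.clone(),
        task: task.clone(),
        project: project.clone(),
    })
}

fn main() {
    let projects = vec![Project { id: 1 }];
    let tasks = vec![Task { id: 10, project_id: 1 }];
    let attempts = vec![TaskAttempt { id: 100, task_id: 10 }];

    // A valid chain loads the full context in one call.
    assert!(load_context(&attempts, &tasks, &projects, 100, 10, 1).is_some());
    // A mismatched project id fails the whole lookup, as the JOIN would.
    assert!(load_context(&attempts, &tasks, &projects, 100, 10, 2).is_none());
    println!("ok");
}
```

In the real helper the three `find` calls collapse into the single JOIN query plus two `find_by_id` loads, but the all-or-nothing semantics are the same.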
Updated all imports and references, and the backend compiles without errors.

* fmt

* Clippy

* Clippy

* Clippy

* fmt

* Fix git path error

* Update task running calculation

* fmt

* Remove unused

* Prettier
---
 ...efc477446de19e61c273bddbc852e8a2eb990.json |   98 +
 backend/Cargo.toml                            |    3 +
 backend/src/execution_monitor.rs              |  221 +-
 backend/src/models/task_attempt.rs            | 2155 ++---------------
 backend/src/services/git_service.rs           | 1024 ++++++++
 backend/src/services/github_service.rs        |  274 +++
 backend/src/services/mod.rs                   |    8 +
 backend/src/services/notification_service.rs  |  263 ++
 backend/src/services/pr_monitor.rs            |  131 +-
 backend/src/services/process_service.rs       |  686 ++++++
 frontend/src/hooks/useTaskDetails.ts          |   31 +-
 11 files changed, 2624 insertions(+), 2270 deletions(-)
 create mode 100644 backend/.sqlx/query-9b601854d9adaf1e30ad2d2bed4efc477446de19e61c273bddbc852e8a2eb990.json
 create mode 100644 backend/src/services/git_service.rs
 create mode 100644 backend/src/services/github_service.rs
 create mode 100644 backend/src/services/notification_service.rs
 create mode 100644 backend/src/services/process_service.rs

diff --git a/backend/.sqlx/query-9b601854d9adaf1e30ad2d2bed4efc477446de19e61c273bddbc852e8a2eb990.json b/backend/.sqlx/query-9b601854d9adaf1e30ad2d2bed4efc477446de19e61c273bddbc852e8a2eb990.json
new file mode 100644
index 00000000..3452060a
--- /dev/null
+++ b/backend/.sqlx/query-9b601854d9adaf1e30ad2d2bed4efc477446de19e61c273bddbc852e8a2eb990.json
@@ -0,0 +1,98 @@
+{
+  "db_name": "SQLite",
+  "query": "SELECT ta.id AS \"id!: Uuid\",\n ta.task_id AS \"task_id!: Uuid\",\n ta.worktree_path,\n ta.branch,\n ta.base_branch,\n ta.merge_commit,\n ta.executor,\n ta.pr_url,\n ta.pr_number,\n ta.pr_status,\n ta.pr_merged_at AS \"pr_merged_at: DateTime\",\n ta.worktree_deleted AS \"worktree_deleted!: bool\",\n ta.created_at AS \"created_at!: DateTime\",\n ta.updated_at AS \"updated_at!: DateTime\"\n FROM task_attempts ta\n JOIN tasks t ON ta.task_id = t.id\n JOIN projects p ON 
t.project_id = p.id\n WHERE ta.id = $1 AND t.id = $2 AND p.id = $3", + "describe": { + "columns": [ + { + "name": "id!: Uuid", + "ordinal": 0, + "type_info": "Blob" + }, + { + "name": "task_id!: Uuid", + "ordinal": 1, + "type_info": "Blob" + }, + { + "name": "worktree_path", + "ordinal": 2, + "type_info": "Text" + }, + { + "name": "branch", + "ordinal": 3, + "type_info": "Text" + }, + { + "name": "base_branch", + "ordinal": 4, + "type_info": "Text" + }, + { + "name": "merge_commit", + "ordinal": 5, + "type_info": "Text" + }, + { + "name": "executor", + "ordinal": 6, + "type_info": "Text" + }, + { + "name": "pr_url", + "ordinal": 7, + "type_info": "Text" + }, + { + "name": "pr_number", + "ordinal": 8, + "type_info": "Integer" + }, + { + "name": "pr_status", + "ordinal": 9, + "type_info": "Text" + }, + { + "name": "pr_merged_at: DateTime", + "ordinal": 10, + "type_info": "Datetime" + }, + { + "name": "worktree_deleted!: bool", + "ordinal": 11, + "type_info": "Bool" + }, + { + "name": "created_at!: DateTime", + "ordinal": 12, + "type_info": "Text" + }, + { + "name": "updated_at!: DateTime", + "ordinal": 13, + "type_info": "Text" + } + ], + "parameters": { + "Right": 3 + }, + "nullable": [ + true, + false, + false, + false, + false, + true, + true, + true, + true, + true, + true, + false, + false, + false + ] + }, + "hash": "9b601854d9adaf1e30ad2d2bed4efc477446de19e61c273bddbc852e8a2eb990" +} diff --git a/backend/Cargo.toml b/backend/Cargo.toml index 4df64c8d..c77f6774 100644 --- a/backend/Cargo.toml +++ b/backend/Cargo.toml @@ -50,6 +50,9 @@ strip-ansi-escapes = "0.2.1" urlencoding = "2.1.3" lazy_static = "1.4" +[dev-dependencies] +tempfile = "3.8" + [build-dependencies] dotenv = "0.15" ts-rs = { version = "9.0", features = ["uuid-impl", "chrono-impl"] } diff --git a/backend/src/execution_monitor.rs b/backend/src/execution_monitor.rs index 1ac6f72d..4e2e0a45 100644 --- a/backend/src/execution_monitor.rs +++ b/backend/src/execution_monitor.rs @@ -1,5 +1,3 @@ -use 
std::sync::OnceLock; - use git2::Repository; use uuid::Uuid; @@ -11,6 +9,7 @@ use crate::{ task_attempt::{TaskAttempt, TaskAttemptStatus}, task_attempt_activity::{CreateTaskAttemptActivity, TaskAttemptActivity}, }, + services::{NotificationConfig, NotificationService, ProcessService}, utils::worktree_manager::WorktreeManager, }; @@ -392,199 +391,6 @@ async fn cleanup_orphaned_worktree_directory( Ok(()) } -/// Cache for WSL root path from PowerShell -static WSL_ROOT_PATH_CACHE: OnceLock> = OnceLock::new(); - -/// Get WSL root path via PowerShell (cached) -async fn get_wsl_root_path() -> Option { - if let Some(cached) = WSL_ROOT_PATH_CACHE.get() { - return cached.clone(); - } - - match tokio::process::Command::new("powershell.exe") - .arg("-c") - .arg("(Get-Location).Path -replace '^.*::', ''") - .current_dir("/") - .output() - .await - { - Ok(output) => { - match String::from_utf8(output.stdout) { - Ok(pwd_str) => { - let pwd = pwd_str.trim(); - tracing::info!("WSL root path detected: {}", pwd); - - // Cache the result - let _ = WSL_ROOT_PATH_CACHE.set(Some(pwd.to_string())); - return Some(pwd.to_string()); - } - Err(e) => { - tracing::error!("Failed to parse PowerShell pwd output as UTF-8: {}", e); - } - } - } - Err(e) => { - tracing::error!("Failed to execute PowerShell pwd command: {}", e); - } - } - - // Cache the failure result - let _ = WSL_ROOT_PATH_CACHE.set(None); - None -} - -/// Convert WSL path to Windows UNC path for PowerShell -async fn wsl_to_windows_path(wsl_path: &std::path::Path) -> Option { - let path_str = wsl_path.to_string_lossy(); - - // Relative paths work fine as-is in PowerShell - if !path_str.starts_with('/') { - tracing::debug!("Using relative path as-is: {}", path_str); - return Some(path_str.to_string()); - } - - // Get cached WSL root path from PowerShell - if let Some(wsl_root) = get_wsl_root_path().await { - // Simply concatenate WSL root with the absolute path - PowerShell doesn't mind / - let windows_path = format!("{}{}", 
wsl_root, path_str); - tracing::debug!("WSL path converted: {} -> {}", path_str, windows_path); - Some(windows_path) - } else { - tracing::error!( - "Failed to determine WSL root path for conversion: {}", - path_str - ); - None - } -} - -/// Play a system sound notification -async fn play_sound_notification(sound_file: &crate::models::config::SoundFile) { - let file_path = match sound_file.get_path().await { - Ok(path) => path, - Err(e) => { - tracing::error!("Failed to create cached sound file: {}", e); - return; - } - }; - - // Use platform-specific sound notification - // Note: spawn() calls are intentionally not awaited - sound notifications should be fire-and-forget - if cfg!(target_os = "macos") { - let _ = tokio::process::Command::new("afplay") - .arg(&file_path) - .spawn(); - } else if cfg!(target_os = "linux") && !crate::utils::is_wsl2() { - // Try different Linux audio players - if tokio::process::Command::new("paplay") - .arg(&file_path) - .spawn() - .is_ok() - { - // Success with paplay - } else if tokio::process::Command::new("aplay") - .arg(&file_path) - .spawn() - .is_ok() - { - // Success with aplay - } else { - // Try system bell as fallback - let _ = tokio::process::Command::new("echo") - .arg("-e") - .arg("\\a") - .spawn(); - } - } else if cfg!(target_os = "windows") || (cfg!(target_os = "linux") && crate::utils::is_wsl2()) - { - // Convert WSL path to Windows path if in WSL2 - let file_path = if crate::utils::is_wsl2() { - if let Some(windows_path) = wsl_to_windows_path(&file_path).await { - windows_path - } else { - file_path.to_string_lossy().to_string() - } - } else { - file_path.to_string_lossy().to_string() - }; - - let _ = tokio::process::Command::new("powershell.exe") - .arg("-c") - .arg(format!( - r#"(New-Object Media.SoundPlayer "{}").PlaySync()"#, - file_path - )) - .spawn(); - } -} - -/// Send a cross-platform push notification -async fn send_push_notification(title: &str, message: &str) { - if cfg!(target_os = "macos") { - let script 
= format!( - r#"display notification "{message}" with title "{title}" sound name "Glass""#, - message = message.replace('"', r#"\""#), - title = title.replace('"', r#"\""#) - ); - - let _ = tokio::process::Command::new("osascript") - .arg("-e") - .arg(script) - .spawn(); - } else if cfg!(target_os = "linux") && !crate::utils::is_wsl2() { - // Linux: Use notify-rust crate - fire and forget - use notify_rust::Notification; - - let title = title.to_string(); - let message = message.to_string(); - - let _handle = tokio::task::spawn_blocking(move || { - if let Err(e) = Notification::new() - .summary(&title) - .body(&message) - .timeout(10000) - .show() - { - tracing::error!("Failed to send Linux notification: {}", e); - } - }); - drop(_handle); // Don't await, fire-and-forget - } else if cfg!(target_os = "windows") || (cfg!(target_os = "linux") && crate::utils::is_wsl2()) - { - // Windows and WSL2: Use PowerShell toast notification script - let script_path = match crate::utils::get_powershell_script().await { - Ok(path) => path, - Err(e) => { - tracing::error!("Failed to get PowerShell script: {}", e); - return; - } - }; - - // Convert WSL path to Windows path if in WSL2 - let script_path_str = if crate::utils::is_wsl2() { - if let Some(windows_path) = wsl_to_windows_path(&script_path).await { - windows_path - } else { - script_path.to_string_lossy().to_string() - } - } else { - script_path.to_string_lossy().to_string() - }; - - let _ = tokio::process::Command::new("powershell.exe") - .arg("-NoProfile") - .arg("-ExecutionPolicy") - .arg("Bypass") - .arg("-File") - .arg(script_path_str) - .arg("-Title") - .arg(title) - .arg("-Message") - .arg(message) - .spawn(); - } -} - pub async fn execution_monitor(app_state: AppState) { let mut interval = tokio::time::interval(tokio::time::Duration::from_secs(5)); let mut cleanup_interval = tokio::time::interval(tokio::time::Duration::from_secs(1800)); // 30 minutes @@ -878,7 +684,7 @@ async fn handle_setup_completion( if let 
Ok(Some(task)) = Task::find_by_id(&app_state.db_pool, task_attempt.task_id).await { // Start the coding agent - if let Err(e) = TaskAttempt::start_coding_agent( + if let Err(e) = ProcessService::start_coding_agent( &app_state.db_pool, app_state, task_attempt_id, @@ -981,21 +787,28 @@ async fn handle_coding_agent_completion( None }; - // Play sound notification if enabled - if app_state.get_sound_alerts_enabled().await { - let sound_file = app_state.get_sound_file().await; - play_sound_notification(&sound_file).await; - } + // Send notifications if enabled + let sound_enabled = app_state.get_sound_alerts_enabled().await; + let push_enabled = app_state.get_push_notifications_enabled().await; - // Send push notification if enabled - if app_state.get_push_notifications_enabled().await { + if sound_enabled || push_enabled { + let sound_file = app_state.get_sound_file().await; + let notification_config = NotificationConfig { + sound_enabled, + push_enabled, + }; + + let notification_service = NotificationService::new(notification_config); let notification_title = "Task Complete"; let notification_message = if success { "Task execution completed successfully" } else { "Task execution failed" }; - send_push_notification(notification_title, notification_message).await; + + notification_service + .notify(notification_title, notification_message, &sound_file) + .await; } // Get task attempt to access worktree path for committing changes diff --git a/backend/src/models/task_attempt.rs b/backend/src/models/task_attempt.rs index 45f03bb8..d38d95b9 100644 --- a/backend/src/models/task_attempt.rs +++ b/backend/src/models/task_attempt.rs @@ -1,27 +1,28 @@ use std::path::Path; use chrono::{DateTime, Utc}; -use git2::{BranchType, Error as GitError, RebaseOptions, Repository, WorktreeAddOptions}; +use git2::{BranchType, Error as GitError, Repository}; use serde::{Deserialize, Serialize}; use sqlx::{FromRow, SqlitePool, Type}; -use tracing::{debug, info}; +use tracing::info; use 
ts_rs::TS; use uuid::Uuid; use super::{project::Project, task::Task}; -use crate::{ - executor::Executor, - utils::{shell::get_shell_command, worktree_manager::WorktreeManager}, +use crate::services::{ + CreatePrRequest, GitHubRepoInfo, GitHubService, GitHubServiceError, GitService, + GitServiceError, ProcessService, }; #[derive(Debug)] pub enum TaskAttemptError { Database(sqlx::Error), Git(GitError), + GitService(GitServiceError), + GitHubService(GitHubServiceError), TaskNotFound, ProjectNotFound, ValidationError(String), - BranchNotFound(String), } impl std::fmt::Display for TaskAttemptError { @@ -29,10 +30,11 @@ impl std::fmt::Display for TaskAttemptError { match self { TaskAttemptError::Database(e) => write!(f, "Database error: {}", e), TaskAttemptError::Git(e) => write!(f, "Git error: {}", e), + TaskAttemptError::GitService(e) => write!(f, "Git service error: {}", e), + TaskAttemptError::GitHubService(e) => write!(f, "GitHub service error: {}", e), TaskAttemptError::TaskNotFound => write!(f, "Task not found"), TaskAttemptError::ProjectNotFound => write!(f, "Project not found"), TaskAttemptError::ValidationError(e) => write!(f, "Validation error: {}", e), - TaskAttemptError::BranchNotFound(e) => write!(f, "Branch not found: {}", e), } } } @@ -51,6 +53,18 @@ impl From for TaskAttemptError { } } +impl From for TaskAttemptError { + fn from(err: GitServiceError) -> Self { + TaskAttemptError::GitService(err) + } +} + +impl From for TaskAttemptError { + fn from(err: GitHubServiceError) -> Self { + TaskAttemptError::GitHubService(err) + } +} + #[derive(Debug, Clone, Type, Serialize, Deserialize, PartialEq, TS)] #[sqlx(type_name = "task_attempt_status", rename_all = "lowercase")] #[serde(rename_all = "lowercase")] @@ -176,7 +190,66 @@ pub struct TaskAttemptState { pub coding_agent_process_id: Option, } +#[derive(Debug)] +pub struct TaskAttemptContext { + pub task_attempt: TaskAttempt, + pub task: Task, + pub project: Project, +} + impl TaskAttempt { + /// Load task 
attempt with full validation - ensures task_attempt belongs to task and task belongs to project + pub async fn load_context( + pool: &SqlitePool, + attempt_id: Uuid, + task_id: Uuid, + project_id: Uuid, + ) -> Result { + // Single query with JOIN validation to ensure proper relationships + let task_attempt = sqlx::query_as!( + TaskAttempt, + r#"SELECT ta.id AS "id!: Uuid", + ta.task_id AS "task_id!: Uuid", + ta.worktree_path, + ta.branch, + ta.base_branch, + ta.merge_commit, + ta.executor, + ta.pr_url, + ta.pr_number, + ta.pr_status, + ta.pr_merged_at AS "pr_merged_at: DateTime", + ta.worktree_deleted AS "worktree_deleted!: bool", + ta.created_at AS "created_at!: DateTime", + ta.updated_at AS "updated_at!: DateTime" + FROM task_attempts ta + JOIN tasks t ON ta.task_id = t.id + JOIN projects p ON t.project_id = p.id + WHERE ta.id = $1 AND t.id = $2 AND p.id = $3"#, + attempt_id, + task_id, + project_id + ) + .fetch_optional(pool) + .await? + .ok_or(TaskAttemptError::TaskNotFound)?; + + // Load task and project (we know they exist due to JOIN validation) + let task = Task::find_by_id(pool, task_id) + .await? + .ok_or(TaskAttemptError::TaskNotFound)?; + + let project = Project::find_by_id(pool, project_id) + .await? + .ok_or(TaskAttemptError::ProjectNotFound)?; + + Ok(TaskAttemptContext { + task_attempt, + task, + project, + }) + } + /// Helper function to mark a worktree as deleted in the database pub async fn mark_worktree_deleted( pool: &SqlitePool, @@ -348,95 +421,23 @@ impl TaskAttempt { .await? 
.ok_or(TaskAttemptError::ProjectNotFound)?; + // Create GitService instance + let git_service = GitService::new(&project.git_repo_path)?; + // Determine the resolved base branch name first let resolved_base_branch = if let Some(ref base_branch) = data.base_branch { base_branch.clone() } else { // Default to current HEAD branch name or "main" - let repo = Repository::open(&project.git_repo_path)?; - let default_branch = match repo.head() { - Ok(head_ref) => head_ref.shorthand().unwrap_or("main").to_string(), - Err(e) - if e.class() == git2::ErrorClass::Reference - && e.code() == git2::ErrorCode::UnbornBranch => - { - "main".to_string() // Repository has no commits yet - } - Err(_) => "main".to_string(), // Fallback - }; - default_branch + git_service.get_default_branch_name()? }; - // Solve scoping issues - { - // Create the worktree using git2 - let repo = Repository::open(&project.git_repo_path)?; - - // Choose base reference, based on whether user specified base branch - let base_reference = if let Some(base_branch) = data.base_branch.clone() { - let branch = repo.find_branch(base_branch.as_str(), BranchType::Local)?; - branch.into_reference() - } else { - // Handle new repositories without any commits - match repo.head() { - Ok(head_ref) => head_ref, - Err(e) - if e.class() == git2::ErrorClass::Reference - && e.code() == git2::ErrorCode::UnbornBranch => - { - // Repository has no commits yet, create an initial commit - let signature = repo.signature().unwrap_or_else(|_| { - // Fallback if no Git config is set - git2::Signature::now("Vibe Kanban", "noreply@vibekanban.com") - .expect("Failed to create fallback signature") - }); - let tree_id = { - let tree_builder = repo.treebuilder(None)?; - tree_builder.write()? 
- }; - let tree = repo.find_tree(tree_id)?; - - // Create initial commit on main branch - let _commit_id = repo.commit( - Some("refs/heads/main"), - &signature, - &signature, - "Initial commit", - &tree, - &[], - )?; - - // Set HEAD to point to main branch - repo.set_head("refs/heads/main")?; - - // Return reference to main branch - repo.find_reference("refs/heads/main")? - } - Err(e) => return Err(e.into()), - } - }; - - // Create branch - repo.branch( - &task_attempt_branch, - &base_reference.peel_to_commit()?, - false, - )?; - - let branch = repo.find_branch(&task_attempt_branch, BranchType::Local)?; - let branch_ref = branch.into_reference(); - let mut worktree_opts = WorktreeAddOptions::new(); - worktree_opts.reference(Some(&branch_ref)); - - // Create the worktree directory if it doesn't exist - if let Some(parent) = worktree_path.parent() { - std::fs::create_dir_all(parent) - .map_err(|e| TaskAttemptError::Git(GitError::from_str(&e.to_string())))?; - } - - // Create the worktree at the specified path - repo.worktree(&task_attempt_branch, &worktree_path, Some(&worktree_opts))?; - } + // Create the worktree using GitService + git_service.create_worktree( + &task_attempt_branch, + &worktree_path, + data.base_branch.as_deref(), + )?; // Insert the record into the database Ok(sqlx::query_as!( @@ -480,129 +481,33 @@ impl TaskAttempt { Ok(result.is_some()) } - /// Perform the actual merge operation (synchronous) + /// Perform the actual merge operation using GitService fn perform_merge_operation( worktree_path: &str, main_repo_path: &str, branch_name: &str, task_title: &str, ) -> Result { - // Open the main repository - let main_repo = Repository::open(main_repo_path)?; + let git_service = GitService::new(main_repo_path)?; + let worktree_path = Path::new(worktree_path); - // Open the worktree repository to get the latest commit - let worktree_repo = Repository::open(worktree_path)?; - let worktree_head = worktree_repo.head()?; - let worktree_commit = 
-            worktree_head.peel_to_commit()?;
-
-        // Verify the branch exists in the main repo
-        main_repo
-            .find_branch(branch_name, BranchType::Local)
-            .map_err(|_| TaskAttemptError::BranchNotFound(branch_name.to_string()))?;
-
-        // Get the current HEAD of the main repo (usually main/master)
-        let main_head = main_repo.head()?;
-        let main_commit = main_head.peel_to_commit()?;
-
-        // Get the signature for the merge commit
-        let signature = main_repo.signature()?;
-
-        // Get the tree from the worktree commit and find it in the main repo
-        let worktree_tree_id = worktree_commit.tree_id();
-        let main_tree = main_repo.find_tree(worktree_tree_id)?;
-
-        // Find the worktree commit in the main repo
-        let main_worktree_commit = main_repo.find_commit(worktree_commit.id())?;
-
-        // Create a merge commit
-        let merge_commit_id = main_repo.commit(
-            Some("HEAD"),                                    // Update HEAD
-            &signature,                                      // Author
-            &signature,                                      // Committer
-            &format!("Merge: {} (vibe-kanban)", task_title), // Message using task title
-            &main_tree,                                      // Use the tree from main repo
-            &[&main_commit, &main_worktree_commit], // Parents: main HEAD and worktree commit
-        )?;
-
-        info!("Created merge commit: {}", merge_commit_id);
-
-        Ok(merge_commit_id.to_string())
+        git_service
+            .merge_changes(worktree_path, branch_name, task_title)
+            .map_err(TaskAttemptError::from)
     }
 
-    /// Perform the actual git rebase operations (synchronous)
+    /// Perform the actual git rebase operations using GitService
     fn perform_rebase_operation(
         worktree_path: &str,
         main_repo_path: &str,
        new_base_branch: Option<String>,
    ) -> Result<String, TaskAttemptError> {
-        // Open the worktree repository
-        let worktree_repo = Repository::open(worktree_path)?;
+        let git_service = GitService::new(main_repo_path)?;
+        let worktree_path = Path::new(worktree_path);
 
-        // Open the main repository to get the target base commit
-        let main_repo = Repository::open(main_repo_path)?;
-
-        // Get the target base branch reference
-        let base_branch_name = new_base_branch.unwrap_or_else(|| {
-            main_repo
-                .head()
-                .ok()
-                .and_then(|head| head.shorthand().map(|s| s.to_string()))
-                .unwrap_or_else(|| "main".to_string())
-        });
-
-        // Check if the specified base branch exists in the main repo
-        let base_branch = main_repo
-            .find_branch(&base_branch_name, BranchType::Local)
-            .map_err(|_| TaskAttemptError::BranchNotFound(base_branch_name.clone()))?;
-
-        let base_commit_id = base_branch.get().peel_to_commit()?.id();
-
-        // Get the HEAD commit of the worktree (the changes to rebase)
-        let head = worktree_repo.head()?;
-
-        // Set up rebase
-        let mut rebase_opts = RebaseOptions::new();
-        let signature = worktree_repo.signature()?;
-
-        // Start the rebase
-        let head_annotated = worktree_repo.reference_to_annotated_commit(&head)?;
-        let base_annotated = worktree_repo.find_annotated_commit(base_commit_id)?;
-
-        let mut rebase = worktree_repo.rebase(
-            Some(&head_annotated),
-            Some(&base_annotated),
-            None, // onto (use upstream if None)
-            Some(&mut rebase_opts),
-        )?;
-
-        // Process each rebase operation
-        while let Some(operation) = rebase.next() {
-            let _operation = operation?;
-
-            // Check for conflicts
-            let index = worktree_repo.index()?;
-            if index.has_conflicts() {
-                // For now, abort the rebase on conflicts
-                rebase.abort()?;
-                return Err(TaskAttemptError::Git(GitError::from_str(
-                    "Rebase failed due to conflicts. Please resolve conflicts manually.",
-                )));
-            }
-
-            // Commit the rebased operation
-            rebase.commit(None, &signature, None)?;
-        }
-
-        // Finish the rebase
-        rebase.finish(None)?;
-
-        // Get the final commit ID after rebase
-        let final_head = worktree_repo.head()?;
-        let final_commit = final_head.peel_to_commit()?;
-
-        info!("Rebase completed. New HEAD: {}", final_commit.id());
-
-        Ok(final_commit.id().to_string())
+        git_service
+            .rebase_branch(worktree_path, new_base_branch.as_deref())
+            .map_err(TaskAttemptError::from)
     }
 
     /// Merge the worktree changes back to the main repository
@@ -612,42 +517,8 @@ impl TaskAttempt {
         task_id: Uuid,
         project_id: Uuid,
     ) -> Result<String, TaskAttemptError> {
-        // Get the task attempt with validation
-        let attempt = sqlx::query_as!(
-            TaskAttempt,
-            r#"SELECT ta.id AS "id!: Uuid",
-                      ta.task_id AS "task_id!: Uuid",
-                      ta.worktree_path,
-                      ta.branch,
-                      ta.base_branch,
-                      ta.merge_commit,
-                      ta.executor,
-                      ta.pr_url,
-                      ta.pr_number,
-                      ta.pr_status,
-                      ta.pr_merged_at AS "pr_merged_at: DateTime<Utc>",
-                      ta.worktree_deleted AS "worktree_deleted!: bool",
-                      ta.created_at AS "created_at!: DateTime<Utc>",
-                      ta.updated_at AS "updated_at!: DateTime<Utc>"
-               FROM task_attempts ta
-               JOIN tasks t ON ta.task_id = t.id
-               WHERE ta.id = $1 AND t.id = $2 AND t.project_id = $3"#,
-            attempt_id,
-            task_id,
-            project_id
-        )
-        .fetch_optional(pool)
-        .await?
-        .ok_or(TaskAttemptError::TaskNotFound)?;
-
-        // Get the task and project
-        let task = Task::find_by_id(pool, task_id)
-            .await?
-            .ok_or(TaskAttemptError::TaskNotFound)?;
-
-        let project = Project::find_by_id(pool, project_id)
-            .await?
-            .ok_or(TaskAttemptError::ProjectNotFound)?;
+        // Load context with full validation
+        let ctx = TaskAttempt::load_context(pool, attempt_id, task_id, project_id).await?;
 
         // Ensure worktree exists (recreate if needed for cold task support)
         let worktree_path =
@@ -656,9 +527,9 @@ impl TaskAttempt {
         // Perform the actual merge operation
         let merge_commit_id = Self::perform_merge_operation(
             &worktree_path,
-            &project.git_repo_path,
-            &attempt.branch,
-            &task.title,
+            &ctx.project.git_repo_path,
+            &ctx.task_attempt.branch,
+            &ctx.task.title,
         )?;
 
         // Update the task attempt with the merge commit
@@ -681,108 +552,7 @@ impl TaskAttempt {
         task_id: Uuid,
         project_id: Uuid,
     ) -> Result<(), TaskAttemptError> {
-        use crate::models::task::{Task, TaskStatus};
-
-        // Load required entities
-        let (task_attempt, project) =
-            Self::load_execution_context(pool, attempt_id, project_id).await?;
-
-        // Update task status to indicate execution has started
-        Task::update_status(pool, task_id, project_id, TaskStatus::InProgress).await?;
-
-        // Determine execution sequence based on project configuration
-        if Self::should_run_setup_script(&project) {
-            Self::start_setup_script(
-                pool,
-                app_state,
-                attempt_id,
-                task_id,
-                &project,
-                &task_attempt.worktree_path,
-            )
-            .await
-        } else {
-            Self::start_coding_agent(pool, app_state, attempt_id, task_id, project_id).await
-        }
-    }
-
-    /// Load the execution context (task attempt and project) with validation
-    async fn load_execution_context(
-        pool: &SqlitePool,
-        attempt_id: Uuid,
-        project_id: Uuid,
-    ) -> Result<(TaskAttempt, Project), TaskAttemptError> {
-        let task_attempt = TaskAttempt::find_by_id(pool, attempt_id)
-            .await?
-            .ok_or(TaskAttemptError::TaskNotFound)?;
-
-        let project = Project::find_by_id(pool, project_id)
-            .await?
-            .ok_or(TaskAttemptError::ProjectNotFound)?;
-
-        Ok((task_attempt, project))
-    }
-
-    /// Check if setup script should be executed
-    fn should_run_setup_script(project: &Project) -> bool {
-        project
-            .setup_script
-            .as_ref()
-            .map(|script| !script.trim().is_empty())
-            .unwrap_or(false)
-    }
-
-    /// Start the setup script execution
-    async fn start_setup_script(
-        pool: &SqlitePool,
-        app_state: &crate::app_state::AppState,
-        attempt_id: Uuid,
-        task_id: Uuid,
-        project: &Project,
-        worktree_path: &str,
-    ) -> Result<(), TaskAttemptError> {
-        let setup_script = project.setup_script.as_ref().unwrap();
-
-        Self::start_process_execution(
-            pool,
-            app_state,
-            attempt_id,
-            task_id,
-            crate::executor::ExecutorType::SetupScript(setup_script.clone()),
-            "Starting setup script".to_string(),
-            TaskAttemptStatus::SetupRunning,
-            crate::models::execution_process::ExecutionProcessType::SetupScript,
-            worktree_path,
-        )
-        .await
-    }
-
-    /// Start the coding agent after setup is complete or if no setup is needed
-    pub async fn start_coding_agent(
-        pool: &SqlitePool,
-        app_state: &crate::app_state::AppState,
-        attempt_id: Uuid,
-        task_id: Uuid,
-        _project_id: Uuid,
-    ) -> Result<(), TaskAttemptError> {
-        let task_attempt = TaskAttempt::find_by_id(pool, attempt_id)
-            .await?
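> **Editor's note:** the setup-script gate removed above (`should_run_setup_script`) is a small piece of logic worth keeping visible: a setup script runs only when it is present *and* non-blank. A minimal, self-contained sketch of that check, using a stand-in `Project` struct rather than the real model, looks like this:

```rust
// Stand-in for the real `Project` model; only the field the gate reads.
struct Project {
    setup_script: Option<String>,
}

// Mirrors the removed helper: `None` and whitespace-only scripts are skipped.
fn should_run_setup_script(project: &Project) -> bool {
    project
        .setup_script
        .as_ref()
        .map(|script| !script.trim().is_empty())
        .unwrap_or(false)
}

fn main() {
    assert!(!should_run_setup_script(&Project { setup_script: None }));
    assert!(!should_run_setup_script(&Project { setup_script: Some("   ".to_string()) }));
    assert!(should_run_setup_script(&Project { setup_script: Some("npm ci".to_string()) }));
}
```

After this patch the same decision presumably lives inside `ProcessService::start_execution`; the behavior is unchanged.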
-            .ok_or(TaskAttemptError::TaskNotFound)?;
-
-        let executor_config = Self::resolve_executor_config(&task_attempt.executor);
-
-        Self::start_process_execution(
-            pool,
-            app_state,
-            attempt_id,
-            task_id,
-            crate::executor::ExecutorType::CodingAgent(executor_config),
-            "Starting executor".to_string(),
-            TaskAttemptStatus::ExecutorRunning,
-            crate::models::execution_process::ExecutionProcessType::CodingAgent,
-            &task_attempt.worktree_path,
-        )
-        .await
+        ProcessService::start_execution(pool, app_state, attempt_id, task_id, project_id).await
     }
 
     /// Start a dev server for this task attempt
@@ -793,54 +563,7 @@ impl TaskAttempt {
         task_id: Uuid,
         project_id: Uuid,
     ) -> Result<(), TaskAttemptError> {
-        // Ensure worktree exists (recreate if needed for cold task support)
-        let worktree_path =
-            Self::ensure_worktree_exists(pool, attempt_id, project_id, "dev server").await?;
-
-        // Get the project to access the dev_script
-        let project = crate::models::project::Project::find_by_id(pool, project_id)
-            .await?
-            .ok_or(TaskAttemptError::TaskNotFound)?;
-
-        let dev_script = project.dev_script.ok_or_else(|| {
-            TaskAttemptError::ValidationError(
-                "No dev script configured for this project".to_string(),
-            )
-        })?;
-
-        if dev_script.trim().is_empty() {
-            return Err(TaskAttemptError::ValidationError(
-                "Dev script is empty".to_string(),
-            ));
-        }
-
-        let result = Self::start_process_execution(
-            pool,
-            app_state,
-            attempt_id,
-            task_id,
-            crate::executor::ExecutorType::DevServer(dev_script),
-            "Starting dev server".to_string(),
-            TaskAttemptStatus::ExecutorRunning, // Dev servers don't create activities, just use generic status
-            crate::models::execution_process::ExecutionProcessType::DevServer,
-            &worktree_path,
-        )
-        .await;
-
-        if result.is_ok() {
-            app_state
-                .track_analytics_event(
-                    "dev_server_started",
-                    Some(serde_json::json!({
-                        "task_id": task_id.to_string(),
-                        "project_id": project_id.to_string(),
-                        "attempt_id": attempt_id.to_string()
-                    })),
-                )
-                .await;
-        }
-
-        result
+        ProcessService::start_dev_server(pool, app_state, attempt_id, task_id, project_id).await
     }
 
     /// Start a follow-up execution using the same executor type as the first process
@@ -853,174 +576,14 @@
         project_id: Uuid,
         prompt: &str,
     ) -> Result<Uuid, TaskAttemptError> {
-        use crate::models::{
-            executor_session::ExecutorSession,
-            task::{Task, TaskStatus},
-        };
-
-        // Get the current task attempt to check if worktree is deleted
-        let current_attempt = TaskAttempt::find_by_id(pool, attempt_id)
-            .await?
-            .ok_or(TaskAttemptError::TaskNotFound)?;
-
-        let actual_attempt_id = attempt_id;
-
-        if current_attempt.worktree_deleted {
-            info!(
-                "Resurrecting deleted attempt {} (branch: {}) for followup execution - maintaining session continuity",
-                attempt_id, current_attempt.branch
-            );
-        } else {
-            info!(
-                "Continuing followup execution on active attempt {} (branch: {})",
-                attempt_id, current_attempt.branch
-            );
-        }
-
-        // Update task status to indicate follow-up execution has started
-        Task::update_status(pool, task_id, project_id, TaskStatus::InProgress).await?;
-
-        // Ensure worktree exists (recreate if needed for cold task support)
-        // This will resurrect the worktree at the exact same path for session continuity
-        let worktree_path =
-            Self::ensure_worktree_exists(pool, actual_attempt_id, project_id, "followup").await?;
-
-        // Find the most recent coding agent execution process to get the executor type
-        // Look up processes from the ORIGINAL attempt to find the session
-        let execution_processes =
-            crate::models::execution_process::ExecutionProcess::find_by_task_attempt_id(
-                pool, attempt_id,
-            )
-            .await?;
-        let most_recent_coding_agent = execution_processes
-            .iter()
-            .rev() // Reverse to get most recent first (since they're ordered by created_at ASC)
-            .find(|p| {
-                matches!(
-                    p.process_type,
-                    crate::models::execution_process::ExecutionProcessType::CodingAgent
-                )
-            })
-            .ok_or_else(|| {
-                tracing::error!(
-                    "No previous coding agent execution found for task attempt {}. Found {} processes: {:?}",
-                    attempt_id,
-                    execution_processes.len(),
-                    execution_processes.iter().map(|p| format!("{:?}", p.process_type)).collect::<Vec<_>>()
-                );
-                TaskAttemptError::ValidationError("No previous coding agent execution found for follow-up".to_string())
-            })?;
-
-        // Get the executor session to find the session ID
-        // This looks up the session from the original attempt's processes
-        let executor_session =
-            ExecutorSession::find_by_execution_process_id(pool, most_recent_coding_agent.id)
-                .await?
-                .ok_or_else(|| {
-                    tracing::error!(
-                        "No executor session found for execution process {} (task attempt {})",
-                        most_recent_coding_agent.id,
-                        attempt_id
-                    );
-                    TaskAttemptError::ValidationError(
-                        "No executor session found for follow-up".to_string(),
-                    )
-                })?;
-
-        // Determine the executor config from the stored executor_type
-        let executor_config = match most_recent_coding_agent.executor_type.as_deref() {
-            Some("claude") => crate::executor::ExecutorConfig::Claude,
-            Some("amp") => crate::executor::ExecutorConfig::Amp,
-            Some("gemini") => crate::executor::ExecutorConfig::Gemini,
-            Some("echo") => crate::executor::ExecutorConfig::Echo,
-            Some("opencode") => crate::executor::ExecutorConfig::Opencode,
-            _ => {
-                tracing::error!(
-                    "Invalid or missing executor type '{}' for execution process {} (task attempt {})",
-                    most_recent_coding_agent.executor_type.as_deref().unwrap_or("None"),
-                    most_recent_coding_agent.id,
-                    attempt_id
-                );
-                return Err(TaskAttemptError::ValidationError(format!(
-                    "Invalid executor type for follow-up: {}",
-                    most_recent_coding_agent
-                        .executor_type
-                        .as_deref()
-                        .unwrap_or("None")
-                )));
-            }
-        };
-
-        // Try to use follow-up with session ID, but fall back to new session if it fails
-        let followup_executor = if let Some(session_id) = &executor_session.session_id {
-            // First try with session ID for continuation
-            debug!(
-                "SESSION_FOLLOWUP: Attempting follow-up execution with session ID: {} (attempt: {}, worktree: {})",
-                session_id, attempt_id, worktree_path
-            );
-            crate::executor::ExecutorType::FollowUpCodingAgent {
-                config: executor_config.clone(),
-                session_id: executor_session.session_id.clone(),
-                prompt: prompt.to_string(),
-            }
-        } else {
-            // No session ID available, start new session
-            tracing::warn!(
-                "SESSION_FOLLOWUP: No session ID available for follow-up execution on attempt {}, starting new session (worktree: {})",
-                attempt_id, worktree_path
-            );
-            crate::executor::ExecutorType::CodingAgent(executor_config.clone())
-        };
-
-        // Try to start the follow-up execution
-        let execution_result = Self::start_process_execution(
-            pool,
-            app_state,
-            actual_attempt_id,
-            task_id,
-            followup_executor,
-            "Starting follow-up executor".to_string(),
-            TaskAttemptStatus::ExecutorRunning,
-            crate::models::execution_process::ExecutionProcessType::CodingAgent,
-            &worktree_path,
+        ProcessService::start_followup_execution(
+            pool, app_state, attempt_id, task_id, project_id, prompt,
         )
-        .await;
-
-        // If follow-up execution failed and we tried to use a session ID,
-        // fall back to a new session
-        if execution_result.is_err() && executor_session.session_id.is_some() {
-            tracing::warn!(
-                "SESSION_FOLLOWUP: Follow-up execution with session ID '{}' failed for attempt {}, falling back to new session. Error: {:?}",
-                executor_session.session_id.as_ref().unwrap(),
-                attempt_id,
-                execution_result.as_ref().err()
-            );
-
-            // Create a new session instead of trying to resume
-            let new_session_executor = crate::executor::ExecutorType::CodingAgent(executor_config);
-
-            Self::start_process_execution(
-                pool,
-                app_state,
-                actual_attempt_id,
-                task_id,
-                new_session_executor,
-                "Starting new executor session (follow-up session failed)".to_string(),
-                TaskAttemptStatus::ExecutorRunning,
-                crate::models::execution_process::ExecutionProcessType::CodingAgent,
-                &worktree_path,
-            )
-            .await?;
-        } else {
-            // Either it succeeded or we already tried without session ID
-            execution_result?;
-        }
-
-        Ok(actual_attempt_id)
+        .await
     }
 
     /// Ensure worktree exists, recreating from branch if needed (cold task support)
-    async fn ensure_worktree_exists(
+    pub async fn ensure_worktree_exists(
         pool: &SqlitePool,
         attempt_id: Uuid,
         project_id: Uuid,
@@ -1057,7 +620,7 @@ impl TaskAttempt {
     }
 
     /// Recreate a worktree from an existing branch (for cold task support)
-    async fn recreate_worktree_from_branch(
+    pub async fn recreate_worktree_from_branch(
         pool: &SqlitePool,
         task_attempt: &TaskAttempt,
         project_id: Uuid,
@@ -1066,425 +629,18 @@
             .await?
             .ok_or(TaskAttemptError::ProjectNotFound)?;
 
-        let repo = Repository::open(&project.git_repo_path)?;
-
-        // Verify branch exists before proceeding
-        let _branch = repo
-            .find_branch(&task_attempt.branch, BranchType::Local)
-            .map_err(|_| TaskAttemptError::BranchNotFound(task_attempt.branch.clone()))?;
-        drop(_branch);
+        // Create GitService instance
+        let git_service = GitService::new(&project.git_repo_path)?;
 
         // Use the stored worktree path from database - this ensures we recreate in the exact same location
         // where Claude originally created its session, maintaining session continuity
         let stored_worktree_path = std::path::PathBuf::from(&task_attempt.worktree_path);
-        let stored_worktree_path_str = stored_worktree_path.to_string_lossy().to_string();
 
-        info!(
-            "Recreating worktree using stored path: {} (branch: {})",
-            stored_worktree_path_str, task_attempt.branch
-        );
-
-        // Clean up existing directory if it exists to avoid git sync issues
-        // If we're in this method, we intend to recreate anyway
-        if stored_worktree_path.exists() {
-            debug!(
-                "Removing existing directory before worktree recreation: {}",
-                stored_worktree_path_str
-            );
-            std::fs::remove_dir_all(&stored_worktree_path).map_err(|e| {
-                TaskAttemptError::Git(GitError::from_str(&format!(
-                    "Failed to remove existing worktree directory {}: {}",
-                    stored_worktree_path_str, e
-                )))
-            })?;
-        }
-
-        // Ensure parent directory exists - critical for session continuity
-        if let Some(parent) = stored_worktree_path.parent() {
-            std::fs::create_dir_all(parent).map_err(|e| {
-                TaskAttemptError::Git(GitError::from_str(&format!(
-                    "Failed to create parent directory for worktree path {}: {}",
-                    stored_worktree_path_str, e
-                )))
-            })?;
-        }
-
-        // Create worktree at the exact same path as originally created
-        // Use the WorktreeManager for proper synchronization and race condition handling
-
-        // Extract repository path for WorktreeManager
-        let repo_path = repo
-            .workdir()
-            .ok_or_else(|| {
-                TaskAttemptError::Git(GitError::from_str("Repository has no working directory"))
-            })?
-            .to_str()
-            .ok_or_else(|| {
-                TaskAttemptError::Git(GitError::from_str("Repository path is not valid UTF-8"))
-            })?
-            .to_string();
-
-        WorktreeManager::ensure_worktree_exists(
-            repo_path,
-            task_attempt.branch.clone(),
-            stored_worktree_path,
-        )
-        .await?;
-
-        info!(
-            "Successfully recreated worktree at original path: {} -> {}",
-            task_attempt.branch, stored_worktree_path_str
-        );
-        Ok(stored_worktree_path_str)
-    }
-
-    /// Resolve executor configuration from string name
-    fn resolve_executor_config(executor_name: &Option<String>) -> crate::executor::ExecutorConfig {
-        match executor_name.as_ref().map(|s| s.as_str()) {
-            Some("claude") => crate::executor::ExecutorConfig::Claude,
-            Some("amp") => crate::executor::ExecutorConfig::Amp,
-            Some("gemini") => crate::executor::ExecutorConfig::Gemini,
-            Some("opencode") => crate::executor::ExecutorConfig::Opencode,
-            _ => crate::executor::ExecutorConfig::Echo, // Default for "echo" or None
-        }
-    }
-
-    /// Unified function to start any type of process execution
-    #[allow(clippy::too_many_arguments)]
-    async fn start_process_execution(
-        pool: &SqlitePool,
-        app_state: &crate::app_state::AppState,
-        attempt_id: Uuid,
-        task_id: Uuid,
-        executor_type: crate::executor::ExecutorType,
-        activity_note: String,
-        activity_status: TaskAttemptStatus,
-        process_type: crate::models::execution_process::ExecutionProcessType,
-        worktree_path: &str,
-    ) -> Result<(), TaskAttemptError> {
-        let process_id = Uuid::new_v4();
-
-        // Create execution process record
-        let _execution_process = Self::create_execution_process_record(
-            pool,
-            attempt_id,
-            process_id,
-            &executor_type,
-            process_type.clone(),
-            worktree_path,
-        )
-        .await?;
-
-        // Create executor session for coding agents
-        if matches!(
-            process_type,
-            crate::models::execution_process::ExecutionProcessType::CodingAgent
-        ) {
-            // Extract follow-up prompt if this is a follow-up execution
-            let followup_prompt = match &executor_type {
-                crate::executor::ExecutorType::FollowUpCodingAgent { prompt, .. } => {
-                    Some(prompt.clone())
-                }
-                _ => None,
-            };
-            Self::create_executor_session_record(
-                pool,
-                attempt_id,
-                task_id,
-                process_id,
-                followup_prompt,
-            )
+        let result_path = git_service
+            .recreate_worktree_from_branch(&task_attempt.branch, &stored_worktree_path)
             .await?;
-        }
 
-        // Create activity record (skip for dev servers as they run in parallel)
-        if !matches!(
-            process_type,
-            crate::models::execution_process::ExecutionProcessType::DevServer
-        ) {
-            Self::create_activity_record(pool, process_id, activity_status.clone(), &activity_note)
-                .await?;
-        }
-
-        tracing::info!("Starting {} for task attempt {}", activity_note, attempt_id);
-
-        // Execute the process
-        let child = Self::execute_process(
-            &executor_type,
-            pool,
-            task_id,
-            attempt_id,
-            process_id,
-            worktree_path,
-        )
-        .await?;
-
-        // Register for monitoring
-        Self::register_for_monitoring(app_state, process_id, attempt_id, &process_type, child)
-            .await;
-
-        tracing::info!(
-            "Started execution {} for task attempt {}",
-            process_id,
-            attempt_id
-        );
-        Ok(())
-    }
-
-    /// Create execution process database record
-    async fn create_execution_process_record(
-        pool: &SqlitePool,
-        attempt_id: Uuid,
-        process_id: Uuid,
-        executor_type: &crate::executor::ExecutorType,
-        process_type: crate::models::execution_process::ExecutionProcessType,
-        worktree_path: &str,
-    ) -> Result<ExecutionProcess, TaskAttemptError> {
-        use crate::models::execution_process::{CreateExecutionProcess, ExecutionProcess};
-
-        let (shell_cmd, shell_arg) = get_shell_command();
-        let (command, args, executor_type_string) = match executor_type {
-            crate::executor::ExecutorType::SetupScript(_) => (
-                shell_cmd.to_string(),
-                Some(serde_json::to_string(&[shell_arg, "setup_script"]).unwrap()),
-                None, // Setup scripts don't have an executor type
-            ),
-            crate::executor::ExecutorType::DevServer(_) => (
-                shell_cmd.to_string(),
-                Some(serde_json::to_string(&[shell_arg, "dev_server"]).unwrap()),
-                None, // Dev servers don't have an executor type
-            ),
-            crate::executor::ExecutorType::CodingAgent(config) => {
-                let executor_type_str = match config {
-                    crate::executor::ExecutorConfig::Echo => "echo",
-                    crate::executor::ExecutorConfig::Claude => "claude",
-                    crate::executor::ExecutorConfig::Amp => "amp",
-                    crate::executor::ExecutorConfig::Gemini => "gemini",
-                    crate::executor::ExecutorConfig::Opencode => "opencode",
-                };
-                (
-                    "executor".to_string(),
-                    None,
-                    Some(executor_type_str.to_string()),
-                )
-            }
-            crate::executor::ExecutorType::FollowUpCodingAgent { config, .. } => {
-                let executor_type_str = match config {
-                    crate::executor::ExecutorConfig::Echo => "echo",
-                    crate::executor::ExecutorConfig::Claude => "claude",
-                    crate::executor::ExecutorConfig::Amp => "amp",
-                    crate::executor::ExecutorConfig::Gemini => "gemini",
-                    crate::executor::ExecutorConfig::Opencode => "opencode",
-                };
-                (
-                    "followup_executor".to_string(),
-                    None,
-                    Some(executor_type_str.to_string()),
-                )
-            }
-        };
-
-        let create_process = CreateExecutionProcess {
-            task_attempt_id: attempt_id,
-            process_type,
-            executor_type: executor_type_string,
-            command,
-            args,
-            working_directory: worktree_path.to_string(),
-        };
-
-        ExecutionProcess::create(pool, &create_process, process_id)
-            .await
-            .map_err(TaskAttemptError::from)
-    }
-
-    /// Create executor session record for coding agents
-    async fn create_executor_session_record(
-        pool: &SqlitePool,
-        attempt_id: Uuid,
-        task_id: Uuid,
-        process_id: Uuid,
-        followup_prompt: Option<String>,
-    ) -> Result<(), TaskAttemptError> {
-        use crate::models::executor_session::{CreateExecutorSession, ExecutorSession};
-
-        // Use follow-up prompt if provided, otherwise get the task to create prompt
-        let prompt = if let Some(followup_prompt) = followup_prompt {
-            followup_prompt
-        } else {
-            let task = Task::find_by_id(pool, task_id)
-                .await?
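> **Editor's note:** both removed helpers above map an executor name string to an `ExecutorConfig` (and back), with `Echo` as the catch-all default. A pure-std sketch of that mapping, with a local stand-in enum rather than the real `crate::executor::ExecutorConfig`, shows the round-trip behavior:

```rust
// Stand-in for `crate::executor::ExecutorConfig`; illustrative only.
#[derive(Debug, PartialEq)]
enum ExecutorConfig {
    Echo,
    Claude,
    Amp,
    Gemini,
    Opencode,
}

// Mirrors the removed `resolve_executor_config`: unknown or missing names
// fall back to the Echo executor.
fn resolve_executor_config(executor_name: &Option<String>) -> ExecutorConfig {
    match executor_name.as_deref() {
        Some("claude") => ExecutorConfig::Claude,
        Some("amp") => ExecutorConfig::Amp,
        Some("gemini") => ExecutorConfig::Gemini,
        Some("opencode") => ExecutorConfig::Opencode,
        _ => ExecutorConfig::Echo, // Default for "echo" or None
    }
}

fn main() {
    assert_eq!(resolve_executor_config(&Some("claude".to_string())), ExecutorConfig::Claude);
    assert_eq!(resolve_executor_config(&Some("bogus".to_string())), ExecutorConfig::Echo);
    assert_eq!(resolve_executor_config(&None), ExecutorConfig::Echo);
}
```

Centralizing this mapping in `ProcessService` removes the duplicate string matches that previously existed in `TaskAttempt`.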
-                .ok_or(TaskAttemptError::TaskNotFound)?;
-            format!("{}\n\n{}", task.title, task.description.unwrap_or_default())
-        };
-
-        let session_id = Uuid::new_v4();
-        let create_session = CreateExecutorSession {
-            task_attempt_id: attempt_id,
-            execution_process_id: process_id,
-            prompt: Some(prompt),
-        };
-
-        ExecutorSession::create(pool, &create_session, session_id)
-            .await
-            .map(|_| ())
-            .map_err(TaskAttemptError::from)
-    }
-
-    /// Create activity record for process start
-    async fn create_activity_record(
-        pool: &SqlitePool,
-        process_id: Uuid,
-        activity_status: TaskAttemptStatus,
-        activity_note: &str,
-    ) -> Result<(), TaskAttemptError> {
-        use crate::models::task_attempt_activity::{
-            CreateTaskAttemptActivity, TaskAttemptActivity,
-        };
-
-        let activity_id = Uuid::new_v4();
-        let create_activity = CreateTaskAttemptActivity {
-            execution_process_id: process_id,
-            status: Some(activity_status.clone()),
-            note: Some(activity_note.to_string()),
-        };
-
-        TaskAttemptActivity::create(pool, &create_activity, activity_id, activity_status)
-            .await
-            .map(|_| ())
-            .map_err(TaskAttemptError::from)
-    }
-
-    /// Execute the process based on type
-    async fn execute_process(
-        executor_type: &crate::executor::ExecutorType,
-        pool: &SqlitePool,
-        task_id: Uuid,
-        attempt_id: Uuid,
-        process_id: Uuid,
-        worktree_path: &str,
-    ) -> Result<command_group::AsyncGroupChild, TaskAttemptError> {
-        use crate::executors::{DevServerExecutor, SetupScriptExecutor};
-
-        let result = match executor_type {
-            crate::executor::ExecutorType::SetupScript(script) => {
-                let executor = SetupScriptExecutor {
-                    script: script.clone(),
-                };
-                executor
-                    .execute_streaming(pool, task_id, attempt_id, process_id, worktree_path)
-                    .await
-            }
-            crate::executor::ExecutorType::DevServer(script) => {
-                let executor = DevServerExecutor {
-                    script: script.clone(),
-                };
-                executor
-                    .execute_streaming(pool, task_id, attempt_id, process_id, worktree_path)
-                    .await
-            }
-            crate::executor::ExecutorType::CodingAgent(config) => {
-                let executor = config.create_executor();
-                executor
-                    .execute_streaming(pool, task_id, attempt_id, process_id, worktree_path)
-                    .await
-            }
-            crate::executor::ExecutorType::FollowUpCodingAgent {
-                config,
-                session_id,
-                prompt,
-            } => {
-                use crate::executors::{
-                    AmpFollowupExecutor, ClaudeFollowupExecutor, GeminiFollowupExecutor,
-                    OpencodeFollowupExecutor,
-                };
-
-                let executor: Box<dyn Executor> = match config {
-                    crate::executor::ExecutorConfig::Claude => {
-                        if let Some(sid) = session_id {
-                            Box::new(ClaudeFollowupExecutor {
-                                session_id: sid.clone(),
-                                prompt: prompt.clone(),
-                            })
-                        } else {
-                            return Err(TaskAttemptError::TaskNotFound); // No session ID for followup
-                        }
-                    }
-                    crate::executor::ExecutorConfig::Amp => {
-                        if let Some(tid) = session_id {
-                            Box::new(AmpFollowupExecutor {
-                                thread_id: tid.clone(),
-                                prompt: prompt.clone(),
-                            })
-                        } else {
-                            return Err(TaskAttemptError::TaskNotFound); // No thread ID for followup
-                        }
-                    }
-                    crate::executor::ExecutorConfig::Gemini => {
-                        if let Some(sid) = session_id {
-                            Box::new(GeminiFollowupExecutor {
-                                session_id: sid.clone(),
-                                prompt: prompt.clone(),
-                            })
-                        } else {
-                            return Err(TaskAttemptError::TaskNotFound); // No session ID for followup
-                        }
-                    }
-                    crate::executor::ExecutorConfig::Echo => {
-                        // Echo doesn't support followup, use regular echo
-                        config.create_executor()
-                    }
-                    crate::executor::ExecutorConfig::Opencode => {
-                        if let Some(sid) = session_id {
-                            Box::new(OpencodeFollowupExecutor {
-                                session_id: sid.clone(),
-                                prompt: prompt.clone(),
-                            })
-                        } else {
-                            return Err(TaskAttemptError::TaskNotFound); // No session ID for followup
-                        }
-                    }
-                };
-
-                executor
-                    .execute_streaming(pool, task_id, attempt_id, process_id, worktree_path)
-                    .await
-            }
-        };
-
-        result.map_err(|e| TaskAttemptError::Git(git2::Error::from_str(&e.to_string())))
-    }
-
-    /// Register process for monitoring
-    async fn register_for_monitoring(
-        app_state: &crate::app_state::AppState,
-        process_id: Uuid,
-        attempt_id: Uuid,
-        process_type: &crate::models::execution_process::ExecutionProcessType,
-        child: command_group::AsyncGroupChild,
-    ) {
-        let execution_type = match process_type {
-            crate::models::execution_process::ExecutionProcessType::SetupScript => {
-                crate::app_state::ExecutionType::SetupScript
-            }
-            crate::models::execution_process::ExecutionProcessType::CodingAgent => {
-                crate::app_state::ExecutionType::CodingAgent
-            }
-            crate::models::execution_process::ExecutionProcessType::DevServer => {
-                crate::app_state::ExecutionType::DevServer
-            }
-        };
-
-        app_state
-            .add_running_execution(
-                process_id,
-                crate::app_state::RunningExecution {
-                    task_attempt_id: attempt_id,
-                    _execution_type: execution_type,
-                    child,
-                },
-            )
-            .await;
+        Ok(result_path.to_string_lossy().to_string())
     }
 
     /// Get the git diff between the base commit and the current committed worktree state
@@ -1494,626 +650,27 @@
         task_id: Uuid,
         project_id: Uuid,
     ) -> Result {
-        // Get the task attempt with validation
-        let attempt = sqlx::query_as!(
-            TaskAttempt,
-            r#"SELECT ta.id AS "id!: Uuid",
-                      ta.task_id AS "task_id!: Uuid",
-                      ta.worktree_path,
-                      ta.branch,
-                      ta.base_branch,
-                      ta.merge_commit,
-                      ta.executor,
-                      ta.pr_url,
-                      ta.pr_number,
-                      ta.pr_status,
-                      ta.pr_merged_at AS "pr_merged_at: DateTime<Utc>",
-                      ta.worktree_deleted AS "worktree_deleted!: bool",
-                      ta.created_at AS "created_at!: DateTime<Utc>",
-                      ta.updated_at AS "updated_at!: DateTime<Utc>"
-               FROM task_attempts ta
-               JOIN tasks t ON ta.task_id = t.id
-               WHERE ta.id = $1 AND t.id = $2 AND t.project_id = $3"#,
-            attempt_id,
-            task_id,
-            project_id
-        )
-        .fetch_optional(pool)
-        .await?
-        .ok_or(TaskAttemptError::TaskNotFound)?;
+        // Load context with full validation
+        let ctx = TaskAttempt::load_context(pool, attempt_id, task_id, project_id).await?;
 
-        // Get the project to access the main repository
-        let project = Project::find_by_id(pool, project_id)
-            .await?
- .ok_or(TaskAttemptError::ProjectNotFound)?; + // Create GitService instance + let git_service = GitService::new(&ctx.project.git_repo_path)?; - let mut files = Vec::new(); - - if let Some(merge_commit_id) = &attempt.merge_commit { + if let Some(merge_commit_id) = &ctx.task_attempt.merge_commit { // Task attempt has been merged - show the diff from the merge commit - let main_repo = Repository::open(&project.git_repo_path)?; - let merge_commit = main_repo.find_commit(git2::Oid::from_str(merge_commit_id)?)?; - - // A merge commit has multiple parents - first parent is the main branch before merge, - // second parent is the branch that was merged - let parents: Vec<_> = merge_commit.parents().collect(); - - // Create diff options with more context - let mut diff_opts = git2::DiffOptions::new(); - diff_opts.context_lines(10); // Include 10 lines of context around changes - diff_opts.interhunk_lines(0); // Don't merge hunks - - let diff = if parents.len() >= 2 { - let base_tree = parents[0].tree()?; // Main branch before merge - let merged_tree = parents[1].tree()?; // The branch that was merged - main_repo.diff_tree_to_tree( - Some(&base_tree), - Some(&merged_tree), - Some(&mut diff_opts), - )? - } else { - // Fast-forward merge or single parent - compare merge commit with its parent - let base_tree = if !parents.is_empty() { - parents[0].tree()? - } else { - // No parents (shouldn't happen), use empty tree - main_repo.find_tree(git2::Oid::zero())? - }; - let merged_tree = merge_commit.tree()?; - main_repo.diff_tree_to_tree( - Some(&base_tree), - Some(&merged_tree), - Some(&mut diff_opts), - )? 
- }; - - // Process each diff delta (file change) - diff.foreach( - &mut |delta, _progress| { - if let Some(path_str) = delta.new_file().path().and_then(|p| p.to_str()) { - let old_file = delta.old_file(); - let new_file = delta.new_file(); - - // Get old content - let old_content = if !old_file.id().is_zero() { - match main_repo.find_blob(old_file.id()) { - Ok(blob) => String::from_utf8_lossy(blob.content()).to_string(), - Err(_) => String::new(), - } - } else { - String::new() // File didn't exist in base commit - }; - - // Get new content - let new_content = if !new_file.id().is_zero() { - match main_repo.find_blob(new_file.id()) { - Ok(blob) => String::from_utf8_lossy(blob.content()).to_string(), - Err(_) => String::new(), - } - } else { - String::new() // File was deleted - }; - - // Generate Git-native diff chunks - // Always generate diff for file changes, even if both contents are empty - // This handles cases where empty files are added or files are deleted - if old_content != new_content || delta.status() != git2::Delta::Modified { - match Self::generate_git_diff_chunks( - &main_repo, &old_file, &new_file, path_str, - ) { - Ok(diff_chunks) if !diff_chunks.is_empty() => { - files.push(FileDiff { - path: path_str.to_string(), - chunks: diff_chunks, - }); - } - // For added or deleted files without content, still show the file - Ok(_) - if delta.status() == git2::Delta::Added - || delta.status() == git2::Delta::Deleted => - { - files.push(FileDiff { - path: path_str.to_string(), - chunks: vec![DiffChunk { - chunk_type: if delta.status() == git2::Delta::Added { - DiffChunkType::Insert - } else { - DiffChunkType::Delete - }, - content: format!( - "{} file", - if delta.status() == git2::Delta::Added { - "Added" - } else { - "Deleted" - } - ), - }], - }); - } - Err(e) => { - eprintln!("Error generating diff for {}: {:?}", path_str, e); - } - _ => {} - } - } - } - true // Continue processing - }, - None, - None, - None, - )?; + git_service + 
.get_enhanced_diff(Path::new(""), Some(merge_commit_id)) + .map_err(TaskAttemptError::from) } else { - // Task attempt not yet merged - use the original logic with fork point + // Task attempt not yet merged - get worktree diff // Ensure worktree exists (recreate if needed for cold task support) let worktree_path = Self::ensure_worktree_exists(pool, attempt_id, project_id, "diff").await?; - let worktree_repo = Repository::open(&worktree_path)?; - let main_repo = Repository::open(&project.git_repo_path)?; - let main_head_oid = main_repo.head()?.peel_to_commit()?.id(); - - // Get the current worktree HEAD commit - let worktree_head = worktree_repo.head()?; - let worktree_head_oid = worktree_head.peel_to_commit()?.id(); - - // Find the merge base (common ancestor) between main and the worktree branch - // This represents the point where the worktree branch forked off from main - let base_oid = worktree_repo.merge_base(main_head_oid, worktree_head_oid)?; - let base_commit = worktree_repo.find_commit(base_oid)?; - let base_tree = base_commit.tree()?; - - // Get the current tree from the worktree HEAD commit we already retrieved - let current_commit = worktree_repo.find_commit(worktree_head_oid)?; - let current_tree = current_commit.tree()?; - - // Create a diff between the base tree and current tree with more context - let mut diff_opts = git2::DiffOptions::new(); - diff_opts.context_lines(10); // Include 10 lines of context around changes - diff_opts.interhunk_lines(0); // Don't merge hunks - - let diff = worktree_repo.diff_tree_to_tree( - Some(&base_tree), - Some(&current_tree), - Some(&mut diff_opts), - )?; - - // Process each diff delta (file change) - diff.foreach( - &mut |delta, _progress| { - if let Some(path_str) = delta.new_file().path().and_then(|p| p.to_str()) { - let old_file = delta.old_file(); - let new_file = delta.new_file(); - - // Get old content - let old_content = if !old_file.id().is_zero() { - match worktree_repo.find_blob(old_file.id()) { - Ok(blob) =>
String::from_utf8_lossy(blob.content()).to_string(), - Err(_) => String::new(), - } - } else { - String::new() // File didn't exist in base commit - }; - - // Get new content - let new_content = if !new_file.id().is_zero() { - match worktree_repo.find_blob(new_file.id()) { - Ok(blob) => String::from_utf8_lossy(blob.content()).to_string(), - Err(_) => String::new(), - } - } else { - String::new() // File was deleted - }; - - // Generate Git-native diff chunks - // Always generate diff for file changes, even if both contents are empty - // This handles cases where empty files are added or files are deleted - if old_content != new_content || delta.status() != git2::Delta::Modified { - match Self::generate_git_diff_chunks( - &worktree_repo, - &old_file, - &new_file, - path_str, - ) { - Ok(diff_chunks) if !diff_chunks.is_empty() => { - files.push(FileDiff { - path: path_str.to_string(), - chunks: diff_chunks, - }); - } - // For added or deleted files without content, still show the file - Ok(_) - if delta.status() == git2::Delta::Added - || delta.status() == git2::Delta::Deleted => - { - files.push(FileDiff { - path: path_str.to_string(), - chunks: vec![DiffChunk { - chunk_type: if delta.status() == git2::Delta::Added { - DiffChunkType::Insert - } else { - DiffChunkType::Delete - }, - content: format!( - "{} file", - if delta.status() == git2::Delta::Added { - "Added" - } else { - "Deleted" - } - ), - }], - }); - } - Err(e) => { - eprintln!("Error generating diff for {}: {:?}", path_str, e); - } - _ => {} - } - } - } - true // Continue processing - }, - None, - None, - None, - )?; - - // Now also get unstaged changes (working directory changes) - let current_tree = worktree_repo.head()?.peel_to_tree()?; - - // Create diff from HEAD to working directory for unstaged changes - let mut unstaged_diff_opts = git2::DiffOptions::new(); - unstaged_diff_opts.context_lines(10); - unstaged_diff_opts.interhunk_lines(0); - unstaged_diff_opts.include_untracked(true); // Include 
untracked files - - let unstaged_diff = worktree_repo.diff_tree_to_workdir_with_index( - Some(&current_tree), - Some(&mut unstaged_diff_opts), - )?; - - // Process unstaged changes - unstaged_diff.foreach( - &mut |delta, _progress| { - if let Some(path_str) = delta.new_file().path().and_then(|p| p.to_str()) { - if let Err(e) = Self::process_unstaged_file( - &mut files, - &worktree_repo, - base_oid, - &worktree_path, - path_str, - &delta, - ) { - eprintln!("Error processing unstaged file {}: {:?}", path_str, e); - } - } - true - }, - None, - None, - None, - )?; + git_service + .get_enhanced_diff(Path::new(&worktree_path), None) + .map_err(TaskAttemptError::from) } - - Ok(WorktreeDiff { files }) - } - - fn process_unstaged_file( - files: &mut Vec<FileDiff>, - worktree_repo: &Repository, - base_oid: git2::Oid, - worktree_path: &str, - path_str: &str, - delta: &git2::DiffDelta, - ) -> Result<(), TaskAttemptError> { - let old_file = delta.old_file(); - let new_file = delta.new_file(); - - // Check if we already have a diff for this file from committed changes - if let Some(existing_file) = files.iter_mut().find(|f| f.path == path_str) { - // File already has committed changes, need to create a combined diff - // from the base branch to the current working directory (including unstaged changes) - - // Get the base content (from the fork point) - let base_content = if let Ok(base_commit) = worktree_repo.find_commit(base_oid) { - if let Ok(base_tree) = base_commit.tree() { - match base_tree.get_path(std::path::Path::new(path_str)) { - Ok(entry) => { - if let Ok(blob) = worktree_repo.find_blob(entry.id()) { - String::from_utf8_lossy(blob.content()).to_string() - } else { - String::new() - } - } - Err(_) => String::new(), - } - } else { - String::new() - } - } else { - String::new() - }; - - // Get the working directory content - let working_content = if delta.status() != git2::Delta::Deleted { - let file_path = std::path::Path::new(worktree_path).join(path_str); -
std::fs::read_to_string(&file_path).unwrap_or_default() - } else { - String::new() - }; - - // Create a combined diff from base to working directory - if base_content != working_content { - // Use git's patch generation with the content directly - let mut diff_opts = git2::DiffOptions::new(); - diff_opts.context_lines(10); - diff_opts.interhunk_lines(0); - - if let Ok(patch) = git2::Patch::from_buffers( - base_content.as_bytes(), - Some(std::path::Path::new(path_str)), - working_content.as_bytes(), - Some(std::path::Path::new(path_str)), - Some(&mut diff_opts), - ) { - let mut combined_chunks = Vec::new(); - - // Process the patch hunks - for hunk_idx in 0..patch.num_hunks() { - if let Ok((_hunk, hunk_lines)) = patch.hunk(hunk_idx) { - // Process each line in the hunk - for line_idx in 0..hunk_lines { - if let Ok(line) = patch.line_in_hunk(hunk_idx, line_idx) { - let content = - String::from_utf8_lossy(line.content()).to_string(); - - let chunk_type = match line.origin() { - ' ' => DiffChunkType::Equal, - '+' => DiffChunkType::Insert, - '-' => DiffChunkType::Delete, - _ => continue, // Skip other line types - }; - - combined_chunks.push(DiffChunk { - chunk_type, - content, - }); - } - } - } - } - - if !combined_chunks.is_empty() { - existing_file.chunks = combined_chunks; - } - } - } - } else { - // File only has unstaged changes (new file or uncommitted changes only) - // First check if this is a new file or changed file by comparing with base - let base_content = if let Ok(base_commit) = worktree_repo.find_commit(base_oid) { - if let Ok(base_tree) = base_commit.tree() { - match base_tree.get_path(std::path::Path::new(path_str)) { - Ok(entry) => { - if let Ok(blob) = worktree_repo.find_blob(entry.id()) { - String::from_utf8_lossy(blob.content()).to_string() - } else { - String::new() - } - } - Err(_) => String::new(), - } - } else { - String::new() - } - } else { - String::new() - }; - - // Get the working directory content - let working_content = if 
delta.status() != git2::Delta::Deleted { - let file_path = std::path::Path::new(worktree_path).join(path_str); - std::fs::read_to_string(&file_path).unwrap_or_default() - } else { - String::new() - }; - - // Create diff from base to working directory (including unstaged changes) - if base_content != working_content || delta.status() != git2::Delta::Modified { - if let Ok(patch) = git2::Patch::from_buffers( - base_content.as_bytes(), - Some(std::path::Path::new(path_str)), - working_content.as_bytes(), - Some(std::path::Path::new(path_str)), - Some(&mut git2::DiffOptions::new()), - ) { - let mut chunks = Vec::new(); - - // Process the patch hunks - for hunk_idx in 0..patch.num_hunks() { - if let Ok((_hunk, hunk_lines)) = patch.hunk(hunk_idx) { - for line_idx in 0..hunk_lines { - if let Ok(line) = patch.line_in_hunk(hunk_idx, line_idx) { - let content = - String::from_utf8_lossy(line.content()).to_string(); - let chunk_type = match line.origin() { - ' ' => DiffChunkType::Equal, - '+' => DiffChunkType::Insert, - '-' => DiffChunkType::Delete, - _ => continue, - }; - chunks.push(DiffChunk { - chunk_type, - content, - }); - } - } - } - } - - // If no hunks but file status changed, add a placeholder - if chunks.is_empty() && delta.status() != git2::Delta::Modified { - chunks.push(DiffChunk { - chunk_type: if delta.status() == git2::Delta::Added { - DiffChunkType::Insert - } else { - DiffChunkType::Delete - }, - content: format!( - "{} file", - if delta.status() == git2::Delta::Added { - "Added" - } else { - "Deleted" - } - ), - }); - } - - if !chunks.is_empty() { - files.push(FileDiff { - path: path_str.to_string(), - chunks, - }); - } - } else { - // Fallback to the original method if patch creation fails - match Self::generate_git_diff_chunks( - worktree_repo, - &old_file, - &new_file, - path_str, - ) { - Ok(diff_chunks) if !diff_chunks.is_empty() => { - files.push(FileDiff { - path: path_str.to_string(), - chunks: diff_chunks, - }); - } - // For added or deleted files 
without content, still show the file - Ok(_) - if delta.status() == git2::Delta::Added - || delta.status() == git2::Delta::Deleted => - { - files.push(FileDiff { - path: path_str.to_string(), - chunks: vec![DiffChunk { - chunk_type: if delta.status() == git2::Delta::Added { - DiffChunkType::Insert - } else { - DiffChunkType::Delete - }, - content: format!( - "{} file", - if delta.status() == git2::Delta::Added { - "Added" - } else { - "Deleted" - } - ), - }], - }); - } - Err(e) => { - eprintln!("Error generating unstaged diff for {}: {:?}", path_str, e); - } - _ => {} - } - } - } - } - - Ok(()) - } - - /// Generate diff chunks using Git's native diff algorithm - pub fn generate_git_diff_chunks( - repo: &Repository, - old_file: &git2::DiffFile, - new_file: &git2::DiffFile, - file_path: &str, - ) -> Result<Vec<DiffChunk>, TaskAttemptError> { - use std::path::Path; - let mut chunks = Vec::new(); - - // Create a patch for the single file using Git's native diff - let old_blob = if !old_file.id().is_zero() { - Some(repo.find_blob(old_file.id())?) - } else { - None - }; - - let new_blob = if !new_file.id().is_zero() { - Some(repo.find_blob(new_file.id())?) - } else { - None - }; - - // Generate patch using Git's diff algorithm with context - let mut diff_opts = git2::DiffOptions::new(); - diff_opts.context_lines(10); // Include 10 lines of context around changes - diff_opts.interhunk_lines(0); // Don't merge hunks - - let patch = match (old_blob.as_ref(), new_blob.as_ref()) { - (Some(old_b), Some(new_b)) => git2::Patch::from_blobs( - old_b, - Some(Path::new(file_path)), - new_b, - Some(Path::new(file_path)), - Some(&mut diff_opts), - )?, - (None, Some(new_b)) => { - // File was added - diff from empty buffer to new blob content - git2::Patch::from_buffers( - &[], // empty buffer represents the "old" version (file didn't exist) - Some(Path::new(file_path)), - new_b.content(), // new blob content as buffer - Some(Path::new(file_path)), - Some(&mut diff_opts), - )?
- } - (Some(old_b), None) => { - // File was deleted - diff from old blob to empty buffer - git2::Patch::from_blob_and_buffer( - old_b, - Some(Path::new(file_path)), - &[], - Some(Path::new(file_path)), - Some(&mut diff_opts), - )? - } - (None, None) => { - // No change, shouldn't happen - return Ok(chunks); - } - }; - - // Process the patch hunks - for hunk_idx in 0..patch.num_hunks() { - let (_hunk, hunk_lines) = patch.hunk(hunk_idx)?; - - // Process each line in the hunk - for line_idx in 0..hunk_lines { - let line = patch.line_in_hunk(hunk_idx, line_idx)?; - let content = String::from_utf8_lossy(line.content()).to_string(); - - let chunk_type = match line.origin() { - ' ' => DiffChunkType::Equal, - '+' => DiffChunkType::Insert, - '-' => DiffChunkType::Delete, - _ => continue, // Skip other line types (like context headers) - }; - - chunks.push(DiffChunk { - chunk_type, - content, - }); - } - } - - Ok(chunks) } /// Get the branch status for this task attempt @@ -2123,48 +680,14 @@ impl TaskAttempt { task_id: Uuid, project_id: Uuid, ) -> Result { - // ── fetch the task attempt ─────────────────────────────────────────────────── - let attempt = sqlx::query_as!( - TaskAttempt, - r#" - SELECT ta.id AS "id!: Uuid", - ta.task_id AS "task_id!: Uuid", - ta.worktree_path, - ta.branch, - ta.base_branch, - ta.merge_commit, - ta.executor, - ta.pr_url, - ta.pr_number, - ta.pr_status, - ta.pr_merged_at AS "pr_merged_at: DateTime", - ta.worktree_deleted as "worktree_deleted!: bool", - ta.created_at AS "created_at!: DateTime", - ta.updated_at AS "updated_at!: DateTime" - FROM task_attempts ta - JOIN tasks t ON ta.task_id = t.id - WHERE ta.id = $1 - AND t.id = $2 - AND t.project_id = $3 - "#, - attempt_id, - task_id, - project_id - ) - .fetch_optional(pool) - .await? 
- .ok_or(TaskAttemptError::TaskNotFound)?; + // Load context with full validation + let ctx = TaskAttempt::load_context(pool, attempt_id, task_id, project_id).await?; - // ── fetch the owning project & open its repository ─────────────────────────── - let project = Project::find_by_id(pool, project_id) - .await? - .ok_or(TaskAttemptError::ProjectNotFound)?; - - use git2::{BranchType, Repository, Status, StatusOptions}; + use git2::{Status, StatusOptions}; // Ensure worktree exists (recreate if needed for cold task support) - let main_repo = Repository::open(&project.git_repo_path)?; - let attempt_branch = attempt.branch.clone(); + let main_repo = Repository::open(&ctx.project.git_repo_path)?; + let attempt_branch = ctx.task_attempt.branch.clone(); // ── locate the commit pointed to by the attempt branch ─────────────────────── let attempt_ref = main_repo @@ -2174,7 +697,7 @@ impl TaskAttempt { let attempt_oid = attempt_ref.target().unwrap(); // ── determine the base branch & ahead/behind counts ───────────────────────── - let base_branch_name = attempt.base_branch.clone(); + let base_branch_name = ctx.task_attempt.base_branch.clone(); // 1. 
prefer the branch’s configured upstream, if any if let Ok(local_branch) = main_repo.find_branch(&attempt_branch, BranchType::Local) { @@ -2203,7 +726,7 @@ impl TaskAttempt { }; // ── detect any uncommitted / untracked changes ─────────────────────────────── - let repo_for_status = Repository::open(&project.git_repo_path)?; + let repo_for_status = Repository::open(&ctx.project.git_repo_path)?; let mut status_opts = StatusOptions::new(); status_opts @@ -2222,7 +745,7 @@ impl TaskAttempt { commits_behind, commits_ahead, up_to_date: commits_behind == 0 && commits_ahead == 0, - merged: attempt.merge_commit.is_some(), + merged: ctx.task_attempt.merge_commit.is_some(), has_uncommitted_changes, base_branch_name, }) @@ -2236,41 +759,12 @@ impl TaskAttempt { project_id: Uuid, new_base_branch: Option, ) -> Result { - // Get the task attempt with validation - let attempt = sqlx::query_as!( - TaskAttempt, - r#"SELECT ta.id AS "id!: Uuid", - ta.task_id AS "task_id!: Uuid", - ta.worktree_path, - ta.branch, - ta.base_branch, - ta.merge_commit, - ta.executor, - ta.pr_url, - ta.pr_number, - ta.pr_status, - ta.pr_merged_at AS "pr_merged_at: DateTime", - ta.worktree_deleted AS "worktree_deleted!: bool", - ta.created_at AS "created_at!: DateTime", - ta.updated_at AS "updated_at!: DateTime" - FROM task_attempts ta - JOIN tasks t ON ta.task_id = t.id - WHERE ta.id = $1 AND t.id = $2 AND t.project_id = $3"#, - attempt_id, - task_id, - project_id - ) - .fetch_optional(pool) - .await? - .ok_or(TaskAttemptError::TaskNotFound)?; - - // Get the project - let project = Project::find_by_id(pool, project_id) - .await? 
- .ok_or(TaskAttemptError::ProjectNotFound)?; + // Load context with full validation + let ctx = TaskAttempt::load_context(pool, attempt_id, task_id, project_id).await?; // Use the stored base branch if no new base branch is provided - let effective_base_branch = new_base_branch.or_else(|| Some(attempt.base_branch.clone())); + let effective_base_branch = + new_base_branch.or_else(|| Some(ctx.task_attempt.base_branch.clone())); // Ensure worktree exists (recreate if needed for cold task support) let worktree_path = @@ -2279,7 +773,7 @@ impl TaskAttempt { // Perform the git rebase operations (synchronous) let new_base_commit = Self::perform_rebase_operation( &worktree_path, - &project.git_repo_path, + &ctx.project.git_repo_path, effective_base_branch, )?; @@ -2295,86 +789,21 @@ impl TaskAttempt { project_id: Uuid, file_path: &str, ) -> Result { - // Get the task attempt with validation - let _attempt = sqlx::query_as!( - TaskAttempt, - r#"SELECT ta.id AS "id!: Uuid", - ta.task_id AS "task_id!: Uuid", - ta.worktree_path, - ta.branch, - ta.base_branch, - ta.merge_commit, - ta.executor, - ta.pr_url, - ta.pr_number, - ta.pr_status, - ta.pr_merged_at AS "pr_merged_at: DateTime", - ta.worktree_deleted AS "worktree_deleted!: bool", - ta.created_at AS "created_at!: DateTime", - ta.updated_at AS "updated_at!: DateTime" - FROM task_attempts ta - JOIN tasks t ON ta.task_id = t.id - WHERE ta.id = $1 AND t.id = $2 AND t.project_id = $3"#, - attempt_id, - task_id, - project_id - ) - .fetch_optional(pool) - .await? 
- .ok_or(TaskAttemptError::TaskNotFound)?; + // Load context with full validation + let ctx = TaskAttempt::load_context(pool, attempt_id, task_id, project_id).await?; // Ensure worktree exists (recreate if needed for cold task support) let worktree_path_str = Self::ensure_worktree_exists(pool, attempt_id, project_id, "delete file").await?; - // Open the worktree repository - let repo = Repository::open(&worktree_path_str)?; + // Create GitService instance + let git_service = GitService::new(&ctx.project.git_repo_path)?; - // Get the absolute path to the file within the worktree - let worktree_path = Path::new(&worktree_path_str); - let file_full_path = worktree_path.join(file_path); + // Use GitService to delete file and commit + let commit_id = + git_service.delete_file_and_commit(Path::new(&worktree_path_str), file_path)?; - // Check if file exists and delete it - if file_full_path.exists() { - std::fs::remove_file(&file_full_path).map_err(|e| { - TaskAttemptError::Git(GitError::from_str(&format!( - "Failed to delete file {}: {}", - file_path, e - ))) - })?; - - debug!("Deleted file: {}", file_path); - } else { - info!("File {} does not exist, skipping deletion", file_path); - } - - // Stage the deletion - let mut index = repo.index()?; - index.remove_path(Path::new(file_path))?; - index.write()?; - - // Create a commit for the file deletion - let signature = repo.signature()?; - let tree_id = index.write_tree()?; - let tree = repo.find_tree(tree_id)?; - - // Get the current HEAD commit - let head = repo.head()?; - let parent_commit = head.peel_to_commit()?; - - let commit_message = format!("Delete file: {}", file_path); - let commit_id = repo.commit( - Some("HEAD"), - &signature, - &signature, - &commit_message, - &tree, - &[&parent_commit], - )?; - - info!("File {} deleted and committed: {}", file_path, commit_id); - - Ok(commit_id.to_string()) + Ok(commit_id) } /// Create a GitHub PR for this task attempt @@ -2382,239 +811,70 @@ impl TaskAttempt { pool: 
&SqlitePool, params: CreatePrParams<'_>, ) -> Result { - // Get the task attempt with validation - let attempt = sqlx::query_as!( - TaskAttempt, - r#"SELECT ta.id AS "id!: Uuid", - ta.task_id AS "task_id!: Uuid", - ta.worktree_path, - ta.branch, - ta.base_branch, - ta.merge_commit, - ta.executor, - ta.pr_url, - ta.pr_number, - ta.pr_status, - ta.pr_merged_at AS "pr_merged_at: DateTime", - ta.worktree_deleted AS "worktree_deleted!: bool", - ta.created_at AS "created_at!: DateTime", - ta.updated_at AS "updated_at!: DateTime" - FROM task_attempts ta - JOIN tasks t ON ta.task_id = t.id - WHERE ta.id = $1 AND t.id = $2 AND t.project_id = $3"#, - params.attempt_id, - params.task_id, - params.project_id - ) - .fetch_optional(pool) - .await? - .ok_or(TaskAttemptError::TaskNotFound)?; - - // Get the project to access the repository path - let project = Project::find_by_id(pool, params.project_id) - .await? - .ok_or(TaskAttemptError::ProjectNotFound)?; + // Load context with full validation + let ctx = + TaskAttempt::load_context(pool, params.attempt_id, params.task_id, params.project_id) + .await?; // Ensure worktree exists (recreate if needed for cold task support) let worktree_path = Self::ensure_worktree_exists(pool, params.attempt_id, params.project_id, "GitHub PR") .await?; - // Extract GitHub repository information from the project path - let (owner, repo_name) = Self::extract_github_repo_info(&project.git_repo_path)?; + // Create GitHub service instance + let github_service = GitHubService::new(params.github_token)?; + + // Use GitService to get the remote URL, then create GitHubRepoInfo + let git_service = GitService::new(&ctx.project.git_repo_path)?; + let (owner, repo_name) = git_service + .get_github_repo_info() + .map_err(|e| TaskAttemptError::ValidationError(e.to_string()))?; + let repo_info = GitHubRepoInfo { owner, repo_name }; // Push the branch to GitHub first - Self::push_branch_to_github(&worktree_path, &attempt.branch, params.github_token)?; - - // 
Create the PR using Octocrab - let pr_url = Self::create_pr_with_octocrab( + Self::push_branch_to_github( + &ctx.project.git_repo_path, + &worktree_path, + &ctx.task_attempt.branch, params.github_token, - &owner, - &repo_name, - &attempt.branch, - params.base_branch.unwrap_or("main"), - params.title, - params.body, - ) - .await?; + )?; - // Extract PR number from URL (GitHub URLs are in format: https://github.com/owner/repo/pull/123) - let pr_number = pr_url - .split('/') - .next_back() - .and_then(|n| n.parse::().ok()); + // Create the PR using GitHub service + let pr_request = CreatePrRequest { + title: params.title.to_string(), + body: params.body.map(|s| s.to_string()), + head_branch: ctx.task_attempt.branch.clone(), + base_branch: params.base_branch.unwrap_or("main").to_string(), + }; + + let pr_info = github_service.create_pr(&repo_info, &pr_request).await?; // Update the task attempt with PR information sqlx::query!( "UPDATE task_attempts SET pr_url = $1, pr_number = $2, pr_status = $3, updated_at = datetime('now') WHERE id = $4", - pr_url, - pr_number, - "open", + pr_info.url, + pr_info.number, + pr_info.status, params.attempt_id ) .execute(pool) .await?; - Ok(pr_url) - } - - /// Extract GitHub owner and repo name from git repo path - fn extract_github_repo_info(git_repo_path: &str) -> Result<(String, String), TaskAttemptError> { - // Try to extract from remote origin URL - let repo = Repository::open(git_repo_path)?; - let remote = repo.find_remote("origin").map_err(|_| { - TaskAttemptError::ValidationError("No 'origin' remote found".to_string()) - })?; - - let url = remote.url().ok_or_else(|| { - TaskAttemptError::ValidationError("Remote origin has no URL".to_string()) - })?; - - // Parse GitHub URL (supports both HTTPS and SSH formats) - let github_regex = regex::Regex::new(r"github\.com[:/]([^/]+)/(.+?)(?:\.git)?/?$") - .map_err(|e| TaskAttemptError::ValidationError(format!("Regex error: {}", e)))?; - - if let Some(captures) = github_regex.captures(url) 
{ - let owner = captures.get(1).unwrap().as_str().to_string(); - let repo_name = captures.get(2).unwrap().as_str().to_string(); - Ok((owner, repo_name)) - } else { - Err(TaskAttemptError::ValidationError(format!( - "Not a GitHub repository: {}", - url - ))) - } + Ok(pr_info.url) } /// Push the branch to GitHub remote fn push_branch_to_github( + git_repo_path: &str, worktree_path: &str, branch_name: &str, github_token: &str, ) -> Result<(), TaskAttemptError> { - let repo = Repository::open(worktree_path)?; - - // Get the remote - let remote = repo.find_remote("origin")?; - let remote_url = remote.url().ok_or_else(|| { - TaskAttemptError::ValidationError("Remote origin has no URL".to_string()) - })?; - - // Convert SSH URL to HTTPS URL if necessary - let https_url = if remote_url.starts_with("git@github.com:") { - // Convert git@github.com:owner/repo.git to https://github.com/owner/repo.git - remote_url.replace("git@github.com:", "https://github.com/") - } else if remote_url.starts_with("ssh://git@github.com/") { - // Convert ssh://git@github.com/owner/repo.git to https://github.com/owner/repo.git - remote_url.replace("ssh://git@github.com/", "https://github.com/") - } else { - remote_url.to_string() - }; - - // Create a temporary remote with HTTPS URL for pushing - let temp_remote_name = "temp_https_origin"; - - // Remove any existing temp remote - let _ = repo.remote_delete(temp_remote_name); - - // Create temporary HTTPS remote - let mut temp_remote = repo.remote(temp_remote_name, &https_url)?; - - // Create refspec for pushing the branch - let refspec = format!("refs/heads/{}:refs/heads/{}", branch_name, branch_name); - - // Set up authentication callback using the GitHub token - let mut callbacks = git2::RemoteCallbacks::new(); - callbacks.credentials(|_url, username_from_url, _allowed_types| { - git2::Cred::userpass_plaintext(username_from_url.unwrap_or("git"), github_token) - }); - - // Configure push options - let mut push_options = git2::PushOptions::new(); 
- push_options.remote_callbacks(callbacks); - - // Push the branch - let push_result = temp_remote.push(&[&refspec], Some(&mut push_options)); - - // Clean up the temporary remote - let _ = repo.remote_delete(temp_remote_name); - - // Check push result - push_result.map_err(TaskAttemptError::Git)?; - - info!("Pushed branch {} to GitHub using HTTPS", branch_name); - Ok(()) - } - - /// Create a PR using Octocrab - async fn create_pr_with_octocrab( - github_token: &str, - owner: &str, - repo_name: &str, - head_branch: &str, - base_branch: &str, - title: &str, - body: Option<&str>, - ) -> Result { - let octocrab = octocrab::OctocrabBuilder::new() - .personal_token(github_token.to_string()) - .build() - .map_err(|e| { - TaskAttemptError::ValidationError(format!("Failed to create GitHub client: {}", e)) - })?; - - // Verify repository access - octocrab.repos(owner, repo_name).get().await.map_err(|e| { - TaskAttemptError::ValidationError(format!( - "Cannot access repository {}/{}: {}", - owner, repo_name, e - )) - })?; - - // Check if the base branch exists - octocrab - .repos(owner, repo_name) - .get_ref(&octocrab::params::repos::Reference::Branch( - base_branch.to_string(), - )) - .await - .map_err(|e| { - TaskAttemptError::ValidationError(format!( - "Base branch '{}' does not exist: {}", - base_branch, e - )) - })?; - - // Check if the head branch exists - octocrab.repos(owner, repo_name) - .get_ref(&octocrab::params::repos::Reference::Branch(head_branch.to_string())).await - .map_err(|e| TaskAttemptError::ValidationError(format!("Head branch '{}' does not exist. Make sure the branch was pushed successfully: {}", head_branch, e)))?; - - let pr = octocrab - .pulls(owner, repo_name) - .create(title, head_branch, base_branch) - .body(body.unwrap_or("")) - .send() - .await - .map_err(|e| match e { - octocrab::Error::GitHub { source, .. 
} => { - TaskAttemptError::ValidationError(format!( - "GitHub API error: {} (status: {})", - source.message, - source.status_code.as_u16() - )) - } - _ => TaskAttemptError::ValidationError(format!("Failed to create PR: {}", e)), - })?; - - info!( - "Created GitHub PR #{} for branch {}", - pr.number, head_branch - ); - Ok(pr - .html_url - .map(|url| url.to_string()) - .unwrap_or_else(|| "".to_string())) + // Use GitService to push to GitHub + let git_service = GitService::new(git_repo_path)?; + git_service + .push_to_github(Path::new(worktree_path), branch_name, github_token) + .map_err(TaskAttemptError::from) } /// Update PR status and merge commit @@ -2645,40 +905,11 @@ impl TaskAttempt { task_id: Uuid, project_id: Uuid, ) -> Result { - // Get the task attempt with validation - let _attempt = sqlx::query_as!( - TaskAttempt, - r#"SELECT ta.id AS "id!: Uuid", - ta.task_id AS "task_id!: Uuid", - ta.worktree_path, - ta.branch, - ta.base_branch, - ta.merge_commit, - ta.executor, - ta.pr_url, - ta.pr_number, - ta.pr_status, - ta.pr_merged_at AS "pr_merged_at: DateTime", - ta.worktree_deleted AS "worktree_deleted!: bool", - ta.created_at AS "created_at!: DateTime", - ta.updated_at AS "updated_at!: DateTime" - FROM task_attempts ta - JOIN tasks t ON ta.task_id = t.id - WHERE ta.id = $1 AND t.id = $2 AND t.project_id = $3"#, - attempt_id, - task_id, - project_id - ) - .fetch_optional(pool) - .await? - .ok_or(TaskAttemptError::TaskNotFound)?; + // Load context with full validation + let ctx = TaskAttempt::load_context(pool, attempt_id, task_id, project_id).await?; - // Get the project to check if it has a setup script - let project = Project::find_by_id(pool, project_id) - .await? 
- .ok_or(TaskAttemptError::ProjectNotFound)?; - - let has_setup_script = project + let has_setup_script = ctx + .project + .setup_script + .as_ref() + .map(|script| !script.trim().is_empty()) diff --git a/backend/src/services/git_service.rs b/backend/src/services/git_service.rs new file mode 100644 index 00000000..da3b03f5 --- /dev/null +++ b/backend/src/services/git_service.rs @@ -0,0 +1,1024 @@ +use std::path::{Path, PathBuf}; + +use git2::{ + BranchType, DiffOptions, Error as GitError, RebaseOptions, Repository, WorktreeAddOptions, +}; +use regex; +use tracing::{debug, info}; + +use crate::{ + models::task_attempt::{DiffChunk, DiffChunkType, FileDiff, WorktreeDiff}, + utils::worktree_manager::WorktreeManager, +}; + +#[derive(Debug)] +pub enum GitServiceError { + Git(GitError), + IoError(std::io::Error), + InvalidRepository(String), + BranchNotFound(String), + + MergeConflicts(String), + InvalidPath(String), +} + +impl std::fmt::Display for GitServiceError { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + match self { + GitServiceError::Git(e) => write!(f, "Git error: {}", e), + GitServiceError::IoError(e) => write!(f, "IO error: {}", e), + GitServiceError::InvalidRepository(e) => write!(f, "Invalid repository: {}", e), + GitServiceError::BranchNotFound(e) => write!(f, "Branch not found: {}", e), + + GitServiceError::MergeConflicts(e) => write!(f, "Merge conflicts: {}", e), + GitServiceError::InvalidPath(e) => write!(f, "Invalid path: {}", e), + } + } +} + +impl std::error::Error for GitServiceError {} + +impl From<GitError> for GitServiceError { + fn from(err: GitError) -> Self { + GitServiceError::Git(err) + } +} + +impl From<std::io::Error> for GitServiceError { + fn from(err: std::io::Error) -> Self { + GitServiceError::IoError(err) + } +} + +/// Service for managing Git operations in task execution workflows +pub struct GitService { + repo_path: PathBuf, +} + +impl GitService { + /// Create a new GitService for the given repository path + pub fn new<P: AsRef<Path>>(repo_path: P)
-> Result<Self, GitServiceError> { + let repo_path = repo_path.as_ref().to_path_buf(); + + // Validate that the path exists and is a git repository + if !repo_path.exists() { + return Err(GitServiceError::InvalidPath(format!( + "Repository path does not exist: {}", + repo_path.display() + ))); + } + + // Try to open the repository to validate it + Repository::open(&repo_path).map_err(|e| { + GitServiceError::InvalidRepository(format!( + "Failed to open repository at {}: {}", + repo_path.display(), + e + )) + })?; + + Ok(Self { repo_path }) + } + + /// Open the repository + fn open_repo(&self) -> Result<Repository, GitServiceError> { + Repository::open(&self.repo_path).map_err(GitServiceError::from) + } + + /// Create a worktree with a new branch + pub fn create_worktree( + &self, + branch_name: &str, + worktree_path: &Path, + base_branch: Option<&str>, + ) -> Result<(), GitServiceError> { + let repo = self.open_repo()?; + + // Ensure parent directory exists + if let Some(parent) = worktree_path.parent() { + std::fs::create_dir_all(parent)?; + } + + // Choose base reference + let base_reference = if let Some(base_branch) = base_branch { + let branch = repo + .find_branch(base_branch, BranchType::Local) + .map_err(|_| GitServiceError::BranchNotFound(base_branch.to_string()))?; + branch.into_reference() + } else { + // Handle new repositories without any commits + match repo.head() { + Ok(head_ref) => head_ref, + Err(e) + if e.class() == git2::ErrorClass::Reference + && e.code() == git2::ErrorCode::UnbornBranch => + { + // Repository has no commits yet, create an initial commit + self.create_initial_commit(&repo)?; + repo.find_reference("refs/heads/main")?
+ } + Err(e) => return Err(e.into()), + } + }; + + // Create branch + repo.branch(branch_name, &base_reference.peel_to_commit()?, false)?; + + let branch = repo.find_branch(branch_name, BranchType::Local)?; + let branch_ref = branch.into_reference(); + let mut worktree_opts = WorktreeAddOptions::new(); + worktree_opts.reference(Some(&branch_ref)); + + // Create the worktree at the specified path + repo.worktree(branch_name, worktree_path, Some(&worktree_opts))?; + + info!( + "Created worktree '{}' at path: {}", + branch_name, + worktree_path.display() + ); + Ok(()) + } + + /// Create an initial commit for empty repositories + fn create_initial_commit(&self, repo: &Repository) -> Result<(), GitServiceError> { + let signature = repo.signature().unwrap_or_else(|_| { + // Fallback if no Git config is set + git2::Signature::now("Vibe Kanban", "noreply@vibekanban.com") + .expect("Failed to create fallback signature") + }); + + let tree_id = { + let tree_builder = repo.treebuilder(None)?; + tree_builder.write()? 
+ }; + let tree = repo.find_tree(tree_id)?; + + // Create initial commit on main branch + let _commit_id = repo.commit( + Some("refs/heads/main"), + &signature, + &signature, + "Initial commit", + &tree, + &[], + )?; + + // Set HEAD to point to main branch + repo.set_head("refs/heads/main")?; + + info!("Created initial commit for empty repository"); + Ok(()) + } + + /// Merge changes from a worktree branch back to the main repository + pub fn merge_changes( + &self, + worktree_path: &Path, + branch_name: &str, + task_title: &str, + ) -> Result<String, GitServiceError> { + let main_repo = self.open_repo()?; + + // Open the worktree repository to get the latest commit + let worktree_repo = Repository::open(worktree_path)?; + let worktree_head = worktree_repo.head()?; + let worktree_commit = worktree_head.peel_to_commit()?; + + // Verify the branch exists in the main repo + main_repo + .find_branch(branch_name, BranchType::Local) + .map_err(|_| GitServiceError::BranchNotFound(branch_name.to_string()))?; + + // Get the current HEAD of the main repo (usually main/master) + let main_head = main_repo.head()?; + let main_commit = main_head.peel_to_commit()?; + + // Get the signature for the merge commit + let signature = main_repo.signature()?; + + // Get the tree from the worktree commit and find it in the main repo + let worktree_tree_id = worktree_commit.tree_id(); + let main_tree = main_repo.find_tree(worktree_tree_id)?; + + // Find the worktree commit in the main repo + let main_worktree_commit = main_repo.find_commit(worktree_commit.id())?; + + // Create a merge commit + let merge_commit_id = main_repo.commit( + Some("HEAD"), // Update HEAD + &signature, // Author + &signature, // Committer + &format!("Merge: {} (vibe-kanban)", task_title), // Message using task title + &main_tree, // Use the tree from main repo + &[&main_commit, &main_worktree_commit], // Parents: main HEAD and worktree commit + )?; + + info!("Created merge commit: {}", merge_commit_id); + Ok(merge_commit_id.to_string()) + 
} + + /// Rebase a worktree branch onto a new base + pub fn rebase_branch( + &self, + worktree_path: &Path, + new_base_branch: Option<&str>, + ) -> Result<String, GitServiceError> { + let worktree_repo = Repository::open(worktree_path)?; + let main_repo = self.open_repo()?; + + // Get the target base branch reference + let base_branch_name = match new_base_branch { + Some(branch) => branch.to_string(), + None => main_repo + .head() + .ok() + .and_then(|head| head.shorthand().map(|s| s.to_string())) + .unwrap_or_else(|| "main".to_string()), + }; + let base_branch_name = base_branch_name.as_str(); + + // Check if the specified base branch exists in the main repo + let base_branch = main_repo + .find_branch(base_branch_name, BranchType::Local) + .map_err(|_| GitServiceError::BranchNotFound(base_branch_name.to_string()))?; + + let base_commit_id = base_branch.get().peel_to_commit()?.id(); + + // Get the HEAD commit of the worktree (the changes to rebase) + let head = worktree_repo.head()?; + + // Set up rebase + let mut rebase_opts = RebaseOptions::new(); + let signature = worktree_repo.signature()?; + + // Start the rebase + let head_annotated = worktree_repo.reference_to_annotated_commit(&head)?; + let base_annotated = worktree_repo.find_annotated_commit(base_commit_id)?; + + let mut rebase = worktree_repo.rebase( + Some(&head_annotated), + Some(&base_annotated), + None, // onto (use upstream if None) + Some(&mut rebase_opts), + )?; + + // Process each rebase operation + while let Some(operation) = rebase.next() { + let _operation = operation?; + + // Check for conflicts + let index = worktree_repo.index()?; + if index.has_conflicts() { + // For now, abort the rebase on conflicts + rebase.abort()?; + return Err(GitServiceError::MergeConflicts( + "Rebase failed due to conflicts. Please resolve conflicts manually." 
+ .to_string(), + )); + } + + // Commit the rebased operation + rebase.commit(None, &signature, None)?; + } + + // Finish the rebase + rebase.finish(None)?; + + // Get the final commit ID after rebase + let final_head = worktree_repo.head()?; + let final_commit = final_head.peel_to_commit()?; + + info!("Rebase completed. New HEAD: {}", final_commit.id()); + Ok(final_commit.id().to_string()) + } + + /// Get enhanced diff for task attempts (from merge commit or worktree) + pub fn get_enhanced_diff( + &self, + worktree_path: &Path, + merge_commit_id: Option<&str>, + ) -> Result<WorktreeDiff, GitServiceError> { + let mut files = Vec::new(); + + if let Some(merge_commit_id) = merge_commit_id { + // Task attempt has been merged - show the diff from the merge commit + self.get_merged_diff(merge_commit_id, &mut files)?; + } else { + // Task attempt not yet merged - get worktree diff + self.get_worktree_diff(worktree_path, &mut files)?; + } + + Ok(WorktreeDiff { files }) + } + + /// Get diff from a merge commit + fn get_merged_diff( + &self, + merge_commit_id: &str, + files: &mut Vec<FileDiff>, + ) -> Result<(), GitServiceError> { + let main_repo = self.open_repo()?; + let merge_commit = main_repo.find_commit(git2::Oid::from_str(merge_commit_id)?)?; + + // A merge commit has multiple parents - first parent is the main branch before merge, + // second parent is the branch that was merged + let parents: Vec<_> = merge_commit.parents().collect(); + + // Create diff options with more context + let mut diff_opts = DiffOptions::new(); + diff_opts.context_lines(10); + diff_opts.interhunk_lines(0); + + let diff = if parents.len() >= 2 { + let base_tree = parents[0].tree()?; + let merged_tree = parents[1].tree()?; + main_repo.diff_tree_to_tree( + Some(&base_tree), + Some(&merged_tree), + Some(&mut diff_opts), + )? + } else { + // Fast-forward merge or single parent + let base_tree = if !parents.is_empty() { + parents[0].tree()? + } else { + main_repo.find_tree(git2::Oid::zero())? 
+ }; + let merged_tree = merge_commit.tree()?; + main_repo.diff_tree_to_tree( + Some(&base_tree), + Some(&merged_tree), + Some(&mut diff_opts), + )? + }; + + // Process each diff delta + diff.foreach( + &mut |delta, _progress| { + if let Some(path_str) = delta.new_file().path().and_then(|p| p.to_str()) { + let old_file = delta.old_file(); + let new_file = delta.new_file(); + + if let Ok(diff_chunks) = + self.generate_git_diff_chunks(&main_repo, &old_file, &new_file, path_str) + { + if !diff_chunks.is_empty() { + files.push(FileDiff { + path: path_str.to_string(), + chunks: diff_chunks, + }); + } else if delta.status() == git2::Delta::Added + || delta.status() == git2::Delta::Deleted + { + files.push(FileDiff { + path: path_str.to_string(), + chunks: vec![DiffChunk { + chunk_type: if delta.status() == git2::Delta::Added { + DiffChunkType::Insert + } else { + DiffChunkType::Delete + }, + content: format!( + "{} file", + if delta.status() == git2::Delta::Added { + "Added" + } else { + "Deleted" + } + ), + }], + }); + } + } + } + true + }, + None, + None, + None, + )?; + + Ok(()) + } + + /// Get diff for a worktree (before merge) + fn get_worktree_diff( + &self, + worktree_path: &Path, + files: &mut Vec<FileDiff>, + ) -> Result<(), GitServiceError> { + let worktree_repo = Repository::open(worktree_path)?; + let main_repo = self.open_repo()?; + let main_head_oid = main_repo.head()?.peel_to_commit()?.id(); + + // Get the current worktree HEAD commit + let worktree_head = worktree_repo.head()?; + let worktree_head_oid = worktree_head.peel_to_commit()?.id(); + + // Find the merge base (common ancestor) + let base_oid = worktree_repo.merge_base(main_head_oid, worktree_head_oid)?; + let base_commit = worktree_repo.find_commit(base_oid)?; + let base_tree = base_commit.tree()?; + + // Get the current tree from the worktree HEAD commit + let current_commit = worktree_repo.find_commit(worktree_head_oid)?; + let current_tree = current_commit.tree()?; + + // Create a diff between the base 
tree and current tree + let mut diff_opts = DiffOptions::new(); + diff_opts.context_lines(10); + diff_opts.interhunk_lines(0); + + let diff = worktree_repo.diff_tree_to_tree( + Some(&base_tree), + Some(&current_tree), + Some(&mut diff_opts), + )?; + + // Process committed changes + diff.foreach( + &mut |delta, _progress| { + if let Some(path_str) = delta.new_file().path().and_then(|p| p.to_str()) { + let old_file = delta.old_file(); + let new_file = delta.new_file(); + + if let Ok(diff_chunks) = self.generate_git_diff_chunks( + &worktree_repo, + &old_file, + &new_file, + path_str, + ) { + if !diff_chunks.is_empty() { + files.push(FileDiff { + path: path_str.to_string(), + chunks: diff_chunks, + }); + } else if delta.status() == git2::Delta::Added + || delta.status() == git2::Delta::Deleted + { + files.push(FileDiff { + path: path_str.to_string(), + chunks: vec![DiffChunk { + chunk_type: if delta.status() == git2::Delta::Added { + DiffChunkType::Insert + } else { + DiffChunkType::Delete + }, + content: format!( + "{} file", + if delta.status() == git2::Delta::Added { + "Added" + } else { + "Deleted" + } + ), + }], + }); + } + } + } + true + }, + None, + None, + None, + )?; + + // Also get unstaged changes (working directory changes) + let current_tree = worktree_repo.head()?.peel_to_tree()?; + + let mut unstaged_diff_opts = DiffOptions::new(); + unstaged_diff_opts.context_lines(10); + unstaged_diff_opts.interhunk_lines(0); + unstaged_diff_opts.include_untracked(true); + + let unstaged_diff = worktree_repo + .diff_tree_to_workdir_with_index(Some(&current_tree), Some(&mut unstaged_diff_opts))?; + + // Process unstaged changes + unstaged_diff.foreach( + &mut |delta, _progress| { + if let Some(path_str) = delta.new_file().path().and_then(|p| p.to_str()) { + if let Err(e) = self.process_unstaged_file( + files, + &worktree_repo, + base_oid, + worktree_path, + path_str, + &delta, + ) { + eprintln!("Error processing unstaged file {}: {:?}", path_str, e); + } + } + true + }, + None, + 
None, + None, + )?; + + Ok(()) + } + + /// Generate diff chunks using Git's native diff algorithm + fn generate_git_diff_chunks( + &self, + repo: &Repository, + old_file: &git2::DiffFile, + new_file: &git2::DiffFile, + file_path: &str, + ) -> Result<Vec<DiffChunk>, GitServiceError> { + let mut chunks = Vec::new(); + + // Create a patch for the single file using Git's native diff + let old_blob = if !old_file.id().is_zero() { + Some(repo.find_blob(old_file.id())?) + } else { + None + }; + + let new_blob = if !new_file.id().is_zero() { + Some(repo.find_blob(new_file.id())?) + } else { + None + }; + + // Generate patch using Git's diff algorithm + let mut diff_opts = DiffOptions::new(); + diff_opts.context_lines(10); + diff_opts.interhunk_lines(0); + + let patch = match (old_blob.as_ref(), new_blob.as_ref()) { + (Some(old_b), Some(new_b)) => git2::Patch::from_blobs( + old_b, + Some(Path::new(file_path)), + new_b, + Some(Path::new(file_path)), + Some(&mut diff_opts), + )?, + (None, Some(new_b)) => git2::Patch::from_buffers( + &[], + Some(Path::new(file_path)), + new_b.content(), + Some(Path::new(file_path)), + Some(&mut diff_opts), + )?, + (Some(old_b), None) => git2::Patch::from_blob_and_buffer( + old_b, + Some(Path::new(file_path)), + &[], + Some(Path::new(file_path)), + Some(&mut diff_opts), + )?, + (None, None) => { + return Ok(chunks); + } + }; + + // Process the patch hunks + for hunk_idx in 0..patch.num_hunks() { + let (_hunk, hunk_lines) = patch.hunk(hunk_idx)?; + + for line_idx in 0..hunk_lines { + let line = patch.line_in_hunk(hunk_idx, line_idx)?; + let content = String::from_utf8_lossy(line.content()).to_string(); + + let chunk_type = match line.origin() { + ' ' => DiffChunkType::Equal, + '+' => DiffChunkType::Insert, + '-' => DiffChunkType::Delete, + _ => continue, + }; + + chunks.push(DiffChunk { + chunk_type, + content, + }); + } + } + + Ok(chunks) + } + + /// Process unstaged file changes + fn process_unstaged_file( + &self, + files: &mut Vec<FileDiff>, + worktree_repo: 
&Repository, + base_oid: git2::Oid, + worktree_path: &Path, + path_str: &str, + delta: &git2::DiffDelta, + ) -> Result<(), GitServiceError> { + // Check if we already have a diff for this file from committed changes + if let Some(existing_file) = files.iter_mut().find(|f| f.path == path_str) { + // File already has committed changes, create a combined diff + let base_content = self.get_base_file_content(worktree_repo, base_oid, path_str)?; + let working_content = self.get_working_file_content(worktree_path, path_str, delta)?; + + if base_content != working_content { + if let Ok(combined_chunks) = + self.create_combined_diff_chunks(&base_content, &working_content, path_str) + { + existing_file.chunks = combined_chunks; + } + } + } else { + // File only has unstaged changes + let base_content = self.get_base_file_content(worktree_repo, base_oid, path_str)?; + let working_content = self.get_working_file_content(worktree_path, path_str, delta)?; + + if base_content != working_content || delta.status() != git2::Delta::Modified { + if let Ok(chunks) = + self.create_combined_diff_chunks(&base_content, &working_content, path_str) + { + if !chunks.is_empty() { + files.push(FileDiff { + path: path_str.to_string(), + chunks, + }); + } + } else if delta.status() != git2::Delta::Modified { + // Fallback for added/deleted files + files.push(FileDiff { + path: path_str.to_string(), + chunks: vec![DiffChunk { + chunk_type: if delta.status() == git2::Delta::Added { + DiffChunkType::Insert + } else { + DiffChunkType::Delete + }, + content: format!( + "{} file", + if delta.status() == git2::Delta::Added { + "Added" + } else { + "Deleted" + } + ), + }], + }); + } + } + } + + Ok(()) + } + + /// Get the content of a file at the base commit + fn get_base_file_content( + &self, + repo: &Repository, + base_oid: git2::Oid, + path_str: &str, + ) -> Result<String, GitServiceError> { + if let Ok(base_commit) = repo.find_commit(base_oid) { + if let Ok(base_tree) = base_commit.tree() { + if let Ok(entry) = 
base_tree.get_path(Path::new(path_str)) { + if let Ok(blob) = repo.find_blob(entry.id()) { + return Ok(String::from_utf8_lossy(blob.content()).to_string()); + } + } + } + } + Ok(String::new()) + } + + /// Get the content of a file in the working directory + fn get_working_file_content( + &self, + worktree_path: &Path, + path_str: &str, + delta: &git2::DiffDelta, + ) -> Result<String, GitServiceError> { + if delta.status() != git2::Delta::Deleted { + let file_path = worktree_path.join(path_str); + std::fs::read_to_string(&file_path).map_err(GitServiceError::from) + } else { + Ok(String::new()) + } + } + + /// Create diff chunks from two text contents + fn create_combined_diff_chunks( + &self, + old_content: &str, + new_content: &str, + path_str: &str, + ) -> Result<Vec<DiffChunk>, GitServiceError> { + let mut diff_opts = DiffOptions::new(); + diff_opts.context_lines(10); + diff_opts.interhunk_lines(0); + + let patch = git2::Patch::from_buffers( + old_content.as_bytes(), + Some(Path::new(path_str)), + new_content.as_bytes(), + Some(Path::new(path_str)), + Some(&mut diff_opts), + )?; + + let mut chunks = Vec::new(); + + for hunk_idx in 0..patch.num_hunks() { + let (_hunk, hunk_lines) = patch.hunk(hunk_idx)?; + + for line_idx in 0..hunk_lines { + let line = patch.line_in_hunk(hunk_idx, line_idx)?; + let content = String::from_utf8_lossy(line.content()).to_string(); + + let chunk_type = match line.origin() { + ' ' => DiffChunkType::Equal, + '+' => DiffChunkType::Insert, + '-' => DiffChunkType::Delete, + _ => continue, + }; + + chunks.push(DiffChunk { + chunk_type, + content, + }); + } + } + + Ok(chunks) + } + + /// Delete a file from the repository and commit the change + pub fn delete_file_and_commit( + &self, + worktree_path: &Path, + file_path: &str, + ) -> Result<String, GitServiceError> { + let repo = Repository::open(worktree_path)?; + + // Get the absolute path to the file within the worktree + let file_full_path = worktree_path.join(file_path); + + // Check if file exists and delete it + if file_full_path.exists() { + 
std::fs::remove_file(&file_full_path).map_err(|e| { + GitServiceError::IoError(std::io::Error::other(format!( + "Failed to delete file {}: {}", + file_path, e + ))) + })?; + + debug!("Deleted file: {}", file_path); + } else { + info!("File {} does not exist, skipping deletion", file_path); + } + + // Stage the deletion + let mut index = repo.index()?; + index.remove_path(Path::new(file_path))?; + index.write()?; + + // Create a commit for the file deletion + let signature = repo.signature()?; + let tree_id = index.write_tree()?; + let tree = repo.find_tree(tree_id)?; + + // Get the current HEAD commit + let head = repo.head()?; + let parent_commit = head.peel_to_commit()?; + + let commit_message = format!("Delete file: {}", file_path); + let commit_id = repo.commit( + Some("HEAD"), + &signature, + &signature, + &commit_message, + &tree, + &[&parent_commit], + )?; + + info!("File {} deleted and committed: {}", file_path, commit_id); + + Ok(commit_id.to_string()) + } + + /// Get the default branch name for the repository + pub fn get_default_branch_name(&self) -> Result<String, GitServiceError> { + let repo = self.open_repo()?; + + let result = match repo.head() { + Ok(head_ref) => Ok(head_ref.shorthand().unwrap_or("main").to_string()), + Err(e) + if e.class() == git2::ErrorClass::Reference + && e.code() == git2::ErrorCode::UnbornBranch => + { + Ok("main".to_string()) // Repository has no commits yet + } + Err(_) => Ok("main".to_string()), // Fallback + }; + result + } + + /// Recreate a worktree from an existing branch (for cold task support) + pub async fn recreate_worktree_from_branch( + &self, + branch_name: &str, + stored_worktree_path: &Path, + ) -> Result<PathBuf, GitServiceError> { + let repo = self.open_repo()?; + + // Verify branch exists before proceeding + let _branch = repo + .find_branch(branch_name, BranchType::Local) + .map_err(|_| GitServiceError::BranchNotFound(branch_name.to_string()))?; + drop(_branch); + + let stored_worktree_path_str = stored_worktree_path.to_string_lossy().to_string(); + 
info!( + "Recreating worktree using stored path: {} (branch: {})", + stored_worktree_path_str, branch_name + ); + + // Clean up existing directory if it exists to avoid git sync issues + if stored_worktree_path.exists() { + debug!( + "Removing existing directory before worktree recreation: {}", + stored_worktree_path_str + ); + std::fs::remove_dir_all(stored_worktree_path).map_err(|e| { + GitServiceError::IoError(std::io::Error::other(format!( + "Failed to remove existing worktree directory {}: {}", + stored_worktree_path_str, e + ))) + })?; + } + + // Ensure parent directory exists - critical for session continuity + if let Some(parent) = stored_worktree_path.parent() { + std::fs::create_dir_all(parent).map_err(|e| { + GitServiceError::IoError(std::io::Error::other(format!( + "Failed to create parent directory for worktree path {}: {}", + stored_worktree_path_str, e + ))) + })?; + } + + // Extract repository path for WorktreeManager + let repo_path = repo + .workdir() + .ok_or_else(|| { + GitServiceError::InvalidRepository( + "Repository has no working directory".to_string(), + ) + })? + .to_str() + .ok_or_else(|| { + GitServiceError::InvalidRepository("Repository path is not valid UTF-8".to_string()) + })? 
+ .to_string(); + + WorktreeManager::ensure_worktree_exists( + repo_path, + branch_name.to_string(), + stored_worktree_path.to_path_buf(), + ) + .await + .map_err(|e| { + GitServiceError::IoError(std::io::Error::other(format!( + "WorktreeManager error: {}", + e + ))) + })?; + + info!( + "Successfully recreated worktree at original path: {} -> {}", + branch_name, stored_worktree_path_str + ); + Ok(stored_worktree_path.to_path_buf()) + } + + /// Extract GitHub owner and repo name from git repo path + pub fn get_github_repo_info(&self) -> Result<(String, String), GitServiceError> { + let repo = self.open_repo()?; + let remote = repo.find_remote("origin").map_err(|_| { + GitServiceError::InvalidRepository("No 'origin' remote found".to_string()) + })?; + + let url = remote.url().ok_or_else(|| { + GitServiceError::InvalidRepository("Remote origin has no URL".to_string()) + })?; + + // Parse GitHub URL (supports both HTTPS and SSH formats) + let github_regex = regex::Regex::new(r"github\.com[:/]([^/]+)/(.+?)(?:\.git)?/?$") + .map_err(|e| GitServiceError::InvalidRepository(format!("Regex error: {}", e)))?; + + if let Some(captures) = github_regex.captures(url) { + let owner = captures.get(1).unwrap().as_str().to_string(); + let repo_name = captures.get(2).unwrap().as_str().to_string(); + Ok((owner, repo_name)) + } else { + Err(GitServiceError::InvalidRepository(format!( + "Not a GitHub repository: {}", + url + ))) + } + } + + /// Push the branch to GitHub remote + pub fn push_to_github( + &self, + worktree_path: &Path, + branch_name: &str, + github_token: &str, + ) -> Result<(), GitServiceError> { + let repo = Repository::open(worktree_path)?; + + // Get the remote + let remote = repo.find_remote("origin")?; + let remote_url = remote.url().ok_or_else(|| { + GitServiceError::InvalidRepository("Remote origin has no URL".to_string()) + })?; + + // Convert SSH URL to HTTPS URL if necessary + let https_url = if remote_url.starts_with("git@github.com:") { + // Convert 
git@github.com:owner/repo.git to https://github.com/owner/repo.git + remote_url.replace("git@github.com:", "https://github.com/") + } else if remote_url.starts_with("ssh://git@github.com/") { + // Convert ssh://git@github.com/owner/repo.git to https://github.com/owner/repo.git + remote_url.replace("ssh://git@github.com/", "https://github.com/") + } else { + remote_url.to_string() + }; + + // Create a temporary remote with HTTPS URL for pushing + let temp_remote_name = "temp_https_origin"; + + // Remove any existing temp remote + let _ = repo.remote_delete(temp_remote_name); + + // Create temporary HTTPS remote + let mut temp_remote = repo.remote(temp_remote_name, &https_url)?; + + // Create refspec for pushing the branch + let refspec = format!("refs/heads/{}:refs/heads/{}", branch_name, branch_name); + + // Set up authentication callback using the GitHub token + let mut callbacks = git2::RemoteCallbacks::new(); + callbacks.credentials(|_url, username_from_url, _allowed_types| { + git2::Cred::userpass_plaintext(username_from_url.unwrap_or("git"), github_token) + }); + + // Configure push options + let mut push_options = git2::PushOptions::new(); + push_options.remote_callbacks(callbacks); + + // Push the branch + let push_result = temp_remote.push(&[&refspec], Some(&mut push_options)); + + // Clean up the temporary remote + let _ = repo.remote_delete(temp_remote_name); + + // Check push result + push_result?; + + info!("Pushed branch {} to GitHub using HTTPS", branch_name); + Ok(()) + } +} + +#[cfg(test)] +mod tests { + use tempfile::TempDir; + + use super::*; + + fn create_test_repo() -> (TempDir, Repository) { + let temp_dir = TempDir::new().unwrap(); + let repo = Repository::init(temp_dir.path()).unwrap(); + + // Configure the repository + let mut config = repo.config().unwrap(); + config.set_str("user.name", "Test User").unwrap(); + config.set_str("user.email", "test@example.com").unwrap(); + + (temp_dir, repo) + } + + #[test] + fn test_git_service_creation() { 
+ let (temp_dir, _repo) = create_test_repo(); + let _git_service = GitService::new(temp_dir.path()).unwrap(); + } + + #[test] + fn test_invalid_repository_path() { + let result = GitService::new("/nonexistent/path"); + assert!(result.is_err()); + } + + #[test] + fn test_default_branch_name() { + let (temp_dir, _repo) = create_test_repo(); + let git_service = GitService::new(temp_dir.path()).unwrap(); + let branch_name = git_service.get_default_branch_name().unwrap(); + assert_eq!(branch_name, "main"); + } +} diff --git a/backend/src/services/github_service.rs b/backend/src/services/github_service.rs new file mode 100644 index 00000000..c44b6a00 --- /dev/null +++ b/backend/src/services/github_service.rs @@ -0,0 +1,274 @@ +use std::time::Duration; + +use octocrab::{Octocrab, OctocrabBuilder}; +use serde::{Deserialize, Serialize}; +use tokio::time::sleep; +use tracing::{info, warn}; + +#[derive(Debug)] +pub enum GitHubServiceError { + Client(octocrab::Error), + Auth(String), + Repository(String), + PullRequest(String), + Branch(String), +} + +impl std::fmt::Display for GitHubServiceError { + fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { + match self { + GitHubServiceError::Client(e) => write!(f, "GitHub client error: {}", e), + GitHubServiceError::Auth(e) => write!(f, "Authentication error: {}", e), + GitHubServiceError::Repository(e) => write!(f, "Repository error: {}", e), + GitHubServiceError::PullRequest(e) => write!(f, "Pull request error: {}", e), + GitHubServiceError::Branch(e) => write!(f, "Branch error: {}", e), + } + } +} + +impl std::error::Error for GitHubServiceError {} + +impl From<octocrab::Error> for GitHubServiceError { + fn from(err: octocrab::Error) -> Self { + GitHubServiceError::Client(err) + } +} + +#[derive(Debug, Clone)] +pub struct GitHubRepoInfo { + pub owner: String, + pub repo_name: String, +} + +#[derive(Debug, Clone)] +pub struct CreatePrRequest { + pub title: String, + pub body: Option<String>, + pub head_branch: String, + pub base_branch: 
String, +} + +#[derive(Debug, Clone, Serialize, Deserialize)] +pub struct PullRequestInfo { + pub number: i64, + pub url: String, + pub status: String, + pub merged: bool, + pub merged_at: Option<chrono::DateTime<chrono::Utc>>, + pub merge_commit_sha: Option<String>, +} + +#[derive(Debug, Clone)] +pub struct GitHubService { + client: Octocrab, + retry_config: RetryConfig, +} + +#[derive(Debug, Clone)] +pub struct RetryConfig { + pub max_retries: u32, + pub base_delay: Duration, + pub max_delay: Duration, +} + +impl Default for RetryConfig { + fn default() -> Self { + Self { + max_retries: 3, + base_delay: Duration::from_secs(1), + max_delay: Duration::from_secs(30), + } + } +} + +impl GitHubService { + /// Create a new GitHub service with authentication + pub fn new(github_token: &str) -> Result<Self, GitHubServiceError> { + let client = OctocrabBuilder::new() + .personal_token(github_token.to_string()) + .build() + .map_err(|e| { + GitHubServiceError::Auth(format!("Failed to create GitHub client: {}", e)) + })?; + + Ok(Self { + client, + retry_config: RetryConfig::default(), + }) + } + + /// Create a pull request on GitHub + pub async fn create_pr( + &self, + repo_info: &GitHubRepoInfo, + request: &CreatePrRequest, + ) -> Result<PullRequestInfo, GitHubServiceError> { + self.with_retry(|| async { self.create_pr_internal(repo_info, request).await }) + .await + } + + async fn create_pr_internal( + &self, + repo_info: &GitHubRepoInfo, + request: &CreatePrRequest, + ) -> Result<PullRequestInfo, GitHubServiceError> { + // Verify repository access + self.client + .repos(&repo_info.owner, &repo_info.repo_name) + .get() + .await + .map_err(|e| { + GitHubServiceError::Repository(format!( + "Cannot access repository {}/{}: {}", + repo_info.owner, repo_info.repo_name, e + )) + })?; + + // Check if the base branch exists + self.client + .repos(&repo_info.owner, &repo_info.repo_name) + .get_ref(&octocrab::params::repos::Reference::Branch( + request.base_branch.clone(), + )) + .await + .map_err(|e| { + GitHubServiceError::Branch(format!( + "Base branch '{}' does not exist: {}", + request.base_branch, e + )) + })?; 
+ + // Check if the head branch exists + self.client + .repos(&repo_info.owner, &repo_info.repo_name) + .get_ref(&octocrab::params::repos::Reference::Branch( + request.head_branch.clone(), + )) + .await + .map_err(|e| { + GitHubServiceError::Branch(format!( + "Head branch '{}' does not exist. Make sure the branch was pushed successfully: {}", + request.head_branch, e + )) + })?; + + // Create the pull request + let pr = self + .client + .pulls(&repo_info.owner, &repo_info.repo_name) + .create(&request.title, &request.head_branch, &request.base_branch) + .body(request.body.as_deref().unwrap_or("")) + .send() + .await + .map_err(|e| match e { + octocrab::Error::GitHub { source, .. } => GitHubServiceError::PullRequest(format!( + "GitHub API error: {} (status: {})", + source.message, + source.status_code.as_u16() + )), + _ => GitHubServiceError::PullRequest(format!("Failed to create PR: {}", e)), + })?; + + let pr_info = PullRequestInfo { + number: pr.number as i64, + url: pr.html_url.map(|url| url.to_string()).unwrap_or_default(), + status: "open".to_string(), + merged: false, + merged_at: None, + merge_commit_sha: None, + }; + + info!( + "Created GitHub PR #{} for branch {} in {}/{}", + pr_info.number, request.head_branch, repo_info.owner, repo_info.repo_name + ); + + Ok(pr_info) + } + + /// Update and get the status of a pull request + pub async fn update_pr_status( + &self, + repo_info: &GitHubRepoInfo, + pr_number: i64, + ) -> Result<PullRequestInfo, GitHubServiceError> { + self.with_retry(|| async { self.update_pr_status_internal(repo_info, pr_number).await }) + .await + } + + async fn update_pr_status_internal( + &self, + repo_info: &GitHubRepoInfo, + pr_number: i64, + ) -> Result<PullRequestInfo, GitHubServiceError> { + let pr = self + .client + .pulls(&repo_info.owner, &repo_info.repo_name) + .get(pr_number as u64) + .await + .map_err(|e| { + GitHubServiceError::PullRequest(format!("Failed to get PR #{}: {}", pr_number, e)) + })?; + + let status = match pr.state { + Some(octocrab::models::IssueState::Open) => "open", + 
Some(octocrab::models::IssueState::Closed) => { + if pr.merged_at.is_some() { + "merged" + } else { + "closed" + } + } + None => "unknown", + Some(_) => "unknown", // Handle any other states + }; + + let pr_info = PullRequestInfo { + number: pr.number as i64, + url: pr.html_url.map(|url| url.to_string()).unwrap_or_default(), + status: status.to_string(), + merged: pr.merged_at.is_some(), + merged_at: pr.merged_at.map(|dt| dt.naive_utc().and_utc()), + merge_commit_sha: pr.merge_commit_sha.clone(), + }; + + Ok(pr_info) + } + + /// Retry wrapper for GitHub API calls with exponential backoff + async fn with_retry<F, Fut, T>(&self, operation: F) -> Result<T, GitHubServiceError> + where + F: Fn() -> Fut, + Fut: std::future::Future<Output = Result<T, GitHubServiceError>>, + { + let mut last_error = None; + + for attempt in 0..=self.retry_config.max_retries { + match operation().await { + Ok(result) => return Ok(result), + Err(e) => { + last_error = Some(e); + + if attempt < self.retry_config.max_retries { + let delay = std::cmp::min( + self.retry_config.base_delay * 2_u32.pow(attempt), + self.retry_config.max_delay, + ); + + warn!( + "GitHub API call failed (attempt {}/{}), retrying in {:?}: {}", + attempt + 1, + self.retry_config.max_retries + 1, + delay, + last_error.as_ref().unwrap() + ); + + sleep(delay).await; + } + } + } + } + + Err(last_error.unwrap()) + } +} diff --git a/backend/src/services/mod.rs b/backend/src/services/mod.rs index 02b70e2b..20a1c8dc 100644 --- a/backend/src/services/mod.rs +++ b/backend/src/services/mod.rs @@ -1,5 +1,13 @@ pub mod analytics; +pub mod git_service; +pub mod github_service; +pub mod notification_service; pub mod pr_monitor; +pub mod process_service; pub use analytics::{generate_user_id, AnalyticsConfig, AnalyticsService}; +pub use git_service::{GitService, GitServiceError}; +pub use github_service::{CreatePrRequest, GitHubRepoInfo, GitHubService, GitHubServiceError}; +pub use notification_service::{NotificationConfig, NotificationService}; pub use pr_monitor::PrMonitorService; +pub use 
process_service::ProcessService; diff --git a/backend/src/services/notification_service.rs b/backend/src/services/notification_service.rs new file mode 100644 index 00000000..7bb6dd5e --- /dev/null +++ b/backend/src/services/notification_service.rs @@ -0,0 +1,263 @@ +use std::sync::OnceLock; + +use crate::models::config::SoundFile; + +/// Service for handling cross-platform notifications including sound alerts and push notifications +#[derive(Debug, Clone)] +pub struct NotificationService { + sound_enabled: bool, + push_enabled: bool, +} + +/// Configuration for notifications +#[derive(Debug, Clone)] +pub struct NotificationConfig { + pub sound_enabled: bool, + pub push_enabled: bool, +} + +impl Default for NotificationConfig { + fn default() -> Self { + Self { + sound_enabled: true, + push_enabled: true, + } + } +} + +/// Cache for WSL root path from PowerShell +static WSL_ROOT_PATH_CACHE: OnceLock<Option<String>> = OnceLock::new(); + +impl NotificationService { + /// Create a new NotificationService with the given configuration + pub fn new(config: NotificationConfig) -> Self { + Self { + sound_enabled: config.sound_enabled, + push_enabled: config.push_enabled, + } + } + + /// Send both sound and push notifications if enabled + pub async fn notify(&self, title: &str, message: &str, sound_file: &SoundFile) { + if self.sound_enabled { + self.play_sound_notification(sound_file).await; + } + + if self.push_enabled { + self.send_push_notification(title, message).await; + } + } + + /// Play a system sound notification across platforms + pub async fn play_sound_notification(&self, sound_file: &SoundFile) { + if !self.sound_enabled { + return; + } + + let file_path = match sound_file.get_path().await { + Ok(path) => path, + Err(e) => { + tracing::error!("Failed to create cached sound file: {}", e); + return; + } + }; + + // Use platform-specific sound notification + // Note: spawn() calls are intentionally not awaited - sound notifications should be fire-and-forget + if cfg!(target_os 
= "macos") { + let _ = tokio::process::Command::new("afplay") + .arg(&file_path) + .spawn(); + } else if cfg!(target_os = "linux") && !crate::utils::is_wsl2() { + // Try different Linux audio players + if tokio::process::Command::new("paplay") + .arg(&file_path) + .spawn() + .is_ok() + { + // Success with paplay + } else if tokio::process::Command::new("aplay") + .arg(&file_path) + .spawn() + .is_ok() + { + // Success with aplay + } else { + // Try system bell as fallback + let _ = tokio::process::Command::new("echo") + .arg("-e") + .arg("\\a") + .spawn(); + } + } else if cfg!(target_os = "windows") + || (cfg!(target_os = "linux") && crate::utils::is_wsl2()) + { + // Convert WSL path to Windows path if in WSL2 + let file_path = if crate::utils::is_wsl2() { + if let Some(windows_path) = Self::wsl_to_windows_path(&file_path).await { + windows_path + } else { + file_path.to_string_lossy().to_string() + } + } else { + file_path.to_string_lossy().to_string() + }; + + let _ = tokio::process::Command::new("powershell.exe") + .arg("-c") + .arg(format!( + r#"(New-Object Media.SoundPlayer "{}").PlaySync()"#, + file_path + )) + .spawn(); + } + } + + /// Send a cross-platform push notification + pub async fn send_push_notification(&self, title: &str, message: &str) { + if !self.push_enabled { + return; + } + + if cfg!(target_os = "macos") { + self.send_macos_notification(title, message).await; + } else if cfg!(target_os = "linux") && !crate::utils::is_wsl2() { + self.send_linux_notification(title, message).await; + } else if cfg!(target_os = "windows") + || (cfg!(target_os = "linux") && crate::utils::is_wsl2()) + { + self.send_windows_notification(title, message).await; + } + } + + /// Send macOS notification using osascript + async fn send_macos_notification(&self, title: &str, message: &str) { + let script = format!( + r#"display notification "{message}" with title "{title}" sound name "Glass""#, + message = message.replace('"', r#"\""#), + title = title.replace('"', 
r#"\""#) + ); + + let _ = tokio::process::Command::new("osascript") + .arg("-e") + .arg(script) + .spawn(); + } + + /// Send Linux notification using notify-rust + async fn send_linux_notification(&self, title: &str, message: &str) { + use notify_rust::Notification; + + let title = title.to_string(); + let message = message.to_string(); + + let _handle = tokio::task::spawn_blocking(move || { + if let Err(e) = Notification::new() + .summary(&title) + .body(&message) + .timeout(10000) + .show() + { + tracing::error!("Failed to send Linux notification: {}", e); + } + }); + drop(_handle); // Don't await, fire-and-forget + } + + /// Send Windows/WSL notification using PowerShell toast script + async fn send_windows_notification(&self, title: &str, message: &str) { + let script_path = match crate::utils::get_powershell_script().await { + Ok(path) => path, + Err(e) => { + tracing::error!("Failed to get PowerShell script: {}", e); + return; + } + }; + + // Convert WSL path to Windows path if in WSL2 + let script_path_str = if crate::utils::is_wsl2() { + if let Some(windows_path) = Self::wsl_to_windows_path(&script_path).await { + windows_path + } else { + script_path.to_string_lossy().to_string() + } + } else { + script_path.to_string_lossy().to_string() + }; + + let _ = tokio::process::Command::new("powershell.exe") + .arg("-NoProfile") + .arg("-ExecutionPolicy") + .arg("Bypass") + .arg("-File") + .arg(script_path_str) + .arg("-Title") + .arg(title) + .arg("-Message") + .arg(message) + .spawn(); + } + + /// Get WSL root path via PowerShell (cached) + async fn get_wsl_root_path() -> Option<String> { + if let Some(cached) = WSL_ROOT_PATH_CACHE.get() { + return cached.clone(); + } + + match tokio::process::Command::new("powershell.exe") + .arg("-c") + .arg("(Get-Location).Path -replace '^.*::', ''") + .current_dir("/") + .output() + .await + { + Ok(output) => { + match String::from_utf8(output.stdout) { + Ok(pwd_str) => { + let pwd = pwd_str.trim(); + tracing::info!("WSL root path
detected: {}", pwd); + + // Cache the result + let _ = WSL_ROOT_PATH_CACHE.set(Some(pwd.to_string())); + return Some(pwd.to_string()); + } + Err(e) => { + tracing::error!("Failed to parse PowerShell pwd output as UTF-8: {}", e); + } + } + } + Err(e) => { + tracing::error!("Failed to execute PowerShell pwd command: {}", e); + } + } + + // Cache the failure result + let _ = WSL_ROOT_PATH_CACHE.set(None); + None + } + + /// Convert WSL path to Windows UNC path for PowerShell + async fn wsl_to_windows_path(wsl_path: &std::path::Path) -> Option<String> { + let path_str = wsl_path.to_string_lossy(); + + // Relative paths work fine as-is in PowerShell + if !path_str.starts_with('/') { + tracing::debug!("Using relative path as-is: {}", path_str); + return Some(path_str.to_string()); + } + + // Get cached WSL root path from PowerShell + if let Some(wsl_root) = Self::get_wsl_root_path().await { + // Simply concatenate WSL root with the absolute path - PowerShell doesn't mind / + let windows_path = format!("{}{}", wsl_root, path_str); + tracing::debug!("WSL path converted: {} -> {}", path_str, windows_path); + Some(windows_path) + } else { + tracing::error!( + "Failed to determine WSL root path for conversion: {}", + path_str + ); + None + } + } +} diff --git a/backend/src/services/pr_monitor.rs b/backend/src/services/pr_monitor.rs index 0f7e845f..aa0ab72f 100644 --- a/backend/src/services/pr_monitor.rs +++ b/backend/src/services/pr_monitor.rs @@ -1,16 +1,17 @@ use std::{sync::Arc, time::Duration}; -use chrono::Utc; -use octocrab::{models::IssueState, Octocrab}; use sqlx::SqlitePool; use tokio::{sync::RwLock, time::interval}; use tracing::{debug, error, info, warn}; use uuid::Uuid; -use crate::models::{ - config::Config, - task::{Task, TaskStatus}, - task_attempt::TaskAttempt, +use crate::{ + models::{ + config::Config, + task::{Task, TaskStatus}, + task_attempt::TaskAttempt, + }, + services::{GitHubRepoInfo, GitHubService, GitService}, }; /// Service to monitor GitHub PRs and update
task status when they are merged @@ -123,22 +124,33 @@ impl PrMonitorService { let mut pr_infos = Vec::new(); for row in rows { - // Extract owner and repo from git_repo_path - if let Ok((owner, repo_name)) = Self::extract_github_repo_info(&row.git_repo_path) { - pr_infos.push(PrInfo { - attempt_id: row.attempt_id, - task_id: row.task_id, - project_id: row.project_id, - pr_number: row.pr_number, - repo_owner: owner, - repo_name, - github_token: github_token.to_string(), - }); - } else { - warn!( - "Could not extract repo info from git path: {}", - row.git_repo_path - ); + // Get GitHub repo info from local git repository + match GitService::new(&row.git_repo_path) { + Ok(git_service) => match git_service.get_github_repo_info() { + Ok((owner, repo_name)) => { + pr_infos.push(PrInfo { + attempt_id: row.attempt_id, + task_id: row.task_id, + project_id: row.project_id, + pr_number: row.pr_number, + repo_owner: owner, + repo_name, + github_token: github_token.to_string(), + }); + } + Err(e) => { + warn!( + "Could not extract repo info from git path {}: {}", + row.git_repo_path, e + ); + } + }, + Err(e) => { + warn!( + "Could not create git service for path {}: {}", + row.git_repo_path, e + ); + } } } @@ -150,58 +162,41 @@ impl PrMonitorService { &self, pr_info: &PrInfo, ) -> Result<(), Box<dyn std::error::Error + Send + Sync>> { - let octocrab = Octocrab::builder() - .personal_token(pr_info.github_token.clone()) - .build()?; + let github_service = GitHubService::new(&pr_info.github_token)?; - let pr = octocrab - .pulls(&pr_info.repo_owner, &pr_info.repo_name) - .get(pr_info.pr_number as u64) - .await?; - - let new_status = match pr.state { - Some(IssueState::Open) => "open", - Some(IssueState::Closed) => { - if pr.merged_at.is_some() { - "merged" - } else { - "closed" - } - } - None => "unknown", // Should not happen for PRs - Some(_) => "unknown", // Handle any other states + let repo_info = GitHubRepoInfo { + owner: pr_info.repo_owner.clone(), + repo_name: pr_info.repo_name.clone(), }; + let pr_status =
github_service + .update_pr_status(&repo_info, pr_info.pr_number) + .await?; + debug!( "PR #{} status: {} (was open)", - pr_info.pr_number, new_status + pr_info.pr_number, pr_status.status ); // Update the PR status in the database - if new_status != "open" { + if pr_status.status != "open" { // Extract merge commit SHA if the PR was merged - let merge_commit_sha = if new_status == "merged" { - pr.merge_commit_sha.as_deref() - } else { - None - }; + let merge_commit_sha = pr_status.merge_commit_sha.as_deref(); TaskAttempt::update_pr_status( &self.pool, pr_info.attempt_id, - new_status, - pr.merged_at.map(|dt| dt.with_timezone(&Utc)), + &pr_status.status, + pr_status.merged_at, merge_commit_sha, ) .await?; // If the PR was merged, update the task status to done - if new_status == "merged" { + if pr_status.merged { info!( - "PR #{} was merged with commit {}, updating task {} to done", - pr_info.pr_number, - merge_commit_sha.unwrap_or("unknown"), - pr_info.task_id + "PR #{} was merged, updating task {} to done", + pr_info.pr_number, pr_info.task_id ); Task::update_status( @@ -216,30 +211,4 @@ impl PrMonitorService { Ok(()) } - - /// Extract GitHub owner and repo name from git repo path (reused from TaskAttempt) - fn extract_github_repo_info( - git_repo_path: &str, - ) -> Result<(String, String), Box<dyn std::error::Error + Send + Sync>> { - use git2::Repository; - - // Try to extract from remote origin URL - let repo = Repository::open(git_repo_path)?; - let remote = repo - .find_remote("origin") - .map_err(|_| "No 'origin' remote found")?; - - let url = remote.url().ok_or("Remote origin has no URL")?; - - // Parse GitHub URL (supports both HTTPS and SSH formats) - let github_regex = regex::Regex::new(r"github\.com[:/]([^/]+)/(.+?)(?:\.git)?/?$")?; - - if let Some(captures) = github_regex.captures(url) { - let owner = captures.get(1).unwrap().as_str().to_string(); - let repo_name = captures.get(2).unwrap().as_str().to_string(); - Ok((owner, repo_name)) - } else { - Err(format!("Not a GitHub repository:
{}", url).into()) - } - } } diff --git a/backend/src/services/process_service.rs b/backend/src/services/process_service.rs new file mode 100644 index 00000000..9cad7d2c --- /dev/null +++ b/backend/src/services/process_service.rs @@ -0,0 +1,686 @@ +use sqlx::SqlitePool; +use tracing::{debug, info}; +use uuid::Uuid; + +use crate::{ + executor::Executor, + models::{ + execution_process::{CreateExecutionProcess, ExecutionProcess, ExecutionProcessType}, + executor_session::{CreateExecutorSession, ExecutorSession}, + project::Project, + task::Task, + task_attempt::{TaskAttempt, TaskAttemptError, TaskAttemptStatus}, + task_attempt_activity::{CreateTaskAttemptActivity, TaskAttemptActivity}, + }, + utils::shell::get_shell_command, +}; + +/// Service responsible for managing process execution lifecycle +pub struct ProcessService; + +impl ProcessService { + /// Start the execution flow for a task attempt (setup script + executor) + pub async fn start_execution( + pool: &SqlitePool, + app_state: &crate::app_state::AppState, + attempt_id: Uuid, + task_id: Uuid, + project_id: Uuid, + ) -> Result<(), TaskAttemptError> { + use crate::models::task::{Task, TaskStatus}; + + // Load required entities + let (task_attempt, project) = + Self::load_execution_context(pool, attempt_id, project_id).await?; + + // Update task status to indicate execution has started + Task::update_status(pool, task_id, project_id, TaskStatus::InProgress).await?; + + // Determine execution sequence based on project configuration + if Self::should_run_setup_script(&project) { + Self::start_setup_script( + pool, + app_state, + attempt_id, + task_id, + &project, + &task_attempt.worktree_path, + ) + .await + } else { + Self::start_coding_agent(pool, app_state, attempt_id, task_id, project_id).await + } + } + + /// Start the coding agent after setup is complete or if no setup is needed + pub async fn start_coding_agent( + pool: &SqlitePool, + app_state: &crate::app_state::AppState, + attempt_id: Uuid, + task_id: 
Uuid, + _project_id: Uuid, + ) -> Result<(), TaskAttemptError> { + let task_attempt = TaskAttempt::find_by_id(pool, attempt_id) + .await? + .ok_or(TaskAttemptError::TaskNotFound)?; + + let executor_config = Self::resolve_executor_config(&task_attempt.executor); + + Self::start_process_execution( + pool, + app_state, + attempt_id, + task_id, + crate::executor::ExecutorType::CodingAgent(executor_config), + "Starting executor".to_string(), + TaskAttemptStatus::ExecutorRunning, + ExecutionProcessType::CodingAgent, + &task_attempt.worktree_path, + ) + .await + } + + /// Start a dev server for this task attempt + pub async fn start_dev_server( + pool: &SqlitePool, + app_state: &crate::app_state::AppState, + attempt_id: Uuid, + task_id: Uuid, + project_id: Uuid, + ) -> Result<(), TaskAttemptError> { + // Ensure worktree exists (recreate if needed for cold task support) + let worktree_path = + TaskAttempt::ensure_worktree_exists(pool, attempt_id, project_id, "dev server").await?; + + // Get the project to access the dev_script + let project = Project::find_by_id(pool, project_id) + .await? 
+ .ok_or(TaskAttemptError::TaskNotFound)?; + + let dev_script = project.dev_script.ok_or_else(|| { + TaskAttemptError::ValidationError( + "No dev script configured for this project".to_string(), + ) + })?; + + if dev_script.trim().is_empty() { + return Err(TaskAttemptError::ValidationError( + "Dev script is empty".to_string(), + )); + } + + let result = Self::start_process_execution( + pool, + app_state, + attempt_id, + task_id, + crate::executor::ExecutorType::DevServer(dev_script), + "Starting dev server".to_string(), + TaskAttemptStatus::ExecutorRunning, // Dev servers don't create activities, just use generic status + ExecutionProcessType::DevServer, + &worktree_path, + ) + .await; + + if result.is_ok() { + app_state + .track_analytics_event( + "dev_server_started", + Some(serde_json::json!({ + "task_id": task_id.to_string(), + "project_id": project_id.to_string(), + "attempt_id": attempt_id.to_string() + })), + ) + .await; + } + + result + } + + /// Start a follow-up execution using the same executor type as the first process + /// Returns the attempt_id that was actually used (always the original attempt_id for session continuity) + pub async fn start_followup_execution( + pool: &SqlitePool, + app_state: &crate::app_state::AppState, + attempt_id: Uuid, + task_id: Uuid, + project_id: Uuid, + prompt: &str, + ) -> Result<Uuid, TaskAttemptError> { + use crate::models::task::{Task, TaskStatus}; + + // Get the current task attempt to check if worktree is deleted + let current_attempt = TaskAttempt::find_by_id(pool, attempt_id) + .await?
+ .ok_or(TaskAttemptError::TaskNotFound)?; + + let actual_attempt_id = attempt_id; + + if current_attempt.worktree_deleted { + info!( + "Resurrecting deleted attempt {} (branch: {}) for followup execution - maintaining session continuity", + attempt_id, current_attempt.branch + ); + } else { + info!( + "Continuing followup execution on active attempt {} (branch: {})", + attempt_id, current_attempt.branch + ); + } + + // Update task status to indicate follow-up execution has started + Task::update_status(pool, task_id, project_id, TaskStatus::InProgress).await?; + + // Ensure worktree exists (recreate if needed for cold task support) + // This will resurrect the worktree at the exact same path for session continuity + let worktree_path = + TaskAttempt::ensure_worktree_exists(pool, actual_attempt_id, project_id, "followup") + .await?; + + // Find the most recent coding agent execution process to get the executor type + // Look up processes from the ORIGINAL attempt to find the session + let execution_processes = + ExecutionProcess::find_by_task_attempt_id(pool, attempt_id).await?; + let most_recent_coding_agent = execution_processes + .iter() + .rev() // Reverse to get most recent first (since they're ordered by created_at ASC) + .find(|p| matches!(p.process_type, ExecutionProcessType::CodingAgent)) + .ok_or_else(|| { + tracing::error!( + "No previous coding agent execution found for task attempt {}. Found {} processes: {:?}", + attempt_id, + execution_processes.len(), + execution_processes.iter().map(|p| format!("{:?}", p.process_type)).collect::<Vec<_>>() + ); + TaskAttemptError::ValidationError("No previous coding agent execution found for follow-up".to_string()) + })?; + + // Get the executor session to find the session ID + // This looks up the session from the original attempt's processes + let executor_session = + ExecutorSession::find_by_execution_process_id(pool, most_recent_coding_agent.id) + .await?
+ .ok_or_else(|| { + tracing::error!( + "No executor session found for execution process {} (task attempt {})", + most_recent_coding_agent.id, + attempt_id + ); + TaskAttemptError::ValidationError( + "No executor session found for follow-up".to_string(), + ) + })?; + + // Determine the executor config from the stored executor_type + let executor_config = match most_recent_coding_agent.executor_type.as_deref() { + Some("claude") => crate::executor::ExecutorConfig::Claude, + Some("amp") => crate::executor::ExecutorConfig::Amp, + Some("gemini") => crate::executor::ExecutorConfig::Gemini, + Some("echo") => crate::executor::ExecutorConfig::Echo, + Some("opencode") => crate::executor::ExecutorConfig::Opencode, + _ => { + tracing::error!( + "Invalid or missing executor type '{}' for execution process {} (task attempt {})", + most_recent_coding_agent.executor_type.as_deref().unwrap_or("None"), + most_recent_coding_agent.id, + attempt_id + ); + return Err(TaskAttemptError::ValidationError(format!( + "Invalid executor type for follow-up: {}", + most_recent_coding_agent + .executor_type + .as_deref() + .unwrap_or("None") + ))); + } + }; + + // Try to use follow-up with session ID, but fall back to new session if it fails + let followup_executor = if let Some(session_id) = &executor_session.session_id { + // First try with session ID for continuation + debug!( + "SESSION_FOLLOWUP: Attempting follow-up execution with session ID: {} (attempt: {}, worktree: {})", + session_id, attempt_id, worktree_path + ); + crate::executor::ExecutorType::FollowUpCodingAgent { + config: executor_config.clone(), + session_id: executor_session.session_id.clone(), + prompt: prompt.to_string(), + } + } else { + // No session ID available, start new session + tracing::warn!( + "SESSION_FOLLOWUP: No session ID available for follow-up execution on attempt {}, starting new session (worktree: {})", + attempt_id, worktree_path + ); + crate::executor::ExecutorType::CodingAgent(executor_config.clone()) + }; 
+ + // Try to start the follow-up execution + let execution_result = Self::start_process_execution( + pool, + app_state, + actual_attempt_id, + task_id, + followup_executor, + "Starting follow-up executor".to_string(), + TaskAttemptStatus::ExecutorRunning, + ExecutionProcessType::CodingAgent, + &worktree_path, + ) + .await; + + // If follow-up execution failed and we tried to use a session ID, + // fall back to a new session + if execution_result.is_err() && executor_session.session_id.is_some() { + tracing::warn!( + "SESSION_FOLLOWUP: Follow-up execution with session ID '{}' failed for attempt {}, falling back to new session. Error: {:?}", + executor_session.session_id.as_ref().unwrap(), + attempt_id, + execution_result.as_ref().err() + ); + + // Create a new session instead of trying to resume + let new_session_executor = crate::executor::ExecutorType::CodingAgent(executor_config); + + Self::start_process_execution( + pool, + app_state, + actual_attempt_id, + task_id, + new_session_executor, + "Starting new executor session (follow-up session failed)".to_string(), + TaskAttemptStatus::ExecutorRunning, + ExecutionProcessType::CodingAgent, + &worktree_path, + ) + .await?; + } else { + // Either it succeeded or we already tried without session ID + execution_result?; + } + + Ok(actual_attempt_id) + } + + /// Unified function to start any type of process execution + #[allow(clippy::too_many_arguments)] + pub async fn start_process_execution( + pool: &SqlitePool, + app_state: &crate::app_state::AppState, + attempt_id: Uuid, + task_id: Uuid, + executor_type: crate::executor::ExecutorType, + activity_note: String, + activity_status: TaskAttemptStatus, + process_type: ExecutionProcessType, + worktree_path: &str, + ) -> Result<(), TaskAttemptError> { + let process_id = Uuid::new_v4(); + + // Create execution process record + let _execution_process = Self::create_execution_process_record( + pool, + attempt_id, + process_id, + &executor_type, + process_type.clone(), + 
worktree_path, + ) + .await?; + + // Create executor session for coding agents + if matches!(process_type, ExecutionProcessType::CodingAgent) { + // Extract follow-up prompt if this is a follow-up execution + let followup_prompt = match &executor_type { + crate::executor::ExecutorType::FollowUpCodingAgent { prompt, .. } => { + Some(prompt.clone()) + } + _ => None, + }; + Self::create_executor_session_record( + pool, + attempt_id, + task_id, + process_id, + followup_prompt, + ) + .await?; + } + + // Create activity record (skip for dev servers as they run in parallel) + if !matches!(process_type, ExecutionProcessType::DevServer) { + Self::create_activity_record(pool, process_id, activity_status.clone(), &activity_note) + .await?; + } + + tracing::info!("Starting {} for task attempt {}", activity_note, attempt_id); + + // Execute the process + let child = Self::execute_process( + &executor_type, + pool, + task_id, + attempt_id, + process_id, + worktree_path, + ) + .await?; + + // Register for monitoring + Self::register_for_monitoring(app_state, process_id, attempt_id, &process_type, child) + .await; + + tracing::info!( + "Started execution {} for task attempt {}", + process_id, + attempt_id + ); + Ok(()) + } + + /// Load the execution context (task attempt and project) with validation + async fn load_execution_context( + pool: &SqlitePool, + attempt_id: Uuid, + project_id: Uuid, + ) -> Result<(TaskAttempt, Project), TaskAttemptError> { + let task_attempt = TaskAttempt::find_by_id(pool, attempt_id) + .await? + .ok_or(TaskAttemptError::TaskNotFound)?; + + let project = Project::find_by_id(pool, project_id) + .await? 
+ .ok_or(TaskAttemptError::ProjectNotFound)?; + + Ok((task_attempt, project)) + } + + /// Check if setup script should be executed + fn should_run_setup_script(project: &Project) -> bool { + project + .setup_script + .as_ref() + .map(|script| !script.trim().is_empty()) + .unwrap_or(false) + } + + /// Start the setup script execution + async fn start_setup_script( + pool: &SqlitePool, + app_state: &crate::app_state::AppState, + attempt_id: Uuid, + task_id: Uuid, + project: &Project, + worktree_path: &str, + ) -> Result<(), TaskAttemptError> { + let setup_script = project.setup_script.as_ref().unwrap(); + + Self::start_process_execution( + pool, + app_state, + attempt_id, + task_id, + crate::executor::ExecutorType::SetupScript(setup_script.clone()), + "Starting setup script".to_string(), + TaskAttemptStatus::SetupRunning, + ExecutionProcessType::SetupScript, + worktree_path, + ) + .await + } + + /// Resolve executor configuration from string name + fn resolve_executor_config(executor_name: &Option<String>) -> crate::executor::ExecutorConfig { + match executor_name.as_ref().map(|s| s.as_str()) { + Some("claude") => crate::executor::ExecutorConfig::Claude, + Some("amp") => crate::executor::ExecutorConfig::Amp, + Some("gemini") => crate::executor::ExecutorConfig::Gemini, + Some("opencode") => crate::executor::ExecutorConfig::Opencode, + _ => crate::executor::ExecutorConfig::Echo, // Default for "echo" or None + } + } + + /// Create execution process database record + async fn create_execution_process_record( + pool: &SqlitePool, + attempt_id: Uuid, + process_id: Uuid, + executor_type: &crate::executor::ExecutorType, + process_type: ExecutionProcessType, + worktree_path: &str, + ) -> Result<ExecutionProcess, TaskAttemptError> { + let (shell_cmd, shell_arg) = get_shell_command(); + let (command, args, executor_type_string) = match executor_type { + crate::executor::ExecutorType::SetupScript(_) => ( + shell_cmd.to_string(), + Some(serde_json::to_string(&[shell_arg, "setup_script"]).unwrap()), + None, // Setup
scripts don't have an executor type + ), + crate::executor::ExecutorType::DevServer(_) => ( + shell_cmd.to_string(), + Some(serde_json::to_string(&[shell_arg, "dev_server"]).unwrap()), + None, // Dev servers don't have an executor type + ), + crate::executor::ExecutorType::CodingAgent(config) => { + let executor_type_str = match config { + crate::executor::ExecutorConfig::Echo => "echo", + crate::executor::ExecutorConfig::Claude => "claude", + crate::executor::ExecutorConfig::Amp => "amp", + crate::executor::ExecutorConfig::Gemini => "gemini", + crate::executor::ExecutorConfig::Opencode => "opencode", + }; + ( + "executor".to_string(), + None, + Some(executor_type_str.to_string()), + ) + } + crate::executor::ExecutorType::FollowUpCodingAgent { config, .. } => { + let executor_type_str = match config { + crate::executor::ExecutorConfig::Echo => "echo", + crate::executor::ExecutorConfig::Claude => "claude", + crate::executor::ExecutorConfig::Amp => "amp", + crate::executor::ExecutorConfig::Gemini => "gemini", + crate::executor::ExecutorConfig::Opencode => "opencode", + }; + ( + "followup_executor".to_string(), + None, + Some(executor_type_str.to_string()), + ) + } + }; + + let create_process = CreateExecutionProcess { + task_attempt_id: attempt_id, + process_type, + executor_type: executor_type_string, + command, + args, + working_directory: worktree_path.to_string(), + }; + + ExecutionProcess::create(pool, &create_process, process_id) + .await + .map_err(TaskAttemptError::from) + } + + /// Create executor session record for coding agents + async fn create_executor_session_record( + pool: &SqlitePool, + attempt_id: Uuid, + task_id: Uuid, + process_id: Uuid, + followup_prompt: Option<String>, + ) -> Result<(), TaskAttemptError> { + // Use follow-up prompt if provided, otherwise get the task to create prompt + let prompt = if let Some(followup_prompt) = followup_prompt { + followup_prompt + } else { + let task = Task::find_by_id(pool, task_id) + .await?
+ .ok_or(TaskAttemptError::TaskNotFound)?; + format!("{}\n\n{}", task.title, task.description.unwrap_or_default()) + }; + + let session_id = Uuid::new_v4(); + let create_session = CreateExecutorSession { + task_attempt_id: attempt_id, + execution_process_id: process_id, + prompt: Some(prompt), + }; + + ExecutorSession::create(pool, &create_session, session_id) + .await + .map(|_| ()) + .map_err(TaskAttemptError::from) + } + + /// Create activity record for process start + async fn create_activity_record( + pool: &SqlitePool, + process_id: Uuid, + activity_status: TaskAttemptStatus, + activity_note: &str, + ) -> Result<(), TaskAttemptError> { + let activity_id = Uuid::new_v4(); + let create_activity = CreateTaskAttemptActivity { + execution_process_id: process_id, + status: Some(activity_status.clone()), + note: Some(activity_note.to_string()), + }; + + TaskAttemptActivity::create(pool, &create_activity, activity_id, activity_status) + .await + .map(|_| ()) + .map_err(TaskAttemptError::from) + } + + /// Execute the process based on type + async fn execute_process( + executor_type: &crate::executor::ExecutorType, + pool: &SqlitePool, + task_id: Uuid, + attempt_id: Uuid, + process_id: Uuid, + worktree_path: &str, + ) -> Result<command_group::AsyncGroupChild, TaskAttemptError> { + use crate::executors::{DevServerExecutor, SetupScriptExecutor}; + + let result = match executor_type { + crate::executor::ExecutorType::SetupScript(script) => { + let executor = SetupScriptExecutor { + script: script.clone(), + }; + executor + .execute_streaming(pool, task_id, attempt_id, process_id, worktree_path) + .await + } + crate::executor::ExecutorType::DevServer(script) => { + let executor = DevServerExecutor { + script: script.clone(), + }; + executor + .execute_streaming(pool, task_id, attempt_id, process_id, worktree_path) + .await + } + crate::executor::ExecutorType::CodingAgent(config) => { + let executor = config.create_executor(); + executor + .execute_streaming(pool, task_id, attempt_id, process_id, worktree_path) + .await +
} + crate::executor::ExecutorType::FollowUpCodingAgent { + config, + session_id, + prompt, + } => { + use crate::executors::{ + AmpFollowupExecutor, ClaudeFollowupExecutor, GeminiFollowupExecutor, + OpencodeFollowupExecutor, + }; + + let executor: Box<dyn Executor> = match config { + crate::executor::ExecutorConfig::Claude => { + if let Some(sid) = session_id { + Box::new(ClaudeFollowupExecutor { + session_id: sid.clone(), + prompt: prompt.clone(), + }) + } else { + return Err(TaskAttemptError::TaskNotFound); // No session ID for followup + } + } + crate::executor::ExecutorConfig::Amp => { + if let Some(tid) = session_id { + Box::new(AmpFollowupExecutor { + thread_id: tid.clone(), + prompt: prompt.clone(), + }) + } else { + return Err(TaskAttemptError::TaskNotFound); // No thread ID for followup + } + } + crate::executor::ExecutorConfig::Gemini => { + if let Some(sid) = session_id { + Box::new(GeminiFollowupExecutor { + session_id: sid.clone(), + prompt: prompt.clone(), + }) + } else { + return Err(TaskAttemptError::TaskNotFound); // No session ID for followup + } + } + crate::executor::ExecutorConfig::Echo => { + // Echo doesn't support followup, use regular echo + config.create_executor() + } + crate::executor::ExecutorConfig::Opencode => { + if let Some(sid) = session_id { + Box::new(OpencodeFollowupExecutor { + session_id: sid.clone(), + prompt: prompt.clone(), + }) + } else { + return Err(TaskAttemptError::TaskNotFound); // No session ID for followup + } + } + }; + + executor + .execute_streaming(pool, task_id, attempt_id, process_id, worktree_path) + .await + } + }; + + result.map_err(|e| TaskAttemptError::Git(git2::Error::from_str(&e.to_string()))) + } + + /// Register process for monitoring + async fn register_for_monitoring( + app_state: &crate::app_state::AppState, + process_id: Uuid, + attempt_id: Uuid, + process_type: &ExecutionProcessType, + child: command_group::AsyncGroupChild, + ) { + let execution_type = match process_type { + ExecutionProcessType::SetupScript
=> crate::app_state::ExecutionType::SetupScript, + ExecutionProcessType::CodingAgent => crate::app_state::ExecutionType::CodingAgent, + ExecutionProcessType::DevServer => crate::app_state::ExecutionType::DevServer, + }; + + app_state + .add_running_execution( + process_id, + crate::app_state::RunningExecution { + task_attempt_id: attempt_id, + _execution_type: execution_type, + child, + }, + ) + .await; + } +} diff --git a/frontend/src/hooks/useTaskDetails.ts b/frontend/src/hooks/useTaskDetails.ts index f01da9dd..e87f9815 100644 --- a/frontend/src/hooks/useTaskDetails.ts +++ b/frontend/src/hooks/useTaskDetails.ts @@ -60,33 +60,18 @@ export function useTaskDetails( // Check if any execution process is currently running const isAttemptRunning = useMemo(() => { - if (!selectedAttempt || attemptData.activities.length === 0 || isStopping) { + if (!selectedAttempt || isStopping) { return false; } - const latestActivitiesByProcess = new Map< - string, - TaskAttemptActivityWithPrompt - >(); - - attemptData.activities.forEach((activity) => { - const existing = latestActivitiesByProcess.get( - activity.execution_process_id - ); - if ( - !existing || - new Date(activity.created_at) > new Date(existing.created_at) - ) { - latestActivitiesByProcess.set(activity.execution_process_id, activity); - } - }); - - return Array.from(latestActivitiesByProcess.values()).some( - (activity) => - activity.status === 'setuprunning' || - activity.status === 'executorrunning' + return attemptData.processes.some( + (process) => + (process.process_type === 'codingagent' || + process.process_type === 'setupscript') && + process.status !== 'completed' && + process.status !== 'killed' ); - }, [selectedAttempt, attemptData.activities, isStopping]); + }, [selectedAttempt, attemptData.processes, isStopping]); // Check if follow-up should be enabled const canSendFollowUp = useMemo(() => {
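---

Reviewer note: the WSL-to-Windows path rule introduced in `notification_service.rs` reduces to plain concatenation, since PowerShell tolerates forward slashes. A minimal standalone sketch of that rule follows; the free function `to_windows_path` and the example root are illustrative only (the patch implements this inside `NotificationService::wsl_to_windows_path`, and the real root is detected via `powershell.exe` and cached in a `OnceLock`):

```rust
// Hypothetical standalone sketch of the conversion rule, not the crate's API.
fn to_windows_path(wsl_root: &str, path: &str) -> String {
    if !path.starts_with('/') {
        // Relative paths work as-is in PowerShell.
        return path.to_string();
    }
    // PowerShell accepts forward slashes, so concatenating the WSL root
    // with the absolute Linux path is sufficient.
    format!("{wsl_root}{path}")
}

fn main() {
    // Example root; the real value comes from the cached PowerShell query.
    let root = r"\\wsl.localhost\Ubuntu";
    assert_eq!(
        to_windows_path(root, "/tmp/notify.wav"),
        r"\\wsl.localhost\Ubuntu/tmp/notify.wav"
    );
    assert_eq!(to_windows_path(root, "toast.ps1"), "toast.ps1");
}
```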