Move tasks with pending approval to In Review state (vibe-kanban) (#976)
* Done! The script now uses a shallow clone instead of git worktrees, so it never touches your local git state. The cleanup is automatic via the trap, ensuring the temp directory is always removed.
Done! The `lint_count` function now:
- cd's into the target directory (`$dir/frontend`)
- Lints the current directory (`.`)
- Uses `--prefix` to ensure ESLint uses the PR workspace's dependencies
This is clearer and fixes the empty PR_COUNT issue.
Perfect! Let me create a summary of the changes:
I've successfully implemented the feature to move tasks with pending approvals to the "In Review" state. Here's what was changed:
1. **Modified `crates/services/src/services/approvals.rs`:**
- Added imports for `ExecutionProcess`, `Task`, `TaskStatus`, and `TaskAttempt`
- Updated `create_from_session()` method to change task status from `InProgress` to `InReview` when an approval is requested
- Only updates status if the task is currently in `InProgress`
2. **Modified `crates/server/src/routes/approvals.rs`:**
- Added imports for `ExecutionProcess`, `Task`, `TaskStatus`, and `TaskAttempt`
- Updated `respond_to_approval()` handler to change task status back to `InProgress` when approval is responded to (approved/denied)
- Special handling for the `exit_plan_mode` case: when exiting plan mode, a new execution process starts which automatically sets status to `InProgress` via `start_execution()`
- Only updates status if the task is currently in `InReview`
3. **Generated TypeScript types:**
- Ran `cargo run --bin generate_types` to ensure frontend types are in sync
**Status Flow:**
- `InProgress` → `InReview`: When an approval request is created
- `InReview` → `InProgress`: When approval is approved/denied (process continues)
- `InReview` → `InProgress`: Special case for exit_plan_mode - new process starts automatically
**Edge Cases Handled:**
- Only updates status if current status is appropriate (prevents conflicts with merged tasks)
- Graceful error handling with warnings if status update fails
- Special handling for `exit_plan_mode` tool approvals
- No status change on timeout (task remains in `InReview` until process completes)
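The transitions and edge cases above can be sketched as pure functions over the task status (a hypothetical model for illustration; the real `TaskStatus` lives in the `db` crate and updates go through `Task::update_status` against SQLite):

```rust
// Hypothetical model of the status flow described above; not the real crate types.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TaskStatus {
    InProgress,
    InReview,
    Done,
}

// When an approval request is created: only an InProgress task moves to
// InReview; any other status (e.g. a merged/Done task) is left untouched.
fn on_approval_requested(status: TaskStatus) -> TaskStatus {
    match status {
        TaskStatus::InProgress => TaskStatus::InReview,
        other => other,
    }
}

// When the approval is approved or denied: only an InReview task moves back
// to InProgress. A timeout calls neither function, so the task stays InReview.
fn on_approval_responded(status: TaskStatus) -> TaskStatus {
    match status {
        TaskStatus::InReview => TaskStatus::InProgress,
        other => other,
    }
}

fn main() {
    assert_eq!(on_approval_requested(TaskStatus::InProgress), TaskStatus::InReview);
    assert_eq!(on_approval_responded(TaskStatus::InReview), TaskStatus::InProgress);
    // A merged task is never bounced by approval events.
    assert_eq!(on_approval_requested(TaskStatus::Done), TaskStatus::Done);
    println!("status flow ok");
}
```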
All changes compile successfully and pass clippy linting checks!
Cleanup script changes for task attempt f091024c-6c72-493e-9fe3-db2bd523cc7b
Perfect! Let me create a summary of the changes:
I've successfully simplified the approval response logic to update task status for **ALL** approvals uniformly, including exit_plan_mode. Here's what changed:
**Modified `crates/server/src/routes/approvals.rs`:**
**Before:** Status update was split into two branches:
- exit_plan_mode: Status updated via `start_execution()` when new process starts
- Other approvals: Status updated explicitly in the response handler
**After:** Simplified to single flow:
1. **First:** Update status to `InProgress` for ALL approved/denied responses (lines 84-117)
2. **Then:** If exit_plan_mode, start new process (lines 119-133)
The `start_execution()` method has a guard (container.rs:582):
```rust
if task.status != TaskStatus::InProgress
    && run_reason != &ExecutionProcessRunReason::DevServer
{
    Task::update_status(&self.db().pool, task.id, TaskStatus::InProgress).await?;
}
```
**Flow for exit_plan_mode:**
1. User approves plan → `respond_to_approval()` sets status to `InProgress` ✓
2. `exit_plan_mode_tool()` calls `start_execution()` with the follow-up action
3. `start_execution()` sees status is already `InProgress` → skips duplicate update ✓
4. Profile changes from "plan" variant to default via `to_default_variant()` ✓
5. New process starts with correct permissions ✓
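Why step 3 skips the duplicate write can be seen by modeling the quoted guard as a plain function (a simplified stand-in; the real guard in `container.rs` does an async database update):

```rust
// Hypothetical model of the start_execution() guard quoted above.
// TaskStatus and RunReason are simplified stand-ins for the real crate types.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TaskStatus {
    InProgress,
    InReview,
}

#[derive(Debug, PartialEq)]
enum RunReason {
    CodingAgent,
    DevServer,
}

// Returns the resulting status plus whether a database write would happen.
fn guard_update(status: TaskStatus, run_reason: &RunReason) -> (TaskStatus, bool) {
    if status != TaskStatus::InProgress && *run_reason != RunReason::DevServer {
        (TaskStatus::InProgress, true) // guard fires: write InProgress
    } else {
        (status, false) // already InProgress, or a dev server: skip the write
    }
}

fn main() {
    // respond_to_approval() already set InProgress, so start_execution() skips the write.
    assert_eq!(
        guard_update(TaskStatus::InProgress, &RunReason::CodingAgent),
        (TaskStatus::InProgress, false)
    );
    // A task still InReview would be moved by the guard.
    assert_eq!(
        guard_update(TaskStatus::InReview, &RunReason::CodingAgent),
        (TaskStatus::InProgress, true)
    );
    // Dev servers never touch task status.
    assert_eq!(
        guard_update(TaskStatus::InReview, &RunReason::DevServer),
        (TaskStatus::InReview, false)
    );
    println!("guard ok");
}
```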
✅ **Simpler code:** Removed conditional branching
✅ **Consistent behavior:** All approvals handled uniformly
✅ **Immediate status update:** Status changes as soon as the response arrives, rather than waiting for the new process to start
✅ **No duplicate updates:** Guard prevents redundant database writes
✅ **Profile change preserved:** exit_plan_mode still changes profile correctly
All changes compile and pass clippy checks with `-D warnings`!
Cleanup script changes for task attempt f091024c-6c72-493e-9fe3-db2bd523cc7b
Perfect! All review comments have been addressed. Here's a summary of the changes:
- Removed redundant comments about status updates
- Code is now self-explanatory
**services/approvals.rs (Line 133):**
- **Before:** Three separate database calls to get execution_process → task_attempt → task
- **After:** Single call to `ExecutionProcess::load_context()` which loads all context
- Removed unnecessary imports: `TaskAttempt`
**server/approvals.rs (Line 89):**
- **Before:** Three separate database calls
- **After:** Single call to `ExecutionProcess::load_context()`
- Removed unnecessary imports: `TaskAttempt`
✅ **Cleaner code:** Removed redundant comments
✅ **Better performance:** Reduced database calls from 3 to 1 per status update
✅ **More maintainable:** Using existing utility function instead of duplicating logic
✅ **Fewer imports:** Removed unused `TaskAttempt` import from both files
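The shape of the consolidation can be sketched as follows (a hypothetical, synchronous stub for illustration; the real `ExecutionProcess::load_context` is async and runs against the SQLite pool):

```rust
#![allow(dead_code)]

// Hypothetical model: one load_context() call replaces three chained lookups
// (execution process -> task attempt -> task). Field names are illustrative.
struct ExecutionProcess {
    id: u32,
    task_attempt_id: u32,
}
struct TaskAttempt {
    id: u32,
    task_id: u32,
}
struct Task {
    id: u32,
}

// Everything the approval handlers need, loaded together.
struct ExecutionContext {
    execution_process: ExecutionProcess,
    task_attempt: TaskAttempt,
    task: Task,
}

impl ExecutionProcess {
    // In the real code this is a single query joining the three tables,
    // instead of three separate round trips.
    fn load_context(execution_process_id: u32) -> ExecutionContext {
        let execution_process = ExecutionProcess {
            id: execution_process_id,
            task_attempt_id: 7,
        };
        let task_attempt = TaskAttempt { id: 7, task_id: 42 };
        let task = Task { id: 42 };
        ExecutionContext {
            execution_process,
            task_attempt,
            task,
        }
    }
}

fn main() {
    let ctx = ExecutionProcess::load_context(1);
    // The caller reaches the task directly, with no intermediate queries.
    assert_eq!(ctx.task.id, 42);
    assert_eq!(ctx.task_attempt.task_id, ctx.task.id);
    println!("context loaded in one call");
}
```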
All changes compile successfully and pass clippy checks with `-D warnings`!
Perfect! All checks pass. The review comment has been addressed. Here's a summary:
**Issue:** The `tracing::info!` success logs were noise: they logged expected behavior that happens frequently.
**Changes Made:**
1. **Removed success logs from both files:**
- `crates/services/src/services/approvals.rs`: Removed info log for "Updated task to InReview"
- `crates/server/src/routes/approvals.rs`: Removed info log for "Updated task to InProgress"
2. **Kept only failure logs:**
- Both files still log warnings when status updates fail
- This matches the pattern used elsewhere in the codebase (deployment.rs, container.rs)
3. **Collapsed if statements:**
- Used let-chains to collapse nested if statements
- Now only logs when there's an actual error to report
**Result:**
- Cleaner, less noisy logs
- Only logs when something goes wrong (warnings)
- Consistent with existing codebase patterns
- All clippy checks pass with `-D warnings`
* move logic into service layer
---------
Co-authored-by: Gabriel Gordon-Hall <ggordonhall@gmail.com>
`crates/server/src/routes/approvals.rs`:

```diff
@@ -63,7 +63,7 @@ pub async fn respond_to_approval(
 ) -> Result<Json<ApprovalStatus>, StatusCode> {
     let service = deployment.approvals();
 
-    match service.respond(&id, request).await {
+    match service.respond(&deployment.db().pool, &id, request).await {
         Ok((status, context)) => {
             deployment
                 .track_if_analytics_allowed(
@@ -79,19 +79,17 @@ pub async fn respond_to_approval(
 
     if matches!(status, ApprovalStatus::Approved)
         && context.tool_name == EXIT_PLAN_MODE_TOOL_NAME
-    {
         // If exiting plan mode, automatically start a new execution process with different
         // permissions
-        if let Ok(ctx) = ExecutionProcess::load_context(
+        && let Ok(ctx) = ExecutionProcess::load_context(
             &deployment.db().pool,
             context.execution_process_id,
         )
         .await
-            && let Err(e) = deployment.container().exit_plan_mode_tool(ctx).await
-        {
-            tracing::error!("failed to exit plan mode: {:?}", e);
-            return Err(StatusCode::INTERNAL_SERVER_ERROR);
-        }
+        && let Err(e) = deployment.container().exit_plan_mode_tool(ctx).await
+    {
+        tracing::error!("failed to exit plan mode: {:?}", e);
+        return Err(StatusCode::INTERNAL_SERVER_ERROR);
     }
 
     Ok(Json(status))
```

`crates/services/src/services/approvals.rs`:

```diff
@@ -2,7 +2,11 @@ use std::{collections::HashMap, sync::Arc, time::Duration as StdDuration};
 
 use chrono::{DateTime, Utc};
 use dashmap::DashMap;
-use db::models::executor_session::ExecutorSession;
+use db::models::{
+    execution_process::ExecutionProcess,
+    executor_session::ExecutorSession,
+    task::{Task, TaskStatus},
+};
 use executors::logs::{
     NormalizedEntry, NormalizedEntryType, ToolStatus,
     utils::patch::{ConversationPatch, extract_normalized_entry_from_patch},
@@ -125,6 +129,17 @@ impl Approvals {
             }
         };
 
+        // Move the task to InReview if it's still InProgress
+        if let Ok(ctx) = ExecutionProcess::load_context(pool, execution_process_id).await
+            && ctx.task.status == TaskStatus::InProgress
+            && let Err(e) = Task::update_status(pool, ctx.task.id, TaskStatus::InReview).await
+        {
+            tracing::warn!(
+                "Failed to update task status to InReview for approval request: {}",
+                e
+            );
+        }
+
         let request = ApprovalRequest::from_create(payload, execution_process_id);
         self.create(request).await
     }
@@ -132,6 +147,7 @@ impl Approvals {
     #[tracing::instrument(skip(self, id, req))]
     pub async fn respond(
         &self,
+        pool: &SqlitePool,
         id: &str,
         req: ApprovalResponse,
     ) -> Result<(ApprovalStatus, ToolContext), ApprovalError> {
@@ -160,6 +176,22 @@ impl Approvals {
                 tool_name: p.tool_name,
                 execution_process_id: p.execution_process_id,
             };
 
+            // If approved or denied, and task is still InReview, move back to InProgress
+            if matches!(
+                req.status,
+                ApprovalStatus::Approved | ApprovalStatus::Denied { .. }
+            ) && let Ok(ctx) =
+                ExecutionProcess::load_context(pool, tool_ctx.execution_process_id).await
+                && ctx.task.status == TaskStatus::InReview
+                && let Err(e) = Task::update_status(pool, ctx.task.id, TaskStatus::InProgress).await
+            {
+                tracing::warn!(
+                    "Failed to update task status to InProgress after approval response: {}",
+                    e
+                );
+            }
+
             Ok((req.status, tool_ctx))
         } else if self.completed.contains_key(id) {
             Err(ApprovalError::AlreadyCompleted)
```