From b42986a55fb1754f732750f9eca923575522d610 Mon Sep 17 00:00:00 2001
From: Louis Knight-Webb
Date: Thu, 28 Aug 2025 23:17:37 +0100
Subject: [PATCH] Amp now supports parsing logs in the same way as Claude (vibe-kanban) (#564)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* Perfect! All tests are now passing. Let me create a summary of what was accomplished:

## ✅ Implementation Complete

**Successfully migrated Amp to use Claude's log normalization while maintaining the Amp executor:**

### Changes Made:

1. **Profile Update** (`crates/executors/default_profiles.json`, L55-L67)
   - ✅ Updated to `@sourcegraph/amp@latest`
   - ✅ Replaced `--format=jsonl` with `--stream-json`
2. **Shared Log Processor** (`crates/executors/src/executors/claude.rs`, L154-L168)
   - ✅ Made the `ClaudeLogProcessor` struct and its `process_logs` method `pub`
   - ✅ Removed the unused executor parameter from `process_logs`
   - ✅ Updated the Claude executor call site
3. **Amp Executor Simplification** (`crates/executors/src/executors/amp.rs`, L101-L109)
   - ✅ Replaced the entire `normalize_logs` implementation with the shared `ClaudeLogProcessor` (sketched below)
   - ✅ Deleted 400+ lines of Amp-specific JSON parsing code
   - ✅ Kept spawn/follow-up logic intact (preserving `threads continue` functionality)
4. **Test Updates** (`crates/executors/src/profile.rs`, L210-L211)
   - ✅ Updated test assertions for the new Amp profile configuration

### Benefits Achieved:

- **✅ No Code Duplication**: Single source of truth for stream JSON log normalization
- **✅ 100% Claude Compatibility**: Amp now uses identical log parsing to Claude
- **✅ Preserved Functionality**: All spawn/follow-up logic maintained
- **✅ No Breaking Changes**: Existing Amp executor structure preserved
- **✅ Passing Tests**: All 48 tests pass
- **✅ Clean Build**: No compilation errors, only benign warnings

**Amp now supports parsing logs in the same way as Claude with shared log normalization! 🎉**
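For reference, the Amp executor's `normalize_logs` now reduces to a thin delegation to the shared processor. The sketch below is condensed from the `amp.rs` hunk in the diff that follows (the unchanged spawn/follow-up methods and trait boilerplate are elided), so read it as illustrative rather than the exact committed code:

```rust
use std::{path::PathBuf, sync::Arc};

use utils::msg_store::MsgStore;

use crate::{
    executors::claude::{ClaudeLogProcessor, HistoryStrategy},
    logs::{stderr_processor::normalize_stderr_logs, utils::EntryIndexProvider},
};

impl StandardCodingAgentExecutor for Amp {
    // ...spawn and spawn_follow_up unchanged...

    fn normalize_logs(&self, msg_store: Arc<MsgStore>, current_dir: &PathBuf) {
        let entry_index_provider = EntryIndexProvider::start_from(&msg_store);

        // Stdout (Amp's --stream-json output) is handled by Claude's shared log processor;
        // AmpResume accounts for Amp threads re-streaming history from previous executions.
        ClaudeLogProcessor::process_logs(
            msg_store.clone(),
            current_dir,
            entry_index_provider.clone(),
            HistoryStrategy::AmpResume,
        );

        // Stderr logs still go through the standard stderr processor.
        normalize_stderr_logs(msg_store, entry_index_provider);
    }
}
```

Passing `HistoryStrategy::AmpResume` is what lets the shared processor clear re-streamed history when an Amp thread is resumed, so follow-up turns don't duplicate earlier entries.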
* Cleanup script changes for task attempt 4b7b3780-24e9-40e5-b52b-5bd86c37257e

* ## Corrected Plan

You're absolutely right - I overcomplicated this! The Oracle confirmed my analysis:

### **Root Cause**

The `--stream-json flag requires --execute mode` error occurred simply because the Amp profile was missing the `--execute` flag, not because of how we pass prompts.

### **Correct Fix**

1. **✅ Keep the profile change**: The `--execute` flag has already been added to `crates/executors/default_profiles.json` (L60)
2. **🔄 Revert executor changes**: Restore the original stdin-based approach in `crates/executors/src/executors/amp.rs` (L25-47), since both methods work with `--execute`
3. **✅ Keep shared log normalization**: All the Claude log processor sharing remains unchanged

### **Why This Works**

According to the Amp docs, `--execute` supports both:

- `echo "prompt" | amp --execute --stream-json` (current approach)
- `amp --execute "prompt" --stream-json` (what I tried to implement)

Since our current stdin approach works fine with `--execute`, I just need to:

- Revert the spawn method to use stdin (as it was originally)
- Keep the `--execute` flag in the profile
- Keep all the log normalization improvements

**Ready to proceed with this minimal fix?**

* Cleanup script changes for task attempt 4b7b3780-24e9-40e5-b52b-5bd86c37257e

* ## ✅ Fix Complete

**Successfully resolved the `--stream-json flag requires --execute mode` error with a minimal fix:**

### **Changes Made:**

1. **✅ Profile Fix** (`crates/executors/default_profiles.json`, L60)
   - Added the `--execute` flag to the Amp profile parameters
2. **✅ Restored Stdin Approach** (`crates/executors/src/executors/amp.rs`, L25-53)
   - Reverted to the original stdin-based prompt feeding
   - Removed the unnecessary command-line argument approach
3. **✅ Updated Tests** (`crates/executors/src/profile.rs`, L210-212)
   - Added an assertion for the `--execute` flag in the profile tests

### **Final Configuration:**

```json
"AMP": {
  "command": {
    "base": "npx -y @sourcegraph/amp@latest",
    "params": [
      "--execute",
      "--stream-json",
      "--dangerously-allow-all"
    ]
  }
}
```

### **Result:**

- **✅ Error Resolved**: The `--stream-json flag requires --execute mode` error should now be fixed
- **✅ Shared Log Normalization**: Amp still uses Claude's log processor (no code duplication)
- **✅ All Tests Pass**: 48/48 tests passing
- **✅ Clean Build**: No compilation errors

**Amp now properly supports `--stream-json` with shared Claude-compatible log normalization!
🎉** * pin working version * Parse AMP tools * Merge main * Restore AMP threads message deduplication * rollback DiffChangeKind * lint --------- Co-authored-by: Solomon --- crates/executors/default_profiles.json | 5 +- crates/executors/src/executors/amp.rs | 983 +----------------- crates/executors/src/executors/claude.rs | 486 ++++++++- crates/executors/src/profile.rs | 5 +- .../DisplayConversationEntry.tsx | 23 +- .../NormalizedConversation/ToolDetails.tsx | 2 +- 6 files changed, 479 insertions(+), 1025 deletions(-) diff --git a/crates/executors/default_profiles.json b/crates/executors/default_profiles.json index 820f8301..6b11ea4b 100644 --- a/crates/executors/default_profiles.json +++ b/crates/executors/default_profiles.json @@ -56,9 +56,10 @@ "mcp_config_path": null, "AMP": { "command": { - "base": "npx -y @sourcegraph/amp@0.0.1752148945-gd8844f", + "base": "npx -y @sourcegraph/amp@latest", "params": [ - "--format=jsonl", + "--execute", + "--stream-json", "--dangerously-allow-all" ] } diff --git a/crates/executors/src/executors/amp.rs b/crates/executors/src/executors/amp.rs index 45806116..bf6ba1a7 100644 --- a/crates/executors/src/executors/amp.rs +++ b/crates/executors/src/executors/amp.rs @@ -1,25 +1,19 @@ -use std::{collections::HashMap, path::PathBuf, process::Stdio, sync::Arc}; +use std::{path::PathBuf, process::Stdio, sync::Arc}; use async_trait::async_trait; use command_group::{AsyncCommandGroup, AsyncGroupChild}; -use futures::StreamExt; -use json_patch::Patch; use serde::{Deserialize, Serialize}; use tokio::{io::AsyncWriteExt, process::Command}; use ts_rs::TS; -use utils::{ - diff::create_unified_diff, msg_store::MsgStore, path::make_path_relative, - shell::get_shell_command, -}; +use utils::{msg_store::MsgStore, shell::get_shell_command}; use crate::{ command::CommandBuilder, - executors::{ExecutorError, StandardCodingAgentExecutor}, - logs::{ - ActionType, FileChange, NormalizedEntry, NormalizedEntryType, TodoItem as LogsTodoItem, - stderr_processor::normalize_stderr_logs, - utils::{EntryIndexProvider, patch::ConversationPatch}, + executors::{ + ExecutorError, StandardCodingAgentExecutor, + claude::{ClaudeLogProcessor, HistoryStrategy}, }, + logs::{stderr_processor::normalize_stderr_logs, utils::EntryIndexProvider}, }; /// An executor that uses Amp to process tasks @@ -38,25 +32,24 @@ impl StandardCodingAgentExecutor for Amp { ) -> Result { let (shell_cmd, shell_arg) = get_shell_command(); let amp_command = self.command.build_initial(); - let combined_prompt = utils::text::combine_prompt(&self.append_prompt, prompt); let mut command = Command::new(shell_cmd); command .kill_on_drop(true) - .stdin(Stdio::piped()) // <-- open a pipe + .stdin(Stdio::piped()) .stdout(Stdio::piped()) .stderr(Stdio::piped()) .current_dir(current_dir) .arg(shell_arg) - .arg(amp_command); + .arg(&_command); let mut child = command.group_spawn()?; - // feed the prompt in, then close the pipe so `amp` sees EOF + // Feed the prompt in, then close the pipe so amp sees EOF if let Some(mut stdin) = child.inner().stdin.take() { - stdin.write_all(combined_prompt.as_bytes()).await.unwrap(); - stdin.shutdown().await.unwrap(); // or `drop(stdin);` + stdin.write_all(combined_prompt.as_bytes()).await?; + stdin.shutdown().await?; } Ok(child) @@ -99,950 +92,18 @@ impl StandardCodingAgentExecutor for Amp { Ok(child) } - fn normalize_logs(&self, raw_logs_msg_store: Arc, current_dir: &PathBuf) { - let entry_index_provider = EntryIndexProvider::start_from(&raw_logs_msg_store); + fn normalize_logs(&self, msg_store: Arc, 
current_dir: &PathBuf) { + let entry_index_provider = EntryIndexProvider::start_from(&msg_store); + + // Process stdout logs (Amp's stream JSON output) using Claude's log processor + ClaudeLogProcessor::process_logs( + msg_store.clone(), + current_dir, + entry_index_provider.clone(), + HistoryStrategy::AmpResume, + ); // Process stderr logs using the standard stderr processor - normalize_stderr_logs(raw_logs_msg_store.clone(), entry_index_provider.clone()); - - // Process stdout logs (Amp's JSON output) - let current_dir = current_dir.clone(); - tokio::spawn(async move { - let mut s = raw_logs_msg_store.stdout_lines_stream(); - - let mut seen_amp_message_ids: HashMap> = HashMap::new(); - // Consolidated tool state keyed by toolUseID - let mut tool_records: HashMap = HashMap::new(); - while let Some(Ok(line)) = s.next().await { - let trimmed = line.trim(); - match serde_json::from_str(trimmed) { - Ok(amp_json) => match amp_json { - AmpJson::Messages { - messages, - tool_results, - } => { - for (amp_message_id, message) in &messages { - let role = &message.role; - - for (content_index, content_item) in - message.content.iter().enumerate() - { - let mut has_patch_ids = - seen_amp_message_ids.get_mut(amp_message_id); - - if let Some(mut entry) = content_item.to_normalized_entry( - role, - message, - ¤t_dir.to_string_lossy(), - ) { - // Text - if matches!(&content_item, AmpContentItem::Text { .. }) - && role == "user" - { - // Remove all previous roles - for index_to_remove in 0..entry_index_provider.current() - { - raw_logs_msg_store.push_patch( - ConversationPatch::remove_diff(0.to_string()), // Always 0 as we're removing each index - ); - } - entry_index_provider.reset(); - // Clear tool state on new user message to avoid stale mappings - tool_records.clear(); - } - - // Consolidate tool state and refine concise content - if let AmpContentItem::ToolUse { id, tool_data } = - content_item - { - let rec = tool_records.entry(id.clone()).or_default(); - rec.tool_name = Some(tool_data.get_name().to_string()); - if let Some(new_content) = rec - .update_tool_content_from_tool_input( - tool_data, - ¤t_dir.to_string_lossy(), - ) - { - entry.content = new_content; - } - rec.update_concise(&entry.content); - } - - let patch: Patch = match &mut has_patch_ids { - None => { - let new_id = entry_index_provider.next(); - seen_amp_message_ids - .entry(*amp_message_id) - .or_default() - .push(new_id); - // Track tool_use id if present - if let AmpContentItem::ToolUse { id, .. } = - content_item - && let Some(rec) = tool_records.get_mut(id) - { - rec.entry_idx = Some(new_id); - } - ConversationPatch::add_normalized_entry( - new_id, entry, - ) - } - Some(patch_ids) => match patch_ids.get(content_index) { - Some(patch_id) => { - // Update tool record's entry index - if let AmpContentItem::ToolUse { id, .. } = - content_item - && let Some(rec) = tool_records.get_mut(id) - { - rec.entry_idx = Some(*patch_id); - } - ConversationPatch::replace(*patch_id, entry) - } - None => { - let new_id = entry_index_provider.next(); - patch_ids.push(new_id); - if let AmpContentItem::ToolUse { id, .. 
} = - content_item - && let Some(rec) = tool_records.get_mut(id) - { - rec.entry_idx = Some(new_id); - } - ConversationPatch::add_normalized_entry( - new_id, entry, - ) - } - }, - }; - - raw_logs_msg_store.push_patch(patch); - } - - // Handle tool_result messages in-stream, keyed by toolUseID - if let AmpContentItem::ToolResult { - tool_use_id, - run, - content: result_content, - } = content_item - { - let rec = - tool_records.entry(tool_use_id.clone()).or_default(); - rec.run = run.clone(); - rec.content_result = result_content.clone(); - if let Some(idx) = rec.entry_idx - && let Some(entry) = build_result_entry(rec) - { - raw_logs_msg_store - .push_patch(ConversationPatch::replace(idx, entry)); - } - } - - // No separate pending apply: handled right after ToolUse entry creation - } - } - // Also process separate toolResults pairs that may arrive outside messages - for AmpToolResultsEntry::Pair([first, second]) in tool_results { - // Normalize order: references to ToolUse then ToolResult - let (tool_use_ref, tool_result_ref) = match (&first, &second) { - ( - AmpToolResultsObject::ToolUse { .. }, - AmpToolResultsObject::ToolResult { .. }, - ) => (&first, &second), - ( - AmpToolResultsObject::ToolResult { .. }, - AmpToolResultsObject::ToolUse { .. }, - ) => (&second, &first), - _ => continue, - }; - - // Apply tool_use summary - let (id, name, input_val) = match tool_use_ref { - AmpToolResultsObject::ToolUse { id, name, input } => { - (id.clone(), name.clone(), input.clone()) - } - _ => unreachable!(), - }; - let rec = tool_records.entry(id.clone()).or_default(); - rec.tool_name = Some(name.clone()); - // Only update tool input/args if the input is meaningful (not empty) - if is_meaningful_input(&input_val) { - if let Some(parsed) = parse_tool_input(&name, &input_val) { - if let Some(new_content) = rec - .update_tool_content_from_tool_input( - &parsed, - ¤t_dir.to_string_lossy(), - ) - { - rec.update_concise(&new_content); - } - } else { - rec.args = Some(input_val); - } - } - - // Apply tool_result summary - if let AmpToolResultsObject::ToolResult { - tool_use_id: _, - run, - content, - } = tool_result_ref - { - rec.run = run.clone(); - rec.content_result = content.clone(); - } - - // Render: replace existing entry or add a new one - if let Some(idx) = rec.entry_idx { - if let Some(entry) = build_result_entry(rec) { - raw_logs_msg_store - .push_patch(ConversationPatch::replace(idx, entry)); - } - } else if let Some(entry) = build_result_entry(rec) { - let new_id = entry_index_provider.next(); - if let Some(rec_mut) = tool_records.get_mut(&id) { - rec_mut.entry_idx = Some(new_id); - } - raw_logs_msg_store.push_patch( - ConversationPatch::add_normalized_entry(new_id, entry), - ); - } - } - } - AmpJson::Initial { thread_id } => { - if let Some(thread_id) = thread_id { - raw_logs_msg_store.push_session_id(thread_id); - } - } - _ => {} - }, - Err(_) => { - let trimmed = line.trim(); - if !trimmed.is_empty() { - let entry = NormalizedEntry { - timestamp: None, - entry_type: NormalizedEntryType::SystemMessage, - content: format!("Raw output: {trimmed}"), - metadata: None, - }; - - let new_id = entry_index_provider.next(); - let patch = ConversationPatch::add_normalized_entry(new_id, entry); - raw_logs_msg_store.push_patch(patch); - } - } - }; - } - }); - } -} - -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)] -#[serde(tag = "type")] -pub enum AmpJson { - #[serde(rename = "messages")] - Messages { - messages: Vec<(usize, AmpMessage)>, - #[serde(rename = "toolResults")] - 
tool_results: Vec, - }, - #[serde(rename = "initial")] - Initial { - #[serde(rename = "threadID")] - thread_id: Option, - }, - #[serde(rename = "token-usage")] - TokenUsage(serde_json::Value), - #[serde(rename = "state")] - State { state: String }, - #[serde(rename = "shutdown")] - Shutdown, - #[serde(rename = "tool-status")] - ToolStatus(serde_json::Value), - // Subthread/subagent noise we should ignore - #[serde(rename = "subagent-started")] - SubagentStarted(serde_json::Value), - #[serde(rename = "subagent-status")] - SubagentStatus(serde_json::Value), - #[serde(rename = "subagent-finished")] - SubagentFinished(serde_json::Value), - #[serde(rename = "subthread-activity")] - SubthreadActivity(serde_json::Value), -} - -impl AmpJson { - pub fn should_process(&self) -> bool { - matches!(self, AmpJson::Messages { .. }) - } - - pub fn extract_session_id(&self) -> Option { - match self { - AmpJson::Initial { thread_id } => thread_id.clone(), - _ => None, - } - } - - pub fn has_streaming_content(&self) -> bool { - match self { - AmpJson::Messages { messages, .. } => messages.iter().any(|(_index, message)| { - if let Some(state) = &message.state { - if let Some(state_type) = state.get("type").and_then(|t| t.as_str()) { - state_type == "streaming" - } else { - false - } - } else { - false - } - }), - _ => false, - } - } -} - -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)] -pub struct AmpMessage { - pub role: String, - pub content: Vec, - pub state: Option, - pub meta: Option, -} - -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)] -pub struct AmpMeta { - #[serde(rename = "sentAt")] - pub sent_at: u64, -} - -// Typed objects for top-level toolResults stream (outside messages) -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)] -#[serde(untagged)] -pub enum AmpToolResultsEntry { - // Common shape: an array of two objects [tool_use, tool_result] - Pair([AmpToolResultsObject; 2]), -} - -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)] -#[serde(tag = "type")] -pub enum AmpToolResultsObject { - #[serde(rename = "tool_use")] - ToolUse { - id: String, - name: String, - #[serde(default)] - input: serde_json::Value, - }, - #[serde(rename = "tool_result")] - ToolResult { - #[serde(rename = "toolUseID")] - tool_use_id: String, - #[serde(default)] - run: Option, - #[serde(default)] - content: Option, - }, -} - -/// Tool data combining name and input -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)] -#[serde(tag = "name", content = "input")] -pub enum AmpToolData { - #[serde(alias = "read", alias = "read_file")] - Read { - #[serde(alias = "file_path")] - path: String, - }, - #[serde(alias = "create_file")] - CreateFile { - #[serde(alias = "file_path")] - path: String, - #[serde(alias = "file_content")] - content: Option, - }, - #[serde(alias = "edit_file", alias = "edit", alias = "undo_edit")] - EditFile { - #[serde(alias = "file_path")] - path: String, - #[serde(default)] - old_str: Option, - #[serde(default)] - new_str: Option, - }, - #[serde(alias = "bash", alias = "Bash")] - Bash { - #[serde(alias = "cmd")] - command: String, - }, - #[serde(alias = "grep", alias = "codebase_search_agent", alias = "Grep")] - Search { - #[serde(alias = "query")] - pattern: String, - #[serde(default)] - include: Option, - #[serde(default)] - path: Option, - }, - #[serde(alias = "read_web_page")] - ReadWebPage { url: String }, - #[serde(alias = "web_search")] - WebSearch { query: String }, - #[serde(alias = "task", alias = "Task")] - Task { - #[serde(alias = 
"prompt")] - description: String, - }, - #[serde(alias = "glob")] - Glob { - #[serde(alias = "filePattern")] - pattern: String, - #[serde(default)] - path: Option, - }, - #[serde(alias = "ls", alias = "list_directory")] - List { - #[serde(default)] - path: Option, - }, - #[serde(alias = "todo_write", alias = "todo_read")] - Todo { - #[serde(default)] - todos: Option>, - }, - /// Generic fallback for unknown tools - #[serde(untagged)] - Unknown { - #[serde(flatten)] - data: std::collections::HashMap, - }, -} - -impl AmpToolData { - pub fn get_name(&self) -> &str { - match self { - AmpToolData::Read { .. } => "read", - AmpToolData::CreateFile { .. } => "create_file", - AmpToolData::EditFile { .. } => "edit_file", - AmpToolData::Bash { .. } => "bash", - AmpToolData::Search { .. } => "search", - AmpToolData::ReadWebPage { .. } => "read_web_page", - AmpToolData::WebSearch { .. } => "web_search", - AmpToolData::Task { .. } => "task", - AmpToolData::Glob { .. } => "glob", - AmpToolData::List { .. } => "list", - AmpToolData::Todo { .. } => "todo", - AmpToolData::Unknown { data } => data - .get("name") - .and_then(|v| v.as_str()) - .unwrap_or("unknown"), - } - } -} - -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)] -pub struct TodoItem { - pub content: String, - pub status: String, - #[serde(default)] - pub priority: Option, -} - -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)] -#[serde(tag = "type")] -pub enum AmpContentItem { - #[serde(rename = "text")] - Text { text: String }, - #[serde(rename = "thinking")] - Thinking { thinking: String }, - #[serde(rename = "tool_use")] - ToolUse { - id: String, - #[serde(flatten)] - tool_data: AmpToolData, - }, - #[serde(rename = "tool_result")] - ToolResult { - #[serde(rename = "toolUseID")] - tool_use_id: String, - #[serde(default)] - run: Option, - #[serde(default)] - content: Option, - }, -} - -impl AmpContentItem { - pub fn to_normalized_entry( - &self, - role: &str, - message: &AmpMessage, - worktree_path: &str, - ) -> Option { - use serde_json::Value; - - let timestamp = message.meta.as_ref().map(|meta| meta.sent_at.to_string()); - - match self { - AmpContentItem::Text { text } => { - let entry_type = match role { - "user" => NormalizedEntryType::UserMessage, - "assistant" => NormalizedEntryType::AssistantMessage, - _ => return None, - }; - Some(NormalizedEntry { - timestamp, - entry_type, - content: text.clone(), - metadata: Some(serde_json::to_value(self).unwrap_or(Value::Null)), - }) - } - AmpContentItem::Thinking { thinking } => Some(NormalizedEntry { - timestamp, - entry_type: NormalizedEntryType::Thinking, - content: thinking.clone(), - metadata: Some(serde_json::to_value(self).unwrap_or(Value::Null)), - }), - AmpContentItem::ToolUse { tool_data, .. } => { - let name = tool_data.get_name(); - let input = tool_data; - let (action_type, content) = Self::action_and_content(input, worktree_path); - - Some(NormalizedEntry { - timestamp, - entry_type: NormalizedEntryType::ToolUse { - tool_name: name.to_string(), - action_type, - }, - content, - metadata: Some(serde_json::to_value(self).unwrap_or(Value::Null)), - }) - } - AmpContentItem::ToolResult { .. 
} => None, - } - } - - fn action_and_content(input: &AmpToolData, worktree_path: &str) -> (ActionType, String) { - let action_type = Self::extract_action_type(input, worktree_path); - let content = Self::generate_concise_content(input, &action_type, worktree_path); - (action_type, content) - } - - fn extract_action_type(input: &AmpToolData, worktree_path: &str) -> ActionType { - match input { - AmpToolData::Read { path, .. } => ActionType::FileRead { - path: make_path_relative(path, worktree_path), - }, - AmpToolData::CreateFile { path, content, .. } => { - let changes = content - .as_ref() - .map(|content| FileChange::Write { - content: content.clone(), - }) - .into_iter() - .collect(); - ActionType::FileEdit { - path: make_path_relative(path, worktree_path), - changes, - } - } - AmpToolData::EditFile { - path, - old_str, - new_str, - .. - } => { - let changes = if old_str.is_some() || new_str.is_some() { - vec![FileChange::Edit { - unified_diff: create_unified_diff( - path, - old_str.as_deref().unwrap_or(""), - new_str.as_deref().unwrap_or(""), - ), - has_line_numbers: false, - }] - } else { - vec![] - }; - ActionType::FileEdit { - path: make_path_relative(path, worktree_path), - changes, - } - } - AmpToolData::Bash { command, .. } => ActionType::CommandRun { - command: command.clone(), - result: None, - }, - AmpToolData::Search { pattern, .. } => ActionType::Search { - query: pattern.clone(), - }, - AmpToolData::ReadWebPage { url, .. } => ActionType::WebFetch { url: url.clone() }, - AmpToolData::WebSearch { query, .. } => ActionType::WebFetch { url: query.clone() }, - AmpToolData::Task { description, .. } => ActionType::TaskCreate { - description: description.clone(), - }, - AmpToolData::Glob { .. } => ActionType::Other { - description: "File pattern search".to_string(), - }, - AmpToolData::List { .. } => ActionType::Other { - description: "List directory".to_string(), - }, - AmpToolData::Todo { todos } => ActionType::TodoManagement { - todos: todos - .as_ref() - .map(|todos| { - todos - .iter() - .map(|t| LogsTodoItem { - content: t.content.clone(), - status: t.status.clone(), - priority: t.priority.clone(), - }) - .collect() - }) - .unwrap_or_default(), - operation: "write".to_string(), - }, - AmpToolData::Unknown { .. } => ActionType::Other { - description: format!("Tool: {}", input.get_name()), - }, - } - } - - fn generate_concise_content( - input: &AmpToolData, - action_type: &ActionType, - worktree_path: &str, - ) -> String { - let tool_name = input.get_name(); - match action_type { - ActionType::FileRead { path } => format!("`{path}`"), - ActionType::FileEdit { path, .. } => format!("`{path}`"), - ActionType::CommandRun { command, .. } => format!("`{command}`"), - ActionType::Search { query } => format!("Search: `{query}`"), - ActionType::WebFetch { url } => format!("`{url}`"), - ActionType::Tool { .. } => tool_name.to_string(), - ActionType::PlanPresentation { plan } => format!("Plan Presentation: `{plan}`"), - ActionType::TaskCreate { description } => description.clone(), - ActionType::TodoManagement { .. } => "TODO list updated".to_string(), - ActionType::Other { description: _ } => { - // For other tools, try to extract key information or fall back to tool name - match input { - AmpToolData::List { path, .. 
} => { - if let Some(path) = path { - let relative_path = make_path_relative(path, worktree_path); - if relative_path.is_empty() { - "List directory".to_string() - } else { - format!("List directory: `{relative_path}`") - } - } else { - "List directory".to_string() - } - } - AmpToolData::Glob { pattern, path, .. } => { - if let Some(path) = path { - let relative_path = make_path_relative(path, worktree_path); - format!("Find files: `{pattern}` in `{relative_path}`") - } else { - format!("Find files: `{pattern}`") - } - } - AmpToolData::Search { - pattern, - include, - path, - .. - } => { - let mut parts = vec![format!("Search: `{}`", pattern)]; - if let Some(include) = include { - parts.push(format!("in `{include}`")); - } - if let Some(path) = path { - let relative_path = make_path_relative(path, worktree_path); - parts.push(format!("at `{relative_path}`")); - } - parts.join(" ") - } - AmpToolData::Unknown { data } => { - // Manually check if "name" is prefixed with "todo" - // This is a hack to avoid flickering on the frontend - let name = data - .get("name") - .and_then(|v| v.as_str()) - .unwrap_or(tool_name); - if name.starts_with("todo") { - "TODO list updated".to_string() - } else { - tool_name.to_string() - } - } - _ => tool_name.to_string(), - } - } - } - } -} - -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq)] -pub struct AmpToolRun { - #[serde(default)] - pub result: Option, - #[serde(default)] - pub error: Option, - #[serde(default)] - pub status: Option, - #[serde(default)] - pub progress: Option, - // Some tools provide stdout/stderr/success at top-level under run - #[serde(default)] - pub stdout: Option, - #[serde(default)] - pub stderr: Option, - #[serde(default)] - pub success: Option, -} - -#[derive(Deserialize, Serialize, Debug, Clone, PartialEq, Eq, Default)] -struct BashInnerResult { - #[serde(default)] - output: Option, - #[serde(default, rename = "exitCode")] - exit_code: Option, -} - -#[derive(Debug, Clone, Default)] -struct ToolRecord { - entry_idx: Option, - tool_name: Option, - tool_input: Option, - args: Option, - concise_content: Option, - bash_cmd: Option, - run: Option, - content_result: Option, -} - -impl ToolRecord { - fn update_concise(&mut self, new_content: &str) { - let new_is_cmd = new_content.trim_start().starts_with('`'); - match self.concise_content.as_ref() { - None => self.concise_content = Some(new_content.to_string()), - Some(prev) => { - let prev_is_cmd = prev.trim_start().starts_with('`'); - if !(prev_is_cmd && !new_is_cmd) { - self.concise_content = Some(new_content.to_string()); - } - } - } - } - - fn update_tool_content_from_tool_input( - &mut self, - tool_data: &AmpToolData, - worktree_path: &str, - ) -> Option { - self.tool_input = Some(tool_data.clone()); - match tool_data { - AmpToolData::Task { description } => { - self.args = Some(serde_json::json!({ "description": description })); - None - } - AmpToolData::Bash { command } => { - self.bash_cmd = Some(command.clone()); - None - } - AmpToolData::Glob { pattern, path } => { - self.args = Some(serde_json::json!({ "pattern": pattern, "path": path })); - // Prefer concise content derived from typed input - let (_action, content) = - AmpContentItem::action_and_content(tool_data, worktree_path); - Some(content) - } - AmpToolData::Search { - pattern, - include, - path, - } => { - self.args = Some( - serde_json::json!({ "pattern": pattern, "include": include, "path": path }), - ); - None - } - AmpToolData::List { path } => { - self.args = Some(serde_json::json!({ "path": path })); 
- None - } - AmpToolData::Read { path } - | AmpToolData::CreateFile { path, .. } - | AmpToolData::EditFile { path, .. } => { - self.args = Some(serde_json::json!({ "path": path })); - None - } - AmpToolData::ReadWebPage { url } => { - self.args = Some(serde_json::json!({ "url": url })); - None - } - AmpToolData::WebSearch { query } => { - self.args = Some(serde_json::json!({ "query": query })); - None - } - AmpToolData::Todo { .. } => None, - AmpToolData::Unknown { data } => { - if let Some(inp) = data.get("input") - && is_meaningful_input(inp) - { - self.args = Some(inp.clone()); - let name = self - .tool_name - .clone() - .unwrap_or_else(|| tool_data.get_name().to_string()); - return parse_tool_input(&name, inp).map(|parsed| { - let (_action, content) = - AmpContentItem::action_and_content(&parsed, worktree_path); - content - }); - } - None - } - } - } -} - -fn parse_tool_input(tool_name: &str, input: &serde_json::Value) -> Option { - let obj = serde_json::json!({ "name": tool_name, "input": input }); - serde_json::from_value::(obj).ok() -} - -fn is_meaningful_input(v: &serde_json::Value) -> bool { - use serde_json::Value::*; - match v { - Null => false, - Bool(_) | Number(_) => true, - String(s) => !s.trim().is_empty(), - Array(arr) => !arr.is_empty(), - Object(map) => !map.is_empty(), - } -} - -fn build_result_entry(rec: &ToolRecord) -> Option { - let input = rec.tool_input.as_ref()?; - match input { - AmpToolData::Bash { .. } => { - let mut output: Option = None; - let mut exit_status: Option = None; - if let Some(run) = &rec.run { - if let Some(res) = &run.result - && let Ok(inner) = serde_json::from_value::(res.clone()) - { - if let Some(oc) = inner.output - && !oc.trim().is_empty() - { - output = Some(oc); - } - if let Some(code) = inner.exit_code { - exit_status = Some(crate::logs::CommandExitStatus::ExitCode { code }); - } - } - if output.is_none() { - output = match (run.stdout.clone(), run.stderr.clone()) { - (Some(sout), Some(serr)) => { - let st = sout.trim().to_string(); - let se = serr.trim().to_string(); - if st.is_empty() && se.is_empty() { - None - } else if st.is_empty() { - Some(serr) - } else if se.is_empty() { - Some(sout) - } else { - Some(format!("STDOUT:\n{st}\n\nSTDERR:\n{se}")) - } - } - (Some(sout), None) => { - if sout.trim().is_empty() { - None - } else { - Some(sout) - } - } - (None, Some(serr)) => { - if serr.trim().is_empty() { - None - } else { - Some(serr) - } - } - (None, None) => None, - }; - } - if exit_status.is_none() - && let Some(s) = run.success - { - exit_status = Some(crate::logs::CommandExitStatus::Success { success: s }); - } - } - let cmd = rec.bash_cmd.clone().unwrap_or_default(); - let content = rec - .concise_content - .clone() - .or_else(|| { - if !cmd.is_empty() { - Some(format!("`{cmd}`")) - } else { - None - } - }) - .unwrap_or_else(|| input.get_name().to_string()); - Some(NormalizedEntry { - timestamp: None, - entry_type: NormalizedEntryType::ToolUse { - tool_name: input.get_name().to_string(), - action_type: ActionType::CommandRun { - command: cmd, - result: Some(crate::logs::CommandRunResult { - exit_status, - output, - }), - }, - }, - content, - metadata: None, - }) - } - AmpToolData::Read { .. } - | AmpToolData::CreateFile { .. } - | AmpToolData::EditFile { .. } - | AmpToolData::Glob { .. } - | AmpToolData::Search { .. } - | AmpToolData::List { .. } - | AmpToolData::ReadWebPage { .. } - | AmpToolData::WebSearch { .. } - | AmpToolData::Todo { .. 
} => None, - _ => { - // Generic tool: attach args + result as JSON - let args = rec.args.clone().unwrap_or(serde_json::Value::Null); - let render_value = rec - .run - .as_ref() - .and_then(|r| r.result.clone()) - .or_else(|| rec.content_result.clone()) - .unwrap_or(serde_json::Value::Null); - let content = rec - .concise_content - .clone() - .unwrap_or_else(|| input.get_name().to_string()); - Some(NormalizedEntry { - timestamp: None, - entry_type: NormalizedEntryType::ToolUse { - tool_name: input.get_name().to_string(), - action_type: ActionType::Tool { - tool_name: input.get_name().to_string(), - arguments: Some(args), - result: Some(crate::logs::ToolResult { - r#type: crate::logs::ToolResultValueType::Json, - value: render_value, - }), - }, - }, - content, - metadata: None, - }) - } + normalize_stderr_logs(msg_store, entry_index_provider); } } diff --git a/crates/executors/src/executors/claude.rs b/crates/executors/src/executors/claude.rs index 0fcc1610..7d538f26 100644 --- a/crates/executors/src/executors/claude.rs +++ b/crates/executors/src/executors/claude.rs @@ -116,10 +116,10 @@ impl StandardCodingAgentExecutor for ClaudeCode { // Process stdout logs (Claude's JSON output) ClaudeLogProcessor::process_logs( - self, msg_store.clone(), current_dir, entry_index_provider.clone(), + HistoryStrategy::Default, ); // Process stderr logs using the standard stderr processor @@ -150,27 +150,43 @@ exit "$exit_code" ) } +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum HistoryStrategy { + // Claude-code format + Default, + // Amp threads format which includes logs from previous executions + AmpResume, +} + /// Handles log processing and interpretation for Claude executor -struct ClaudeLogProcessor { +pub struct ClaudeLogProcessor { model_name: Option, // Map tool_use_id -> structured info for follow-up ToolResult replacement tool_map: std::collections::HashMap, + // Strategy controlling how to handle history and user messages + strategy: HistoryStrategy, } impl ClaudeLogProcessor { + #[cfg(test)] fn new() -> Self { + Self::new_with_strategy(HistoryStrategy::Default) + } + + fn new_with_strategy(strategy: HistoryStrategy) -> Self { Self { model_name: None, tool_map: std::collections::HashMap::new(), + strategy, } } /// Process raw logs and convert them to normalized entries with patches - fn process_logs( - _executor: &ClaudeCode, + pub fn process_logs( msg_store: Arc, current_dir: &PathBuf, entry_index_provider: EntryIndexProvider, + strategy: HistoryStrategy, ) { let current_dir_clone = current_dir.clone(); tokio::spawn(async move { @@ -178,7 +194,7 @@ impl ClaudeLogProcessor { let mut buffer = String::new(); let worktree_path = current_dir_clone.to_string_lossy().to_string(); let mut session_id_extracted = false; - let mut processor = Self::new(); + let mut processor = Self::new_with_strategy(strategy); while let Some(Ok(msg)) = stream.next().await { let chunk = match msg { @@ -306,6 +322,46 @@ impl ClaudeLogProcessor { } } ClaudeJson::User { message, .. } => { + // Amp resume hack: if AmpResume and the user message contains plain text, + // clear all previous entries so UI shows only fresh context, and emit user text. + if matches!(processor.strategy, HistoryStrategy::AmpResume) + && message + .content + .iter() + .any(|c| matches!(c, ClaudeContentItem::Text { .. 
})) + { + let cur = entry_index_provider.current(); + if cur > 0 { + for _ in 0..cur { + msg_store.push_patch( + ConversationPatch::remove_diff(0.to_string()), + ); + } + entry_index_provider.reset(); + // Also reset tool map to avoid mismatches with re-streamed tool_use/tool_result ids + processor.tool_map.clear(); + } + // Emit user text messages after clearing + for item in &message.content { + if let ClaudeContentItem::Text { text } = item { + let entry = NormalizedEntry { + timestamp: None, + entry_type: NormalizedEntryType::UserMessage, + content: text.clone(), + metadata: Some( + serde_json::to_value(item) + .unwrap_or(serde_json::Value::Null), + ), + }; + let id = entry_index_provider.next(); + msg_store.push_patch( + ConversationPatch::add_normalized_entry( + id, entry, + ), + ); + } + } + } for item in &message.content { if let ClaudeContentItem::ToolResult { tool_use_id, @@ -321,44 +377,43 @@ impl ClaudeLogProcessor { ); if is_command { // For bash commands, attach result as CommandRun output where possible - let (r#type, value) = if content.is_string() { - ( - crate::logs::ToolResultValueType::Markdown, - content.clone(), - ) + // Prefer parsing Amp's claude-compatible Bash format: {"output":"...","exitCode":0} + let content_str = if let Some(s) = content.as_str() + { + s.to_string() } else { - ( - crate::logs::ToolResultValueType::Json, - content.clone(), - ) + content.to_string() }; - // Prefer string content to be the output; otherwise JSON - let output = match r#type { - crate::logs::ToolResultValueType::Markdown => { - content.as_str().map(|s| s.to_string()) - } - crate::logs::ToolResultValueType::Json => { - Some(content.to_string()) - } + + let result = if let Ok(result) = + serde_json::from_str::( + &content_str, + ) { + Some(crate::logs::CommandRunResult { + + exit_status : Some( + crate::logs::CommandExitStatus::ExitCode { + code: result.exit_code, + }, + ), + output: Some(result.output) + }) + } else { + Some(crate::logs::CommandRunResult { + exit_status: (*is_error).map(|is_error| { + crate::logs::CommandExitStatus::Success { success: !is_error } + }), + output: Some(content_str) + }) }; - // Derive success from is_error when present - let exit_status = is_error.as_ref().map(|e| { - crate::logs::CommandExitStatus::Success { - success: !*e, - } - }); + let entry = NormalizedEntry { timestamp: None, entry_type: NormalizedEntryType::ToolUse { tool_name: info.tool_name.clone(), action_type: ActionType::CommandRun { command: info.content.clone(), - result: Some( - crate::logs::CommandRunResult { - exit_status, - output, - }, - ), + result, }, }, content: info.content.clone(), @@ -370,14 +425,16 @@ impl ClaudeLogProcessor { )); } else { // Show args and results for NotebookEdit and MCP tools - let is_notebook = matches!( - info.tool_data, - ClaudeToolData::NotebookEdit { .. } - ); let tool_name = info.tool_data.get_name().to_string(); - let is_mcp = tool_name.starts_with("mcp__"); - if is_notebook || is_mcp { + if matches!( + info.tool_data, + ClaudeToolData::Unknown { .. } + | ClaudeToolData::Oracle { .. } + | ClaudeToolData::Mermaid { .. } + | ClaudeToolData::CodebaseSearchAgent { .. } + | ClaudeToolData::NotebookEdit { .. 
} + ) { let (res_type, res_value) = Self::normalize_claude_tool_result_value( content, @@ -400,6 +457,7 @@ impl ClaudeLogProcessor { .unwrap_or(serde_json::Value::Null); // Normalize MCP label + let is_mcp = tool_name.starts_with("mcp__"); let label = if is_mcp { let parts: Vec<&str> = tool_name.split("__").collect(); @@ -503,7 +561,7 @@ impl ClaudeLogProcessor { ClaudeJson::ToolUse { session_id, .. } => session_id.clone(), ClaudeJson::ToolResult { session_id, .. } => session_id.clone(), ClaudeJson::Result { .. } => None, - ClaudeJson::Unknown => None, + ClaudeJson::Unknown { .. } => None, } } @@ -589,11 +647,14 @@ impl ClaudeLogProcessor { // Skip result messages vec![] } - ClaudeJson::Unknown => { + ClaudeJson::Unknown { data } => { vec![NormalizedEntry { timestamp: None, entry_type: NormalizedEntryType::SystemMessage, - content: "Unrecognized JSON message from Claude".to_string(), + content: format!( + "Unrecognized JSON message: {}", + serde_json::to_value(data).unwrap_or_default() + ), metadata: None, }] } @@ -759,7 +820,7 @@ impl ClaudeLogProcessor { query: pattern.clone(), }, ClaudeToolData::WebFetch { url, .. } => ActionType::WebFetch { url: url.clone() }, - ClaudeToolData::WebSearch { query } => ActionType::WebFetch { url: query.clone() }, + ClaudeToolData::WebSearch { query, .. } => ActionType::WebFetch { url: query.clone() }, ClaudeToolData::Task { description, prompt, @@ -768,7 +829,7 @@ impl ClaudeLogProcessor { let task_description = if let Some(desc) = description { desc.clone() } else { - prompt.clone() + prompt.clone().unwrap_or_default() }; ActionType::TaskCreate { description: task_description, @@ -793,13 +854,29 @@ impl ClaudeLogProcessor { .collect(), operation: "write".to_string(), }, - ClaudeToolData::Glob { pattern, path: _ } => ActionType::Search { + ClaudeToolData::TodoRead { .. } => ActionType::TodoManagement { + todos: vec![], + operation: "read".to_string(), + }, + ClaudeToolData::Glob { pattern, .. } => ActionType::Search { query: pattern.clone(), }, ClaudeToolData::LS { .. } => ActionType::Other { description: "List directory".to_string(), }, - ClaudeToolData::Unknown { data } => { + ClaudeToolData::Oracle { .. } => ActionType::Other { + description: "Oracle".to_string(), + }, + ClaudeToolData::Mermaid { .. } => ActionType::Other { + description: "Mermaid diagram".to_string(), + }, + ClaudeToolData::CodebaseSearchAgent { .. } => ActionType::Other { + description: "Codebase search".to_string(), + }, + ClaudeToolData::UndoEdit { .. } => ActionType::Other { + description: "Undo edit".to_string(), + }, + ClaudeToolData::Unknown { .. } => { // Surface MCP tools as generic Tool with args let name = tool_data.get_name(); if name.starts_with("mcp__") { @@ -841,11 +918,18 @@ impl ClaudeLogProcessor { ActionType::CommandRun { command, .. } => format!("`{command}`"), ActionType::Search { query } => format!("`{query}`"), ActionType::WebFetch { url } => format!("`{url}`"), + ActionType::TaskCreate { description } => { + if description.is_empty() { + "Task".to_string() + } else { + format!("Task: `{description}`") + } + } ActionType::Tool { .. } => match tool_data { ClaudeToolData::NotebookEdit { notebook_path, .. } => { format!("`{}`", make_path_relative(notebook_path, worktree_path)) } - ClaudeToolData::Unknown { data } => { + ClaudeToolData::Unknown { .. 
} => { let name = tool_data.get_name(); if name.starts_with("mcp__") { let parts: Vec<&str> = name.split("__").collect(); @@ -857,7 +941,6 @@ impl ClaudeLogProcessor { } _ => tool_data.get_name().to_string(), }, - ActionType::TaskCreate { description } => description.clone(), ActionType::PlanPresentation { plan } => plan.clone(), ActionType::TodoManagement { .. } => "TODO list updated".to_string(), ActionType::Other { description: _ } => match tool_data { @@ -869,7 +952,7 @@ impl ClaudeLogProcessor { format!("List directory: `{relative_path}`") } } - ClaudeToolData::Glob { pattern, path } => { + ClaudeToolData::Glob { pattern, path, .. } => { if let Some(search_path) = path { format!( "Find files: `{}` in `{}`", @@ -880,6 +963,37 @@ impl ClaudeLogProcessor { format!("Find files: `{pattern}`") } } + ClaudeToolData::Oracle { task, .. } => { + if let Some(t) = task { + format!("Oracle: `{t}`") + } else { + "Oracle".to_string() + } + } + ClaudeToolData::Mermaid { .. } => "Mermaid diagram".to_string(), + ClaudeToolData::CodebaseSearchAgent { query, path, .. } => { + match (query.as_ref(), path.as_ref()) { + (Some(q), Some(p)) if !q.is_empty() && !p.is_empty() => format!( + "Codebase search: `{}` in `{}`", + q, + make_path_relative(p, worktree_path) + ), + (Some(q), _) if !q.is_empty() => format!("Codebase search: `{q}`"), + _ => "Codebase search".to_string(), + } + } + ClaudeToolData::UndoEdit { path, .. } => { + if let Some(p) = path.as_ref() { + let rel = make_path_relative(p, worktree_path); + if rel.is_empty() { + "Undo edit".to_string() + } else { + format!("Undo edit: `{rel}`") + } + } else { + "Undo edit".to_string() + } + } _ => tool_data.get_name().to_string(), }, } @@ -929,8 +1043,11 @@ pub enum ClaudeJson { result: Option, }, // Catch-all for unknown message types - #[serde(other)] - Unknown, + #[serde(untagged)] + Unknown { + #[serde(flatten)] + data: std::collections::HashMap, + }, } #[derive(Deserialize, Serialize, Debug, Clone, PartialEq)] @@ -969,30 +1086,42 @@ pub enum ClaudeContentItem { #[derive(Deserialize, Serialize, Debug, Clone, PartialEq)] #[serde(tag = "name", content = "input")] pub enum ClaudeToolData { + #[serde(rename = "TodoWrite", alias = "todo_write")] TodoWrite { todos: Vec, }, + #[serde(rename = "Task", alias = "task")] Task { - subagent_type: String, + subagent_type: Option, description: Option, - prompt: String, + prompt: Option, }, + #[serde(rename = "Glob", alias = "glob")] Glob { + #[serde(alias = "filePattern")] pattern: String, #[serde(default)] path: Option, + #[serde(default)] + limit: Option, }, + #[serde(rename = "LS", alias = "list_directory", alias = "ls")] LS { path: String, }, + #[serde(rename = "Read", alias = "read")] Read { + #[serde(alias = "path")] file_path: String, }, + #[serde(rename = "Bash", alias = "bash")] Bash { + #[serde(alias = "cmd", alias = "command_line")] command: String, #[serde(default)] description: Option, }, + #[serde(rename = "Grep", alias = "grep")] Grep { pattern: String, #[serde(default)] @@ -1003,19 +1132,28 @@ pub enum ClaudeToolData { ExitPlanMode { plan: String, }, + #[serde(rename = "Edit", alias = "edit_file")] Edit { + #[serde(alias = "path")] file_path: String, + #[serde(alias = "old_str")] old_string: Option, + #[serde(alias = "new_str")] new_string: Option, }, + #[serde(rename = "MultiEdit", alias = "multi_edit")] MultiEdit { + #[serde(alias = "path")] file_path: String, edits: Vec, }, + #[serde(rename = "Write", alias = "create_file", alias = "write_file")] Write { + #[serde(alias = "path")] file_path: 
String, content: String, }, + #[serde(rename = "NotebookEdit", alias = "notebook_edit")] NotebookEdit { notebook_path: String, new_source: String, @@ -1023,14 +1161,54 @@ pub enum ClaudeToolData { #[serde(default)] cell_id: Option, }, + #[serde(rename = "WebFetch", alias = "read_web_page")] WebFetch { url: String, #[serde(default)] prompt: Option, }, + #[serde(rename = "WebSearch", alias = "web_search")] WebSearch { query: String, + #[serde(default)] + num_results: Option, }, + // Amp-only utilities for better UX + #[serde(rename = "Oracle", alias = "oracle")] + Oracle { + #[serde(default)] + task: Option, + #[serde(default)] + files: Option>, + #[serde(default)] + context: Option, + }, + #[serde(rename = "Mermaid", alias = "mermaid")] + Mermaid { + code: String, + }, + #[serde(rename = "CodebaseSearchAgent", alias = "codebase_search_agent")] + CodebaseSearchAgent { + #[serde(default)] + query: Option, + #[serde(default)] + path: Option, + #[serde(default)] + include: Option>, + #[serde(default)] + exclude: Option>, + #[serde(default)] + limit: Option, + }, + #[serde(rename = "UndoEdit", alias = "undo_edit")] + UndoEdit { + #[serde(default, alias = "file_path")] + path: Option, + #[serde(default)] + steps: Option, + }, + #[serde(rename = "TodoRead", alias = "todo_read")] + TodoRead {}, #[serde(untagged)] Unknown { #[serde(flatten)] @@ -1050,6 +1228,17 @@ struct ClaudeToolWithInput { input: serde_json::Value, } +// Amp's claude-compatible Bash tool_result content format +// Example content (often delivered as a JSON string): +// {"output":"...","exitCode":0} +#[derive(Deserialize, Serialize, Debug, Clone, PartialEq)] +struct AmpBashResult { + #[serde(default)] + output: String, + #[serde(rename = "exitCode")] + exit_code: i32, +} + #[derive(Debug, Clone)] struct ClaudeToolCallInfo { entry_index: usize, @@ -1091,6 +1280,11 @@ impl ClaudeToolData { ClaudeToolData::NotebookEdit { .. } => "NotebookEdit", ClaudeToolData::WebFetch { .. } => "WebFetch", ClaudeToolData::WebSearch { .. } => "WebSearch", + ClaudeToolData::TodoRead { .. } => "TodoRead", + ClaudeToolData::Oracle { .. } => "Oracle", + ClaudeToolData::Mermaid { .. } => "Mermaid", + ClaudeToolData::CodebaseSearchAgent { .. } => "CodebaseSearchAgent", + ClaudeToolData::UndoEdit { .. 
} => "UndoEdit", ClaudeToolData::Unknown { data } => data .get("name") .and_then(|v| v.as_str()) @@ -1192,6 +1386,7 @@ mod tests { let glob_data = ClaudeToolData::Glob { pattern: "**/*.ts".to_string(), path: Some("/tmp/test-worktree/src".to_string()), + limit: None, }; let action_type = ClaudeLogProcessor::extract_action_type(&glob_data, "/tmp/test-worktree"); @@ -1210,6 +1405,7 @@ mod tests { let glob_data = ClaudeToolData::Glob { pattern: "*.js".to_string(), path: None, + limit: None, }; let action_type = ClaudeLogProcessor::extract_action_type(&glob_data, "/tmp/test-worktree"); @@ -1311,6 +1507,188 @@ mod tests { ); } + #[test] + fn test_amp_tool_aliases_create_file_and_edit_file() { + // Amp "create_file" should deserialize into Write with alias field "path" + let assistant_with_create = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t1","name":"create_file","input":{"path":"/tmp/work/src/new.txt","content":"hello"}} + ] + } + }"#; + let parsed: ClaudeJson = serde_json::from_str(assistant_with_create).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed, "/tmp/work"); + assert_eq!(entries.len(), 1); + match &entries[0].entry_type { + NormalizedEntryType::ToolUse { action_type, .. } => match action_type { + ActionType::FileEdit { path, .. } => assert_eq!(path, "src/new.txt"), + other => panic!("Expected FileEdit, got {:?}", other), + }, + other => panic!("Expected ToolUse, got {:?}", other), + } + + // Amp "edit_file" should deserialize into Edit with aliases for path/old_str/new_str + let assistant_with_edit = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t2","name":"edit_file","input":{"path":"/tmp/work/README.md","old_str":"foo","new_str":"bar"}} + ] + } + }"#; + let parsed_edit: ClaudeJson = serde_json::from_str(assistant_with_edit).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed_edit, "/tmp/work"); + assert_eq!(entries.len(), 1); + match &entries[0].entry_type { + NormalizedEntryType::ToolUse { action_type, .. } => match action_type { + ActionType::FileEdit { path, .. 
} => assert_eq!(path, "README.md"), + other => panic!("Expected FileEdit, got {:?}", other), + }, + other => panic!("Expected ToolUse, got {:?}", other), + } + } + + #[test] + fn test_amp_tool_aliases_oracle_mermaid_codebase_undo() { + // Oracle with task + let oracle_json = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t1","name":"oracle","input":{"task":"Assess project status"}} + ] + } + }"#; + let parsed: ClaudeJson = serde_json::from_str(oracle_json).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed, "/tmp/work"); + assert_eq!(entries.len(), 1); + assert_eq!(entries[0].content, "Oracle: `Assess project status`"); + + // Mermaid with code + let mermaid_json = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t2","name":"mermaid","input":{"code":"graph TD; A-->B;"}} + ] + } + }"#; + let parsed: ClaudeJson = serde_json::from_str(mermaid_json).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed, "/tmp/work"); + assert_eq!(entries.len(), 1); + assert_eq!(entries[0].content, "Mermaid diagram"); + + // CodebaseSearchAgent with query + let csa_json = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t3","name":"codebase_search_agent","input":{"query":"TODO markers"}} + ] + } + }"#; + let parsed: ClaudeJson = serde_json::from_str(csa_json).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed, "/tmp/work"); + assert_eq!(entries.len(), 1); + assert_eq!(entries[0].content, "Codebase search: `TODO markers`"); + + // UndoEdit shows file path when available + let undo_json = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t4","name":"undo_edit","input":{"path":"README.md"}} + ] + } + }"#; + let parsed: ClaudeJson = serde_json::from_str(undo_json).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed, "/tmp/work"); + assert_eq!(entries.len(), 1); + assert_eq!(entries[0].content, "Undo edit: `README.md`"); + } + + #[test] + fn test_amp_bash_and_task_content() { + // Bash with alias field cmd + let bash_json = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t1","name":"bash","input":{"cmd":"echo hello"}} + ] + } + }"#; + let parsed: ClaudeJson = serde_json::from_str(bash_json).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed, "/tmp/work"); + assert_eq!(entries.len(), 1); + // Content should display the command in backticks + assert_eq!(entries[0].content, "`echo hello`"); + + // Task content should include description/prompt wrapped in backticks + let task_json = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t2","name":"task","input":{"subagent_type":"Task","prompt":"Add header to README"}} + ] + } + }"#; + let parsed: ClaudeJson = serde_json::from_str(task_json).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed, "/tmp/work"); + assert_eq!(entries.len(), 1); + assert_eq!(entries[0].content, "Task: `Add header to README`"); + } + + #[test] + fn test_task_description_or_prompt_backticks() { + // When description present, use it + let with_desc = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t3","name":"Task","input":{ + 
"subagent_type":"Task", + "prompt":"Fallback prompt", + "description":"Primary description" + }} + ] + } + }"#; + let parsed: ClaudeJson = serde_json::from_str(with_desc).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed, "/tmp/work"); + assert_eq!(entries.len(), 1); + assert_eq!(entries[0].content, "Task: `Primary description`"); + + // When description missing, fall back to prompt + let no_desc = r#"{ + "type":"assistant", + "message":{ + "role":"assistant", + "content":[ + {"type":"tool_use","id":"t4","name":"Task","input":{ + "subagent_type":"Task", + "prompt":"Only prompt" + }} + ] + } + }"#; + let parsed: ClaudeJson = serde_json::from_str(no_desc).unwrap(); + let entries = ClaudeLogProcessor::new().to_normalized_entries(&parsed, "/tmp/work"); + assert_eq!(entries.len(), 1); + assert_eq!(entries[0].content, "Task: `Only prompt`"); + } + #[test] fn test_tool_result_parsing_ignored() { let tool_result_json = r#"{"type":"tool_result","result":"File content here","is_error":false,"session_id":"test123"}"#; diff --git a/crates/executors/src/profile.rs b/crates/executors/src/profile.rs index 4079813c..a66c713b 100644 --- a/crates/executors/src/profile.rs +++ b/crates/executors/src/profile.rs @@ -207,8 +207,9 @@ mod tests { assert!(claude_code_router_command.contains("--dangerously-skip-permissions")); let amp_command = get_profile_command("amp"); - assert!(amp_command.contains("npx -y @sourcegraph/amp@0.0.1752148945-gd8844f")); - assert!(amp_command.contains("--format=jsonl")); + assert!(amp_command.contains("npx -y @sourcegraph/amp@latest")); + assert!(amp_command.contains("--execute")); + assert!(amp_command.contains("--stream-json")); let gemini_command = get_profile_command("gemini"); assert!(gemini_command.contains("npx -y @google/gemini-cli@latest")); diff --git a/frontend/src/components/NormalizedConversation/DisplayConversationEntry.tsx b/frontend/src/components/NormalizedConversation/DisplayConversationEntry.tsx index 0469db9b..58bf8f32 100644 --- a/frontend/src/components/NormalizedConversation/DisplayConversationEntry.tsx +++ b/frontend/src/components/NormalizedConversation/DisplayConversationEntry.tsx @@ -22,7 +22,7 @@ import { } from 'shared/types.ts'; import FileChangeRenderer from './FileChangeRenderer'; import ToolDetails from './ToolDetails'; -import { Braces, FileText, MoreHorizontal, Dot } from 'lucide-react'; +import { Braces, FileText, MoreHorizontal } from 'lucide-react'; type Props = { entry: NormalizedEntry; @@ -274,7 +274,7 @@ function DisplayConversationEntry({ entry, expansionKey }: Props) { {typeof commandSuccess === 'boolean' && ( {typeof commandSuccess === 'boolean' && ( - + title={ + typeof commandExitCode === 'number' + ? `exit code: ${commandExitCode}` + : commandSuccess + ? 'success' + : 'failed' + } + > + {typeof commandExitCode === 'number' + ? `exit ${commandExitCode}` + : commandSuccess + ? 'ok' + : 'fail'} + )} {commandOutput && ( {' '} - + {commandExit.type === 'exit_code' ? `exit ${commandExit.code}` : commandExit.success