Add QA mode for automated testing with mock executor and hardcoded repos (Vibe Kanban) (#1940)
* The QA mode implementation is complete.
## Summary
I've implemented the QA mode feature for the vibe-kanban application. Here's what was created and modified:
### New Files
1. **`crates/services/src/services/qa_repos.rs`** - QA repos module that:
- Defines two hardcoded QA repos (`internal-qa-1` and `internal-qa-2`) and their GitHub URLs
- Clones repos to a persistent temp directory on first access using `git2`
- Returns `DirectoryEntry` compatible with existing API
2. **`crates/executors/src/executors/qa_mock.rs`** - Mock executor that:
- Implements `StandardCodingAgentExecutor` trait
- Performs random file operations (create, delete, modify)
- Streams 10 mock log entries in ClaudeJson format over 10 seconds
- Reuses Claude's log processor for normalization (the key declarations from both new files are excerpted just below)
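For orientation, the two central declarations from these new files (both shown in full in the diff further down) are the hardcoded repo list and the unit-struct executor:

```rust
// crates/services/src/services/qa_repos.rs
const QA_REPOS: &[(&str, &str)] = &[
    ("internal-qa-1", "https://github.com/BloopAI/internal-qa-1"),
    ("internal-qa-2", "https://github.com/BloopAI/internal-qa-2"),
];

// crates/executors/src/executors/qa_mock.rs
#[derive(Debug, Clone, Serialize, Deserialize, Default, PartialEq, TS, JsonSchema)]
pub struct QaMockExecutor;
```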
### Modified Files
1. **Cargo.toml files** - Added `qa-mode` feature flags to:
- `crates/executors/Cargo.toml`
- `crates/services/Cargo.toml`
- `crates/server/Cargo.toml`
2. **`crates/executors/src/executors/mod.rs`** - Added:
- Conditional `qa_mock` module export
- `QaMock` variant to `CodingAgent` enum
- Updated `capabilities()` match arm
3. **`crates/executors/src/mcp_config.rs`** - Added `QaMock` handling
4. **`crates/services/src/services/filesystem.rs`** - Overrides `list_git_repos` and `list_common_git_repos` in QA mode
5. **`crates/services/src/services/container.rs`** - Overrides the `normalize_logs` executor lookup in QA mode
6. **`crates/executors/src/actions/coding_agent_initial.rs`** - Overrides spawn in QA mode
7. **`crates/executors/src/actions/coding_agent_follow_up.rs`** - Overrides spawn in QA mode
8. **`vibe-kanban/package.json`** - Added QA build scripts:
- `backend:dev:qa`
- `backend:dev:watch:qa`
- `dev:qa`
### Usage
```bash
# Run backend in QA mode
cargo run --features qa-mode
# Or with pnpm
pnpm run dev:qa
```
When QA mode is enabled:
- `GitPanelCreate` will show only `internal-qa-1` and `internal-qa-2` as recent repos
- Starting any task will perform mock file operations and stream 10 logs over 10 seconds
- Logs follow the ClaudeJson format for compatibility with the existing log processor
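The override itself is a plain feature gate in the spawn path; this excerpt is from `crates/executors/src/actions/coding_agent_initial.rs` (full hunk in the diff below):

```rust
#[cfg(feature = "qa-mode")]
{
    tracing::info!("QA mode: using mock executor instead of real agent");
    let executor = crate::executors::qa_mock::QaMockExecutor;
    return executor.spawn(&effective_dir, &self.prompt, env).await;
}
```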
* Cleanup script changes for workspace 15499061-ec87-4c1c-81b9-994e84d0e75b
* The TLS error fix is complete. I've replaced the `git2::Repository::clone` call with the `git` CLI approach in `qa_repos.rs`:
**Changes made:**
- Updated `clone_qa_repos_if_needed` function to use `Command::new("git")` with `["clone", "--depth", "1", url, path]` arguments instead of `git2::Repository::clone`
- Added `--depth 1` for shallow cloning (faster)
- Updated error handling to capture stderr output from the git command
The fix uses the git CLI which has reliable TLS support, avoiding the TLS configuration issues that git2 can have on some systems. This matches patterns used elsewhere in the codebase (e.g., `crates/remote/src/github_app/service.rs`).
You can now run `pnpm run backend:dev:qa` or `pnpm run dev:qa` and the QA repos should clone successfully.
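The relevant call in `clone_qa_repos_if_needed` (full file in the diff below) now shells out to git:

```rust
// Use git CLI for reliable TLS support (git2 has TLS issues)
let output = Command::new("git")
    .args(["clone", "--depth", "1", url, &repo_path.to_string_lossy()])
    .output();
```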
* Cleanup script changes for workspace 15499061-ec87-4c1c-81b9-994e84d0e75b
* Done! I've made the mock logs strongly typed by:
1. **Added imports** for `ClaudeJson`, `ClaudeMessage`, `ClaudeContentItem`, and `ClaudeToolData` from the Claude executor
2. **Replaced `generate_mock_logs()`** to use strongly-typed structs:
- Each log entry is now a proper `ClaudeJson` enum variant
- Uses `ClaudeMessage` with proper fields (`id`, `message_type`, `role`, `model`, `content`, `stop_reason`)
- Uses `ClaudeContentItem` variants (`Thinking`, `ToolUse`, `ToolResult`, `Text`)
- Uses `ClaudeToolData` variants (`Read`, `Write`, `Bash`)
- Serializes to JSON using `serde_json::to_string()` for correct escaping
3. **Updated tests** to verify the logs deserialize back to `ClaudeJson` types
This ensures:
- All fields match what `ClaudeLogProcessor::process_logs()` expects
- No manual JSON escaping needed (serde handles it)
- Compile-time type checking prevents malformed logs
- Session IDs are consistent across all entries
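For example, the Read tool-use entry is now built as a typed value and only serialized at the end of `generate_mock_logs` (excerpt; full file in the diff below):

```rust
ClaudeJson::Assistant {
    message: ClaudeMessage {
        id: Some("msg-qa-2".to_string()),
        message_type: Some("message".to_string()),
        role: "assistant".to_string(),
        model: Some("qa-mock".to_string()),
        content: vec![ClaudeContentItem::ToolUse {
            id: "qa-tool-1".to_string(),
            tool_data: ClaudeToolData::Read {
                file_path: "README.md".to_string(),
            },
        }],
        stop_reason: None,
    },
    session_id: Some(session_id.clone()),
},
// ...
// Serialize to JSON strings - this ensures proper escaping
logs.into_iter()
    .map(|log| serde_json::to_string(&log).expect("ClaudeJson should serialize"))
    .collect()
```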
* Cleanup script changes for workspace 15499061-ec87-4c1c-81b9-994e84d0e75b
* Done! I've fixed the log display issue by replacing the shell echo approach with a temp file approach:
**Changes made to `crates/executors/src/executors/qa_mock.rs`:**
1. **Write logs to a temp file** instead of embedding them in a shell script:
- Creates a temp file at `$TMPDIR/qa_mock_logs_{uuid}.jsonl`
- Writes all logs as newline-delimited JSON
2. **Use `IFS= read -r` to read the file** - this preserves exact content:
- `IFS=` prevents word splitting
- `-r` prevents backslash interpretation
- `echo "$line"` outputs the exact line content
3. **Clean up the temp file** after reading with `rm -f`
**Benefits:**
- No shell escaping needed - file contents are read verbatim
- Each line is output exactly as serialized by serde_json
- Avoids issues with single quotes, backslashes, and escape sequences
- Temp file is automatically cleaned up after use
This should fix the issue where logs were appearing as raw JSON or with corrupted content due to shell escaping issues.
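The command construction in `qa_mock.rs` (full file in the diff below) ends up as:

```rust
// Read the temp file line by line, echoing each line verbatim with a 1s delay, then delete it
let script = format!(
    r#"while IFS= read -r line; do echo "$line"; sleep 1; done < "{}"; rm -f "{}""#,
    log_file.display(),
    log_file.display()
);
```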
* fix formatting
* Build succeeds. I've applied the same fix to the two remaining locations that had newlines in their content:
1. **Line 234** - README tool result: Changed `\n\n` to `\\n\\n`
2. **Line 302** - Bash tool result: Changed `\n` to `\\n`
This matches the pattern the user established in the assistant final message (line 318) where `\\n` is used instead of `\n` to ensure newlines are properly preserved through the serialization and display pipeline.
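Concretely, the two tool-result literals in `generate_mock_logs` now carry escaped newlines (excerpts; full file in the diff below):

```rust
// README tool result (previously used "\n\n")
content: serde_json::json!(
    "# Project README\\n\\nThis is a QA test repository."
),
// Bash tool result (previously used "\n")
content: serde_json::json!("QA test complete\\n"),
```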
* Cleanup script changes for workspace 15499061-ec87-4c1c-81b9-994e84d0e75b
* simplify scripts
* update agents.md
Commit `7de87e9b3a` (parent `7ba1867a8f`), committed via GitHub.
## Files changed
**`AGENTS.md`**

```diff
@@ -29,6 +29,9 @@
 - Prepare SQLx (remote package, postgres): `pnpm run remote:prepare-db`
 - Local NPX build: `pnpm run build:npx` then `pnpm pack` in `npx-cli/`
 
+## Automated QA
+
+- When testing changes by running the application, you should prefer `pnpm run dev:qa` over `pnpm run dev`, which starts the application in a dedicated mode that is optimised for QA testing
 
 ## Coding Style & Naming Conventions
 
 - Rust: `rustfmt` enforced (`rustfmt.toml`); group imports by crate; snake_case modules, PascalCase types.
 - TypeScript/React: ESLint + Prettier (2 spaces, single quotes, 80 cols). PascalCase components, camelCase vars/functions, kebab-case file names where practical.
```
**`Cargo.lock`** (generated; +2 lines)

```diff
@@ -1800,6 +1800,7 @@ dependencies = [
  "json-patch",
  "mcp-types",
  "os_pipe",
+ "rand 0.8.5",
  "regex",
  "reqwest",
  "rustls",
@@ -1821,6 +1822,7 @@ dependencies = [
  "ts-rs 11.0.1",
  "utils",
  "uuid",
+ "walkdir",
  "winsplit",
  "xdg",
 ]
```
**`crates/executors/Cargo.toml`**

```diff
@@ -47,6 +47,12 @@ derivative = "2.2.0"
 reqwest = { workspace = true }
 rustls = { workspace = true }
 eventsource-stream = "0.2"
+walkdir = "2"
+rand = "0.8"
 
 [target.'cfg(windows)'.dependencies]
 winsplit = "0.1.0"
+
+[features]
+default = []
+qa-mode = []
```
**`crates/executors/src/actions/coding_agent_follow_up.rs`** — in `impl Executable for CodingAgentFollowUpRequest` (hunk `@@ -54,17 +54,29 @@`), the follow-up spawn now branches on the feature flag; the pre-existing logic is unchanged inside the `#[cfg(not(feature = "qa-mode"))]` block:

```rust
    ) -> Result<SpawnedChild, ExecutorError> {
        let effective_dir = self.effective_dir(current_dir);

        #[cfg(feature = "qa-mode")]
        {
            tracing::info!("QA mode: using mock executor for follow-up instead of real agent");
            let executor = crate::executors::qa_mock::QaMockExecutor;
            return executor
                .spawn_follow_up(&effective_dir, &self.prompt, &self.session_id, env)
                .await;
        }

        #[cfg(not(feature = "qa-mode"))]
        {
            let executor_profile_id = self.get_executor_profile_id();
            let mut agent = ExecutorConfigs::get_cached()
                .get_coding_agent(&executor_profile_id)
                .ok_or(ExecutorError::UnknownExecutorType(
                    executor_profile_id.to_string(),
                ))?;

            agent.use_approvals(approvals.clone());

            agent
                .spawn_follow_up(&effective_dir, &self.prompt, &self.session_id, env)
                .await
        }
    }
```
**`crates/executors/src/actions/coding_agent_initial.rs`** — in `impl Executable for CodingAgentInitialRequest` (hunk `@@ -48,15 +48,25 @@`), the initial spawn gets the same feature gate:

```rust
    ) -> Result<SpawnedChild, ExecutorError> {
        let effective_dir = self.effective_dir(current_dir);

        #[cfg(feature = "qa-mode")]
        {
            tracing::info!("QA mode: using mock executor instead of real agent");
            let executor = crate::executors::qa_mock::QaMockExecutor;
            return executor.spawn(&effective_dir, &self.prompt, env).await;
        }

        #[cfg(not(feature = "qa-mode"))]
        {
            let executor_profile_id = self.executor_profile_id.clone();
            let mut agent = ExecutorConfigs::get_cached()
                .get_coding_agent(&executor_profile_id)
                .ok_or(ExecutorError::UnknownExecutorType(
                    executor_profile_id.to_string(),
                ))?;

            agent.use_approvals(approvals.clone());

            agent.spawn(&effective_dir, &self.prompt, env).await
        }
    }
```
**`crates/executors/src/executors/mod.rs`**

```diff
@@ -12,6 +12,8 @@ use thiserror::Error;
 use ts_rs::TS;
 use workspace_utils::msg_store::MsgStore;
 
+#[cfg(feature = "qa-mode")]
+use crate::executors::qa_mock::QaMockExecutor;
 use crate::{
     actions::ExecutorAction,
     approvals::ExecutorApprovalService,
@@ -33,6 +35,8 @@ pub mod cursor;
 pub mod droid;
 pub mod gemini;
 pub mod opencode;
+#[cfg(feature = "qa-mode")]
+pub mod qa_mock;
 pub mod qwen;
 
 #[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq, TS)]
@@ -100,6 +104,8 @@ pub enum CodingAgent {
     QwenCode,
     Copilot,
     Droid,
+    #[cfg(feature = "qa-mode")]
+    QaMock(QaMockExecutor),
 }
 
 impl CodingAgent {
@@ -167,6 +173,8 @@ impl CodingAgent {
             ],
             Self::CursorAgent(_) => vec![BaseAgentCapability::SetupHelper],
             Self::Copilot(_) => vec![],
+            #[cfg(feature = "qa-mode")]
+            Self::QaMock(_) => vec![], // QA mock doesn't need special capabilities
         }
     }
 }
```
**`crates/executors/src/executors/qa_mock.rs`** (new file, 402 lines)

```rust
//! QA Mode: Mock executor for testing
//!
//! This module provides a mock executor that:
//! 1. Performs random file operations (create, delete, modify)
//! 2. Streams 10 mock log entries over 10 seconds
//! 3. Outputs logs in ClaudeJson format for compatibility with existing log normalization

use std::{path::Path, process::Stdio, sync::Arc};

use async_trait::async_trait;
use command_group::AsyncCommandGroup;
use rand::seq::SliceRandom as _;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};
use tracing::{info, warn};
use ts_rs::TS;
use workspace_utils::msg_store::MsgStore;

use crate::{
    env::ExecutionEnv,
    executors::{
        ExecutorError, SpawnedChild, StandardCodingAgentExecutor,
        claude::{ClaudeContentItem, ClaudeJson, ClaudeMessage, ClaudeToolData},
    },
    logs::utils::EntryIndexProvider,
};

/// Mock executor for QA testing
#[derive(Debug, Clone, Serialize, Deserialize, Default, PartialEq, TS, JsonSchema)]
pub struct QaMockExecutor;

#[async_trait]
impl StandardCodingAgentExecutor for QaMockExecutor {
    async fn spawn(
        &self,
        current_dir: &Path,
        prompt: &str,
        _env: &ExecutionEnv,
    ) -> Result<SpawnedChild, ExecutorError> {
        info!("QA Mock Executor: spawning mock execution");

        // 1. Perform file operations before spawning the log output process
        perform_file_operations(current_dir).await;

        // 2. Generate mock logs and write to temp file to avoid shell escaping issues
        let logs = generate_mock_logs(prompt);
        let temp_dir = std::env::temp_dir();
        let log_file = temp_dir.join(format!("qa_mock_logs_{}.jsonl", uuid::Uuid::new_v4()));

        // Write all logs to file, one per line
        let content = logs.join("\n") + "\n";
        tokio::fs::write(&log_file, &content)
            .await
            .map_err(|e| ExecutorError::Io(std::io::Error::new(std::io::ErrorKind::Other, e)))?;

        // 3. Create shell script that reads file and outputs with delays
        // Using IFS= read -r to preserve exact content (no word splitting, no backslash interpretation)
        let script = format!(
            r#"while IFS= read -r line; do echo "$line"; sleep 1; done < "{}"; rm -f "{}""#,
            log_file.display(),
            log_file.display()
        );

        let mut cmd = tokio::process::Command::new("sh");
        cmd.arg("-c")
            .arg(&script)
            .current_dir(current_dir)
            .stdout(Stdio::piped())
            .stderr(Stdio::piped());

        let child = cmd.group_spawn().map_err(ExecutorError::Io)?;
        Ok(SpawnedChild::from(child))
    }

    async fn spawn_follow_up(
        &self,
        current_dir: &Path,
        prompt: &str,
        _session_id: &str,
        env: &ExecutionEnv,
    ) -> Result<SpawnedChild, ExecutorError> {
        // QA mode doesn't support real sessions, just spawn fresh
        info!("QA Mock Executor: follow-up request treated as new spawn");
        self.spawn(current_dir, prompt, env).await
    }

    fn normalize_logs(&self, msg_store: Arc<MsgStore>, current_dir: &Path) {
        // Reuse Claude's log processor since we output ClaudeJson format
        let entry_index_provider = EntryIndexProvider::start_from(&msg_store);
        crate::executors::claude::ClaudeLogProcessor::process_logs(
            msg_store,
            current_dir,
            entry_index_provider,
            crate::executors::claude::HistoryStrategy::Default,
        );
    }

    fn default_mcp_config_path(&self) -> Option<std::path::PathBuf> {
        None // QA mock doesn't need MCP config
    }
}

/// Perform random file operations in the worktree
async fn perform_file_operations(dir: &Path) {
    info!("QA Mock: performing file operations in {:?}", dir);

    // Create: qa_created_{uuid}.txt
    let uuid = uuid::Uuid::new_v4();
    let new_file = dir.join(format!("qa_created_{}.txt", uuid));
    match tokio::fs::write(&new_file, "QA mode created this file\n").await {
        Ok(_) => info!("QA Mock: created file {:?}", new_file),
        Err(e) => warn!("QA Mock: failed to create file: {}", e),
    }

    // Find files (excluding .git and binary files)
    let files: Vec<_> = walkdir::WalkDir::new(dir)
        .max_depth(3) // Limit depth to avoid long walks
        .into_iter()
        .filter_map(|e| e.ok())
        .filter(|e| e.file_type().is_file())
        .filter(|e| !e.path().to_string_lossy().contains(".git"))
        .filter(|e| {
            e.path()
                .extension()
                .and_then(|ext| ext.to_str())
                .is_some_and(|ext| ["rs", "ts", "js", "txt", "md", "json"].contains(&ext))
        })
        .collect();

    if files.len() >= 2 {
        // Pick random indices before any await points (thread_rng is not Send)
        let (remove_idx, modify_idx) = {
            let mut rng = rand::thread_rng();
            let mut indices: Vec<usize> = (0..files.len()).collect();
            indices.shuffle(&mut rng);
            (indices.first().copied(), indices.get(1).copied())
        };

        // Remove a random file (first shuffled index)
        if let Some(idx) = remove_idx {
            let file_to_remove = files[idx].path().to_path_buf();
            // Don't remove the file we just created
            if file_to_remove != new_file {
                match tokio::fs::remove_file(&file_to_remove).await {
                    Ok(_) => info!("QA Mock: removed file {:?}", file_to_remove),
                    Err(e) => warn!("QA Mock: failed to remove file: {}", e),
                }
            }
        }

        // Modify a different random file (second shuffled index)
        if let Some(idx) = modify_idx {
            let file_to_modify = files[idx].path().to_path_buf();
            // Don't modify the file we just created
            if file_to_modify != new_file {
                match tokio::fs::read_to_string(&file_to_modify).await {
                    Ok(content) => {
                        let modified = format!(
                            "{}\n// QA modification at {}\n",
                            content,
                            chrono::Utc::now().format("%Y-%m-%d %H:%M:%S UTC")
                        );
                        match tokio::fs::write(&file_to_modify, modified).await {
                            Ok(_) => info!("QA Mock: modified file {:?}", file_to_modify),
                            Err(e) => warn!("QA Mock: failed to write modified file: {}", e),
                        }
                    }
                    Err(e) => warn!("QA Mock: failed to read file for modification: {}", e),
                }
            }
        }
    } else {
        info!(
            "QA Mock: not enough files found for remove/modify operations (found {})",
            files.len()
        );
    }
}

/// Generate 10 mock log entries in ClaudeJson format using strongly-typed structs
fn generate_mock_logs(prompt: &str) -> Vec<String> {
    let session_id = uuid::Uuid::new_v4().to_string();

    let logs: Vec<ClaudeJson> = vec![
        // 1. System init
        ClaudeJson::System {
            subtype: Some("init".to_string()),
            session_id: Some(session_id.clone()),
            cwd: None,
            tools: None,
            model: Some("qa-mock-executor".to_string()),
            api_key_source: Some("unknown".to_string()),
        },
        // 2. Assistant thinking
        ClaudeJson::Assistant {
            message: ClaudeMessage {
                id: Some("msg-qa-1".to_string()),
                message_type: Some("message".to_string()),
                role: "assistant".to_string(),
                model: Some("qa-mock".to_string()),
                content: vec![ClaudeContentItem::Thinking {
                    thinking: "Analyzing the QA task and preparing mock execution...".to_string(),
                }],
                stop_reason: None,
            },
            session_id: Some(session_id.clone()),
        },
        // 3. Read tool use
        ClaudeJson::Assistant {
            message: ClaudeMessage {
                id: Some("msg-qa-2".to_string()),
                message_type: Some("message".to_string()),
                role: "assistant".to_string(),
                model: Some("qa-mock".to_string()),
                content: vec![ClaudeContentItem::ToolUse {
                    id: "qa-tool-1".to_string(),
                    tool_data: ClaudeToolData::Read {
                        file_path: "README.md".to_string(),
                    },
                }],
                stop_reason: None,
            },
            session_id: Some(session_id.clone()),
        },
        // 4. Read tool result
        ClaudeJson::User {
            message: ClaudeMessage {
                id: Some("msg-qa-3".to_string()),
                message_type: Some("message".to_string()),
                role: "user".to_string(),
                model: None,
                content: vec![ClaudeContentItem::ToolResult {
                    tool_use_id: "qa-tool-1".to_string(),
                    content: serde_json::json!(
                        "# Project README\\n\\nThis is a QA test repository."
                    ),
                    is_error: Some(false),
                }],
                stop_reason: None,
            },
            session_id: Some(session_id.clone()),
        },
        // 5. Write tool use
        ClaudeJson::Assistant {
            message: ClaudeMessage {
                id: Some("msg-qa-4".to_string()),
                message_type: Some("message".to_string()),
                role: "assistant".to_string(),
                model: Some("qa-mock".to_string()),
                content: vec![ClaudeContentItem::ToolUse {
                    id: "qa-tool-2".to_string(),
                    tool_data: ClaudeToolData::Write {
                        file_path: "qa_output.txt".to_string(),
                        content: "QA generated content".to_string(),
                    },
                }],
                stop_reason: None,
            },
            session_id: Some(session_id.clone()),
        },
        // 6. Write tool result
        ClaudeJson::User {
            message: ClaudeMessage {
                id: Some("msg-qa-5".to_string()),
                message_type: Some("message".to_string()),
                role: "user".to_string(),
                model: None,
                content: vec![ClaudeContentItem::ToolResult {
                    tool_use_id: "qa-tool-2".to_string(),
                    content: serde_json::json!("File written successfully"),
                    is_error: Some(false),
                }],
                stop_reason: None,
            },
            session_id: Some(session_id.clone()),
        },
        // 7. Bash tool use
        ClaudeJson::Assistant {
            message: ClaudeMessage {
                id: Some("msg-qa-6".to_string()),
                message_type: Some("message".to_string()),
                role: "assistant".to_string(),
                model: Some("qa-mock".to_string()),
                content: vec![ClaudeContentItem::ToolUse {
                    id: "qa-tool-3".to_string(),
                    tool_data: ClaudeToolData::Bash {
                        command: "echo 'QA test complete'".to_string(),
                        description: Some("Run QA test command".to_string()),
                    },
                }],
                stop_reason: None,
            },
            session_id: Some(session_id.clone()),
        },
        // 8. Bash tool result
        ClaudeJson::User {
            message: ClaudeMessage {
                id: Some("msg-qa-7".to_string()),
                message_type: Some("message".to_string()),
                role: "user".to_string(),
                model: None,
                content: vec![ClaudeContentItem::ToolResult {
                    tool_use_id: "qa-tool-3".to_string(),
                    content: serde_json::json!("QA test complete\\n"),
                    is_error: Some(false),
                }],
                stop_reason: None,
            },
            session_id: Some(session_id.clone()),
        },
        // 9. Assistant final message
        ClaudeJson::Assistant {
            message: ClaudeMessage {
                id: Some("msg-qa-8".to_string()),
                message_type: Some("message".to_string()),
                role: "assistant".to_string(),
                model: Some("qa-mock".to_string()),
                content: vec![ClaudeContentItem::Text {
                    text: format!(
                        "QA mode execution completed successfully.\\n\\nI performed the following operations:\\n1. Read README.md\\n2. Created qa_output.txt\\n3. Ran a test command\\nOriginal prompt: {}",
                        prompt
                    ),
                }],
                stop_reason: Some("end_turn".to_string()),
            },
            session_id: Some(session_id.clone()),
        },
        // 10. Result success
        ClaudeJson::Result {
            subtype: Some("success".to_string()),
            is_error: Some(false),
            duration_ms: Some(10000),
            result: None,
            error: None,
            num_turns: Some(3),
            session_id: Some(session_id),
        },
    ];

    // Serialize to JSON strings - this ensures proper escaping
    logs.into_iter()
        .map(|log| serde_json::to_string(&log).expect("ClaudeJson should serialize"))
        .collect()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_generate_mock_logs_count() {
        let logs = generate_mock_logs("test prompt");
        assert_eq!(logs.len(), 10, "Should generate exactly 10 log entries");
    }

    #[test]
    fn test_generate_mock_logs_valid_json() {
        let logs = generate_mock_logs("test prompt");
        for (i, log) in logs.iter().enumerate() {
            let parsed: Result<serde_json::Value, _> = serde_json::from_str(log);
            assert!(
                parsed.is_ok(),
                "Log entry {} should be valid JSON: {}",
                i,
                log
            );
        }
    }

    #[test]
    fn test_generate_mock_logs_deserializes_to_claudejson() {
        let logs = generate_mock_logs("test prompt");
        for (i, log) in logs.iter().enumerate() {
            let parsed: Result<ClaudeJson, _> = serde_json::from_str(log);
            assert!(
                parsed.is_ok(),
                "Log entry {} should deserialize to ClaudeJson: {} - error: {:?}",
                i,
                log,
                parsed.err()
            );
        }
    }

    #[test]
    fn test_escape_special_characters() {
        let logs = generate_mock_logs("test with \"quotes\" and\nnewlines");
        // The final assistant message (index 8) should contain the prompt
        let final_log = &logs[8];
        let parsed: ClaudeJson = serde_json::from_str(final_log).unwrap();

        if let ClaudeJson::Assistant { message, .. } = parsed {
            if let Some(ClaudeContentItem::Text { text }) = message.content.first() {
                assert!(text.contains("test with \"quotes\" and\nnewlines"));
            } else {
                panic!("Expected Text content item");
            }
        } else {
            panic!("Expected Assistant variant");
        }
    }
}
```
**`crates/executors/src/mcp_config.rs`**

```diff
@@ -299,6 +299,8 @@ impl CodingAgent {
             CodingAgent::Codex(_) => Codex,
             CodingAgent::Opencode(_) => Opencode,
             CodingAgent::Copilot(..) => Copilot,
+            #[cfg(feature = "qa-mode")]
+            CodingAgent::QaMock(_) => Passthrough, // QA mock doesn't need MCP
         };
 
         let canonical = PRECONFIGURED_MCP_SERVERS.clone();
```
**`crates/server/Cargo.toml`**

```diff
@@ -51,3 +51,7 @@ regex = "1"
 
 [build-dependencies]
 dotenv = "0.15"
+
+[features]
+default = []
+qa-mode = ["services/qa-mode", "executors/qa-mode"]
```
**`crates/services/Cargo.toml`**

```diff
@@ -6,6 +6,7 @@ edition = "2024"
 [features]
 default = []
 cloud = []
+qa-mode = ["executors/qa-mode"]
 
 [dependencies]
 utils = { path = "../utils" }
```
**`crates/services/src/services/container.rs`**

The mock executor is imported behind the feature flag:

```diff
@@ -27,6 +27,8 @@ use db::{
         workspace_repo::WorkspaceRepo,
     },
 };
+#[cfg(feature = "qa-mode")]
+use executors::executors::qa_mock::QaMockExecutor;
 use executors::{
     actions::{
         ExecutorAction, ExecutorActionType,
```

In the log-normalization spawn (hunk `@@ -754,16 +756,42 @@`), the executor lookup now branches on the feature flag; the non-QA path is unchanged:

```rust
        // Spawn normalizer on populated store
        match executor_action.typ() {
            ExecutorActionType::CodingAgentInitialRequest(request) => {
                #[cfg(feature = "qa-mode")]
                {
                    let executor = QaMockExecutor;
                    executor.normalize_logs(
                        temp_store.clone(),
                        &request.effective_dir(&current_dir),
                    );
                }
                #[cfg(not(feature = "qa-mode"))]
                {
                    let executor = ExecutorConfigs::get_cached()
                        .get_coding_agent_or_default(&request.executor_profile_id);
                    executor.normalize_logs(
                        temp_store.clone(),
                        &request.effective_dir(&current_dir),
                    );
                }
            }
            ExecutorActionType::CodingAgentFollowUpRequest(request) => {
                #[cfg(feature = "qa-mode")]
                {
                    let executor = QaMockExecutor;
                    executor.normalize_logs(
                        temp_store.clone(),
                        &request.effective_dir(&current_dir),
                    );
                }
                #[cfg(not(feature = "qa-mode"))]
                {
                    let executor = ExecutorConfigs::get_cached()
                        .get_coding_agent_or_default(&request.executor_profile_id);
                    executor.normalize_logs(
                        temp_store.clone(),
                        &request.effective_dir(&current_dir),
                    );
                }
            }
            _ => {
                tracing::debug!(
```

The second normalization site (hunk `@@ -1118,15 +1146,23 @@`) gets the same treatment:

```rust
            _ => None,
        }
        {
            #[cfg(feature = "qa-mode")]
            {
                let executor = QaMockExecutor;
                executor.normalize_logs(msg_store, &working_dir);
            }
            #[cfg(not(feature = "qa-mode"))]
            {
                if let Some(executor) =
                    ExecutorConfigs::get_cached().get_coding_agent(executor_profile_id)
                {
                    executor.normalize_logs(msg_store, &working_dir);
                } else {
                    tracing::error!(
                        "Failed to resolve profile '{:?}' for normalization",
                        executor_profile_id
                    );
                }
            }
        }
```
**`crates/services/src/services/filesystem.rs`**

Both repo-discovery entry points short-circuit to the QA repos when the feature is enabled; the original scanning logic is unchanged inside the `#[cfg(not(feature = "qa-mode"))]` blocks.

`list_git_repos` (hunk `@@ -93,12 +93,26 @@`):

```rust
        hard_timeout_ms: u64,
        max_depth: Option<usize>,
    ) -> Result<Vec<DirectoryEntry>, FilesystemError> {
        #[cfg(feature = "qa-mode")]
        {
            tracing::info!("QA mode: returning hardcoded QA repos instead of scanning filesystem");
            return super::qa_repos::get_qa_repos();
        }

        #[cfg(not(feature = "qa-mode"))]
        {
            let base_path = path
                .map(PathBuf::from)
                .unwrap_or_else(Self::get_home_directory);
            Self::verify_directory(&base_path)?;
            self.list_git_repos_with_timeout(
                vec![base_path],
                timeout_ms,
                hard_timeout_ms,
                max_depth,
            )
            .await
        }
    }

    async fn list_git_repos_with_timeout(
```

`list_common_git_repos` (hunk `@@ -151,22 +165,33 @@`):

```rust
        hard_timeout_ms: u64,
        max_depth: Option<usize>,
    ) -> Result<Vec<DirectoryEntry>, FilesystemError> {
        #[cfg(feature = "qa-mode")]
        {
            tracing::info!(
                "QA mode: returning hardcoded QA repos instead of scanning common directories"
            );
            return super::qa_repos::get_qa_repos();
        }

        #[cfg(not(feature = "qa-mode"))]
        {
            let search_strings = ["repos", "dev", "work", "code", "projects"];
            let home_dir = Self::get_home_directory();
            let mut paths: Vec<PathBuf> = search_strings
                .iter()
                .map(|s| home_dir.join(s))
                .filter(|p| p.exists() && p.is_dir())
                .collect();
            paths.insert(0, home_dir);
            if let Some(cwd) = std::env::current_dir().ok()
                && cwd.exists()
                && cwd.is_dir()
            {
                paths.insert(0, cwd);
            }
            self.list_git_repos_with_timeout(paths, timeout_ms, hard_timeout_ms, max_depth)
                .await
        }
    }

    async fn list_git_repos_inner(
```
**`crates/services/src/services/mod.rs`**

```diff
@@ -16,6 +16,8 @@ pub mod notification;
 pub mod oauth_credentials;
 pub mod pr_monitor;
 pub mod project;
+#[cfg(feature = "qa-mode")]
+pub mod qa_repos;
 pub mod queued_message;
 pub mod remote_client;
 pub mod repo;
```
**`crates/services/src/services/qa_repos.rs`** (new file, 115 lines)

```rust
//! QA Mode: Hardcoded repository management for testing
//!
//! This module provides two hardcoded QA repositories that are cloned
//! to a persistent temp directory and returned as the only "recent" repos.

use std::{path::PathBuf, process::Command};

use once_cell::sync::Lazy;
use tracing::{info, warn};

use super::filesystem::{DirectoryEntry, FilesystemError};

/// QA repository URLs and names
const QA_REPOS: &[(&str, &str)] = &[
    ("internal-qa-1", "https://github.com/BloopAI/internal-qa-1"),
    ("internal-qa-2", "https://github.com/BloopAI/internal-qa-2"),
];

/// Persistent directory for QA repos - survives server restarts
static QA_REPOS_DIR: Lazy<PathBuf> = Lazy::new(|| {
    let dir = utils::path::get_vibe_kanban_temp_dir().join("qa-repos");
    if let Err(e) = std::fs::create_dir_all(&dir) {
        warn!("Failed to create QA repos directory: {}", e);
    }
    info!("QA repos directory: {:?}", dir);
    dir
});

/// Get the list of QA repositories, cloning them if necessary.
///
/// This function is called instead of the normal filesystem git repo discovery
/// when QA mode is enabled.
pub fn get_qa_repos() -> Result<Vec<DirectoryEntry>, FilesystemError> {
    let base_dir = &*QA_REPOS_DIR;

    // Ensure repos are cloned
    clone_qa_repos_if_needed(base_dir);

    // Build DirectoryEntry for each repo
    let entries = QA_REPOS
        .iter()
        .filter_map(|(name, _url)| {
            let repo_path = base_dir.join(name);
            if repo_path.exists() && repo_path.join(".git").exists() {
                let last_modified = std::fs::metadata(&repo_path)
                    .ok()
                    .and_then(|m| m.modified().ok())
                    .map(|t| t.elapsed().unwrap_or_default().as_secs());

                Some(DirectoryEntry {
                    name: name.to_string(),
                    path: repo_path,
                    is_directory: true,
                    is_git_repo: true,
                    last_modified,
                })
            } else {
                warn!("QA repo {} not found at {:?}", name, repo_path);
                None
            }
        })
        .collect();

    Ok(entries)
}

/// Clone QA repositories if they don't already exist
fn clone_qa_repos_if_needed(base_dir: &PathBuf) {
    for (name, url) in QA_REPOS {
        let repo_path = base_dir.join(name);

        if repo_path.join(".git").exists() {
            info!("QA repo {} already exists at {:?}", name, repo_path);
            continue;
        }

        info!("Cloning QA repo {} from {} to {:?}", name, url, repo_path);

        // Use git CLI for reliable TLS support (git2 has TLS issues)
        let output = Command::new("git")
            .args(["clone", "--depth", "1", url, &repo_path.to_string_lossy()])
            .output();

        match output {
            Ok(result) if result.status.success() => {
                info!("Successfully cloned QA repo {}", name);
            }
            Ok(result) => {
                warn!(
                    "Failed to clone QA repo {}: {}",
                    name,
                    String::from_utf8_lossy(&result.stderr)
                );
                // Try to clean up partial clone
                let _ = std::fs::remove_dir_all(&repo_path);
            }
            Err(e) => {
                warn!("Failed to run git clone for {}: {}", name, e);
            }
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_qa_repos_dir_is_persistent() {
        let dir1 = &*QA_REPOS_DIR;
        let dir2 = &*QA_REPOS_DIR;
        assert_eq!(dir1, dir2);
        assert!(dir1.ends_with("qa-repos"));
    }
}
```
**`package.json`**

```diff
@@ -22,6 +22,7 @@
     "backend:dev": "BACKEND_PORT=$(node scripts/setup-dev-environment.js backend) npm run backend:dev:watch",
     "backend:check": "cargo check",
     "backend:dev:watch": "DISABLE_WORKTREE_ORPHAN_CLEANUP=1 RUST_LOG=debug cargo watch -w crates -x 'run --bin server'",
+    "dev:qa": "export FRONTEND_PORT=$(node scripts/setup-dev-environment.js frontend) && export BACKEND_PORT=$(node scripts/setup-dev-environment.js backend) && concurrently \"npm run backend:dev:watch:qa\" \"npm run frontend:dev\"",
     "generate-types": "cargo run --bin generate_types",
     "generate-types:check": "cargo run --bin generate_types -- --check",
     "prepare-db": "node scripts/prepare-db.js",
@@ -41,4 +42,4 @@
     "pnpm": ">=8"
   },
   "packageManager": "pnpm@10.13.1"
 }
```