* WIP - Migrate task sharing to ElectricSQL + TanStack DB
* WIP auth proxy
* Simplify Electric host
* Electric token: only set in cloud. Acts like a DB password.
* Add org membership validation
* Fix Electric auth param
* Trigger dev deployment
* Validate where clause
* Simplify check macro
* Cleanup
* Reduce Electric Postgres privileges

  Implement "Manual Mode (Least Privilege)", giving Electric access to specific tables through sqlx migrations.
  https://electric-sql.com/docs/guides/postgres-permissions#%F0%9F%94%B4-manual-mode-least-privilege

* Fetch task assignee user name
* Create a local task to link with a shared task assigned to the current user
* chore: code cleanup
* chore: code cleanup
* chore: unify task status serialization format (use lowercase)
* lint fix
* chore: remove backend ws client
* chore: remove unused deps
* Disable editing shared tasks when the user is logged out

  Migrate UserSystemProvider to TanStack Query; a browser caching bug prevented the login state from updating without a page reload.

* Auto-unlink non-existent shared tasks
* Invalidate useLiveQuery cache on sign-in change; also display local shared tasks when the user is signed out
* Set VITE_VK_SHARED_API_BASE in CI
* Rebase cleanup
* Re-order migration
* Increase Node build memory in CI
* Set up CORS properly
* Prevent linking non-existent shared tasks
* Fix login dialog in background (#1413)
* Unlink already-linked projects when linking again (vibe-kanban) (#1414)

  Fixes the bug where re-linking a project to a different remote project would leave orphaned `shared_task_id` references.

  **File modified:** `crates/server/src/routes/projects.rs:167-208`

  **Change:** `apply_remote_project_link` now:

  1. Fetches the existing project to check whether it is already linked
  2. If it is linked to a **different** remote project, uses a transaction to:
     - Clear all `shared_task_id` associations for the old remote project
     - Set the new `remote_project_id`
  3. If there is no existing link, sets the new link directly
  4. If linking to the **same** remote project, does nothing (already linked)

  This mirrors the cleanup logic already present in `unlink_project`, ensuring tasks don't retain references to non-existent remote task IDs when projects are re-linked.

  Why only the re-linking path uses a transaction: clearing the old task associations and setting the new ID must succeed or fail together, so they need atomicity. The fresh-link case is a single UPDATE, which is inherently atomic. There is a theoretical race between `find_by_id` and `set_remote_project_id` (another request could link the project in between), which a transaction with row-level locking would close, but the single-operation path follows the existing pattern: `unlink_project` only uses a transaction because it performs two operations. A sketch of this flow follows the log below.

* Use `Extension(project)` like other handlers in the file, avoiding the redundant database lookup
* Cleanup script changes for task attempt ce9a0ae5-bedc-4b45-ac96-22d2c013b5bd
* `apply_remote_project_link` now returns a conflict error if the project is already linked, requiring an explicit unlink before linking to a different remote project
* The frontend now only shows unlinked local projects in the selection dropdown, matching the backend behavior that requires explicit unlinking before re-linking
* Prevent modification of shared tasks offline
* Reset OAuth modal on login/logout events
* Darken success alert font colour (#1416)

---------

Co-authored-by: Alex Netsch <alex@bloop.ai>
Co-authored-by: Louis Knight-Webb <louis@bloop.ai>
Co-authored-by: Gabriel Gordon-Hall <gabriel@bloop.ai>
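
For reference, here is a minimal sketch of the transactional re-link flow discussed in #1414. It assumes a sqlx `SqlitePool` and assumed table/column names (`projects.remote_project_id`, `tasks.shared_task_id`) — these are illustrations, not the repo's actual schema or model API; the real handler lives in `crates/server/src/routes/projects.rs`.

```rust
use sqlx::SqlitePool;
use uuid::Uuid;

/// Sketch only: table and column names are assumptions.
async fn apply_remote_project_link(
    pool: &SqlitePool,
    project_id: Uuid,
    new_remote_id: Uuid,
) -> Result<(), sqlx::Error> {
    // Fetch the existing link, if any.
    let existing: Option<Uuid> =
        sqlx::query_scalar("SELECT remote_project_id FROM projects WHERE id = ?")
            .bind(project_id)
            .fetch_one(pool)
            .await?;

    match existing {
        // Already linked to the same remote project: nothing to do.
        Some(id) if id == new_remote_id => Ok(()),
        // Linked to a different remote project: clear the stale
        // shared_task_id references and set the new link atomically.
        Some(_) => {
            let mut tx = pool.begin().await?;
            sqlx::query("UPDATE tasks SET shared_task_id = NULL WHERE project_id = ?")
                .bind(project_id)
                .execute(&mut *tx)
                .await?;
            sqlx::query("UPDATE projects SET remote_project_id = ? WHERE id = ?")
                .bind(new_remote_id)
                .bind(project_id)
                .execute(&mut *tx)
                .await?;
            tx.commit().await
        }
        // Fresh link: a single UPDATE is already atomic, no transaction needed.
        None => sqlx::query("UPDATE projects SET remote_project_id = ? WHERE id = ?")
            .bind(new_remote_id)
            .bind(project_id)
            .execute(pool)
            .await
            .map(|_| ()),
    }
}
```

Note that the PR's final behavior is stricter than this sketch: rather than silently clearing the old associations, the handler returns a conflict error and requires an explicit unlink first.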
use std::{collections::HashMap, sync::Arc};

use async_trait::async_trait;
use db::DBService;
use deployment::{Deployment, DeploymentError, RemoteClientNotConfigured};
use executors::profile::ExecutorConfigs;
use services::services::{
    analytics::{AnalyticsConfig, AnalyticsContext, AnalyticsService, generate_user_id},
    approvals::Approvals,
    auth::AuthContext,
    config::{Config, load_config_from_file, save_config_to_file},
    container::ContainerService,
    events::EventService,
    file_search_cache::FileSearchCache,
    filesystem::FilesystemService,
    git::GitService,
    image::ImageService,
    oauth_credentials::OAuthCredentials,
    queued_message::QueuedMessageService,
    remote_client::{RemoteClient, RemoteClientError},
    share::{ShareConfig, SharePublisher},
};
use tokio::sync::RwLock;
use utils::{
    api::oauth::LoginStatus,
    assets::{config_path, credentials_path},
    msg_store::MsgStore,
};
use uuid::Uuid;

use crate::container::LocalContainerService;

mod command;
pub mod container;
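
/// Local deployment: wires together the local services (config, DB, git,
/// containers, events) plus the optional remote sharing client and the
/// in-memory OAuth handoff state.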
#[derive(Clone)]
pub struct LocalDeployment {
    config: Arc<RwLock<Config>>,
    user_id: String,
    db: DBService,
    analytics: Option<AnalyticsService>,
    container: LocalContainerService,
    git: GitService,
    image: ImageService,
    filesystem: FilesystemService,
    events: EventService,
    file_search_cache: Arc<FileSearchCache>,
    approvals: Approvals,
    queued_message_service: QueuedMessageService,
    share_publisher: Result<SharePublisher, RemoteClientNotConfigured>,
    share_config: Option<ShareConfig>,
    remote_client: Result<RemoteClient, RemoteClientNotConfigured>,
    auth_context: AuthContext,
    oauth_handoffs: Arc<RwLock<HashMap<Uuid, PendingHandoff>>>,
}
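
/// OAuth state held between starting a login and completing the handoff.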
#[derive(Debug, Clone)]
struct PendingHandoff {
    provider: String,
    app_verifier: String,
}

#[async_trait]
impl Deployment for LocalDeployment {
    async fn new() -> Result<Self, DeploymentError> {
        let mut raw_config = load_config_from_file(&config_path()).await;

        let profiles = ExecutorConfigs::get_cached();
        if !raw_config.onboarding_acknowledged
            && let Ok(recommended_executor) = profiles.get_recommended_executor_profile().await
        {
            raw_config.executor_profile = recommended_executor;
        }

        // Check if app version has changed and set release notes flag
        {
            let current_version = utils::version::APP_VERSION;
            let stored_version = raw_config.last_app_version.as_deref();

            if stored_version != Some(current_version) {
                // Show release notes only if this is an upgrade (not first install)
                raw_config.show_release_notes = stored_version.is_some();
                raw_config.last_app_version = Some(current_version.to_string());
            }
        }

        // Always save config (may have been migrated or version updated)
        save_config_to_file(&raw_config, &config_path()).await?;

        let config = Arc::new(RwLock::new(raw_config));
        let user_id = generate_user_id();
        let analytics = AnalyticsConfig::new().map(AnalyticsService::new);
        let git = GitService::new();
        let msg_stores = Arc::new(RwLock::new(HashMap::new()));
        let filesystem = FilesystemService::new();

        // Create shared components for EventService
        let events_msg_store = Arc::new(MsgStore::new());
        let events_entry_count = Arc::new(RwLock::new(0));

        // Create DB with event hooks
        let db = {
            let hook = EventService::create_hook(
                events_msg_store.clone(),
                events_entry_count.clone(),
                DBService::new().await?, // Temporary DB service for the hook
            );
            DBService::new_with_after_connect(hook).await?
        };

        let image = ImageService::new(db.clone().pool)?;
        {
            let image_service = image.clone();
            tokio::spawn(async move {
                tracing::info!("Starting orphaned image cleanup...");
                if let Err(e) = image_service.delete_orphaned_images().await {
                    tracing::error!("Failed to clean up orphaned images: {}", e);
                }
            });
        }

        let approvals = Approvals::new(msg_stores.clone());
        let queued_message_service = QueuedMessageService::new();

        let share_config = ShareConfig::from_env();

        let oauth_credentials = Arc::new(OAuthCredentials::new(credentials_path()));
        if let Err(e) = oauth_credentials.load().await {
            tracing::warn!(?e, "failed to load OAuth credentials");
        }

        let profile_cache = Arc::new(RwLock::new(None));
        let auth_context = AuthContext::new(oauth_credentials.clone(), profile_cache.clone());
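
        // Resolve the shared API base URL: prefer the runtime environment
        // variable, falling back to a value baked in at compile time (if any).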
        let api_base = std::env::var("VK_SHARED_API_BASE")
            .ok()
            .or_else(|| option_env!("VK_SHARED_API_BASE").map(|s| s.to_string()));

        let remote_client = match api_base {
            Some(url) => match RemoteClient::new(&url, auth_context.clone()) {
                Ok(client) => {
                    tracing::info!("Remote client initialized with URL: {}", url);
                    Ok(client)
                }
                Err(e) => {
                    tracing::error!(?e, "failed to create remote client");
                    Err(RemoteClientNotConfigured)
                }
            },
            None => {
                tracing::info!("VK_SHARED_API_BASE not set; remote features disabled");
                Err(RemoteClientNotConfigured)
            }
        };
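
        // Task sharing requires the remote client; if it isn't configured,
        // carry the same NotConfigured error through to the publisher.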
        let share_publisher = remote_client
            .as_ref()
            .map(|client| SharePublisher::new(db.clone(), client.clone()))
            .map_err(|e| *e);

        let oauth_handoffs = Arc::new(RwLock::new(HashMap::new()));

        // We need to make analytics accessible to the ContainerService
        // TODO: Handle this more gracefully
        let analytics_ctx = analytics.as_ref().map(|s| AnalyticsContext {
            user_id: user_id.clone(),
            analytics_service: s.clone(),
        });
        let container = LocalContainerService::new(
            db.clone(),
            msg_stores.clone(),
            config.clone(),
            git.clone(),
            image.clone(),
            analytics_ctx,
            approvals.clone(),
            queued_message_service.clone(),
            share_publisher.clone(),
        )
        .await;

        let events = EventService::new(db.clone(), events_msg_store, events_entry_count);

        let file_search_cache = Arc::new(FileSearchCache::new());

        let deployment = Self {
            config,
            user_id,
            db,
            analytics,
            container,
            git,
            image,
            filesystem,
            events,
            file_search_cache,
            approvals,
            queued_message_service,
            share_publisher,
            share_config: share_config.clone(),
            remote_client,
            auth_context,
            oauth_handoffs,
        };

        Ok(deployment)
    }

    fn user_id(&self) -> &str {
        &self.user_id
    }

    fn config(&self) -> &Arc<RwLock<Config>> {
        &self.config
    }

    fn db(&self) -> &DBService {
        &self.db
    }

    fn analytics(&self) -> &Option<AnalyticsService> {
        &self.analytics
    }

    fn container(&self) -> &impl ContainerService {
        &self.container
    }

    fn git(&self) -> &GitService {
        &self.git
    }

    fn image(&self) -> &ImageService {
        &self.image
    }

    fn filesystem(&self) -> &FilesystemService {
        &self.filesystem
    }

    fn events(&self) -> &EventService {
        &self.events
    }

    fn file_search_cache(&self) -> &Arc<FileSearchCache> {
        &self.file_search_cache
    }

    fn approvals(&self) -> &Approvals {
        &self.approvals
    }

    fn queued_message_service(&self) -> &QueuedMessageService {
        &self.queued_message_service
    }

    fn share_publisher(&self) -> Result<SharePublisher, RemoteClientNotConfigured> {
        self.share_publisher.clone()
    }

    fn auth_context(&self) -> &AuthContext {
        &self.auth_context
    }
}

impl LocalDeployment {
    pub fn remote_client(&self) -> Result<RemoteClient, RemoteClientNotConfigured> {
        self.remote_client.clone()
    }
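
    /// Resolve the current login status: missing credentials mean logged out;
    /// otherwise use the cached profile, or fetch it from the remote API
    /// (clearing stored credentials on an auth failure).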
    pub async fn get_login_status(&self) -> LoginStatus {
        if self.auth_context.get_credentials().await.is_none() {
            self.auth_context.clear_profile().await;
            return LoginStatus::LoggedOut;
        }

        if let Some(cached_profile) = self.auth_context.cached_profile().await {
            return LoginStatus::LoggedIn {
                profile: cached_profile,
            };
        }

        let Ok(client) = self.remote_client() else {
            return LoginStatus::LoggedOut;
        };

        match client.profile().await {
            Ok(profile) => {
                self.auth_context.set_profile(profile.clone()).await;
                LoginStatus::LoggedIn { profile }
            }
            Err(RemoteClientError::Auth) => {
                let _ = self.auth_context.clear_credentials().await;
                self.auth_context.clear_profile().await;
                LoginStatus::LoggedOut
            }
            Err(_) => LoginStatus::LoggedOut,
        }
    }
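
    /// Stash OAuth handoff state under a one-time id until the provider
    /// redirects back; `take_oauth_handoff` consumes it exactly once.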
    pub async fn store_oauth_handoff(
        &self,
        handoff_id: Uuid,
        provider: String,
        app_verifier: String,
    ) {
        self.oauth_handoffs.write().await.insert(
            handoff_id,
            PendingHandoff {
                provider,
                app_verifier,
            },
        );
    }

    pub async fn take_oauth_handoff(&self, handoff_id: &Uuid) -> Option<(String, String)> {
        self.oauth_handoffs
            .write()
            .await
            .remove(handoff_id)
            .map(|state| (state.provider, state.app_verifier))
    }

    pub fn share_config(&self) -> Option<&ShareConfig> {
        self.share_config.as_ref()
    }
}