Conversation
Add the config flag, storage primitives, atomic flagged persistence, import preparation hook, and sync-prune backup helper needed by the codex-multi-auth sync flow. Co-authored-by: Codex <noreply@openai.com>
📝 Walkthrough

Adds cross-process and in-process locking and Windows-safe atomic writes for config mutations, new get/set helpers for an experimental sync toggle, expanded storage transactional/backup/preview APIs with robust file helpers, and a new token-redacting sync-prune backup module plus tests.

Changes
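The in-process half of that locking can be sketched as a promise-chain mutex. Everything below (the `withConfigMutationLock` and `configMutationMutex` names, the exact chaining) is an illustrative assumption, not the PR's actual code:

```typescript
// Hypothetical sketch of an in-process promise-chain mutex like the
// configMutationMutex the walkthrough describes; names are assumed.
let configMutationMutex: Promise<unknown> = Promise.resolve();

function withConfigMutationLock<T>(mutation: () => Promise<T>): Promise<T> {
  const run = configMutationMutex.then(mutation, mutation);
  configMutationMutex = run.catch(() => undefined); // keep the chain alive after failures
  return run;
}

// Two concurrent mutations: the slower first caller still completes before the second.
const order: number[] = [];
await Promise.all([
  withConfigMutationLock(async () => {
    await new Promise((resolve) => setTimeout(resolve, 20));
    order.push(1);
  }),
  withConfigMutationLock(async () => {
    order.push(2);
  }),
]);
console.log(order); // [ 1, 2 ]
```

Each caller chains onto the previous mutation's promise, so mutations in one process serialize even when started concurrently.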
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller as Caller
    participant InProcLock as In-Process Lock (configMutationMutex)
    participant ProcLock as File Process Lock
    participant FS as File System
    participant JSON as JSON Parser/Validator
    Caller->>InProcLock: withConfigMutationLock(mutation)
    activate InProcLock
    InProcLock->>ProcLock: withConfigProcessLock()
    activate ProcLock
    ProcLock->>FS: read CONFIG_PATH
    FS-->>JSON: raw content
    JSON->>JSON: parse & validate
    JSON-->>ProcLock: config object
    ProcLock->>Caller: apply mutation
    Caller->>FS: write temp file
    FS-->>ProcLock: temp file written
    ProcLock->>FS: rename temp -> CONFIG_PATH (retry on EPERM/EBUSY)
    FS-->>ProcLock: rename success
    ProcLock-->>InProcLock: release
    deactivate ProcLock
    InProcLock-->>Caller: mutation result
    deactivate InProcLock
```
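The temp-write-and-rename step in the diagram above can be sketched as follows; the function name, temp-file suffix, and retry bounds are assumptions for illustration:

```typescript
import { promises as fs } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical sketch: write the updated config to a temp file, then rename it
// over the config path, retrying the rename on the Windows sharing-violation
// codes EPERM/EBUSY. Names and retry bounds are illustrative.
async function writeConfigAtomically(configPath: string, config: unknown): Promise<void> {
  const tempPath = `${configPath}.${process.pid}.tmp`;
  await fs.writeFile(tempPath, JSON.stringify(config, null, 2), { mode: 0o600 });
  for (let attempt = 0; ; attempt += 1) {
    try {
      await fs.rename(tempPath, configPath); // atomic replace of the destination
      return;
    } catch (error) {
      const code = (error as NodeJS.ErrnoException).code;
      if ((code === "EPERM" || code === "EBUSY") && attempt < 4) {
        await new Promise((resolve) => setTimeout(resolve, 10 * 2 ** attempt));
        continue;
      }
      await fs.unlink(tempPath).catch(() => undefined); // clean up temp file on failure
      throw error;
    }
  }
}

const configPath = join(await fs.mkdtemp(join(tmpdir(), "cfg-")), "config.json");
await writeConfigAtomically(configPath, { experimental: { enabled: true } });
console.log(JSON.parse(await fs.readFile(configPath, "utf8")).experimental.enabled); // true
```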
```mermaid
sequenceDiagram
    participant Caller as Caller
    participant TxLayer as withFlaggedAccountsTransaction
    participant Storage as Storage Layer
    participant FS as File System
    Caller->>TxLayer: withFlaggedAccountsTransaction(handler)
    activate TxLayer
    TxLayer->>Storage: loadFlaggedAccountsUnlocked()
    Storage->>FS: read flagged file
    FS-->>Storage: flagged data
    Storage-->>TxLayer: current flagged snapshot
    Caller->>Caller: handler modifies snapshot
    Caller->>TxLayer: persist(updated)
    TxLayer->>Storage: saveFlaggedAccountsUnlocked(updated)
    Storage->>FS: write flagged file
    FS-->>Storage: write success
    Storage-->>TxLayer: persisted
    TxLayer-->>Caller: transaction result
    deactivate TxLayer
```
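A minimal sketch of the transaction shape in this diagram, with an in-memory store standing in for the flagged file (all names assumed, not the PR's real API):

```typescript
// Hypothetical transaction helper: load a snapshot, let the handler mutate it,
// and persist only through the provided callback. An in-memory object stands
// in for the flagged-accounts file.
type Flagged = { flagged: string[] };

let store: Flagged = { flagged: ["old"] };
const loadFlaggedUnlocked = async (): Promise<Flagged> => structuredClone(store);
const saveFlaggedUnlocked = async (next: Flagged): Promise<void> => {
  store = structuredClone(next);
};

async function withFlaggedTransaction<T>(
  handler: (snapshot: Flagged, persist: (next: Flagged) => Promise<void>) => Promise<T>,
): Promise<T> {
  const snapshot = await loadFlaggedUnlocked();
  return handler(snapshot, saveFlaggedUnlocked);
}

const result = await withFlaggedTransaction(async (snapshot, persist) => {
  snapshot.flagged.push("new");
  await persist(snapshot);
  return snapshot.flagged.length;
});
console.log(result, store.flagged); // result is 2; the store now contains "new"
```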
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Align config rename retry behavior with storage, log backup permission failures, and add regression coverage for the new foundation storage and config APIs. Co-authored-by: Codex <noreply@openai.com>
1 issue found across 4 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="test/plugin-config.test.ts">
<violation number="1" location="test/plugin-config.test.ts:773">
P2: The malformed-config mutation test leaves `fs.promises.mkdir` unstubbed, so it touches the real filesystem before asserting parse failure.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
Document the non-reentrant flagged transaction helper and stub mkdir in the malformed config test. Co-authored-by: Codex <noreply@openai.com>
Avoid reopening the accounts file while building the combined snapshot and cover the single-read path with a regression test. Co-authored-by: Codex <noreply@openai.com>
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="lib/storage.ts">
<violation number="1" location="lib/storage.ts:1045">
P2: `accountsSnapshot ?? ...` incorrectly re-reads storage when the provided snapshot is `null`; treat only `undefined` as “no snapshot provided.”</violation>
</file>
Preserve null account snapshots, retry raw backup copies on Windows lock errors, and constrain prepare() to filtered imports with regression coverage. Co-authored-by: Codex <noreply@openai.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/config.ts`:
- Around line 538-576: The savePluginConfigMutation flow
(savePluginConfigMutation, withConfigMutationLock, CONFIG_PATH, tempPath,
renameConfigWithWindowsRetry) is racy across processes because the in-memory
mutex doesn't prevent other processes from reading/modifying CONFIG_PATH; fix by
adding an OS-visible coordination step: either acquire a filesystem
lock/lockfile (e.g., create a .lock using O_EXCL or use a cross-process lock
library) around the read/modify/write, or implement optimistic concurrency by
after computing next and before rename re-reading CONFIG_PATH (or checking
mtime/checksum) and if it changed, recompute next from the fresh contents and
retry a bounded number of times; ensure temp file cleanup and that rename still
uses renameConfigWithWindowsRetry.
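The lockfile option from the prompt above might look like this; `withProcessLock`, the retry bound, and the `.lock` suffix are illustrative assumptions, not the fix actually applied:

```typescript
import { promises as fs } from "node:fs";
import { dirname, join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical sketch: an OS-visible ".lock" file created with the exclusive
// "wx" flag guards the read/modify/write, so other processes contend too.
async function withProcessLock<T>(configPath: string, fn: () => Promise<T>): Promise<T> {
  const lockPath = `${configPath}.lock`;
  await fs.mkdir(dirname(configPath), { recursive: true });
  for (let attempt = 0; attempt < 8; attempt += 1) {
    try {
      const handle = await fs.open(lockPath, "wx", 0o600); // throws EEXIST if held
      try {
        return await fn();
      } finally {
        await handle.close();
        await fs.unlink(lockPath).catch(() => undefined);
      }
    } catch (error) {
      if ((error as NodeJS.ErrnoException).code === "EEXIST") {
        await new Promise((resolve) => setTimeout(resolve, 10 * 2 ** attempt)); // bounded backoff
        continue;
      }
      throw error;
    }
  }
  throw new Error(`timed out waiting for ${lockPath}`);
}

// Two lock-guarded read-modify-write cycles never tear each other's update.
const configPath = join(await fs.mkdtemp(join(tmpdir(), "cfg-")), "config.json");
await fs.writeFile(configPath, JSON.stringify({ n: 0 }));
const bump = () =>
  withProcessLock(configPath, async () => {
    const cfg = JSON.parse(await fs.readFile(configPath, "utf8"));
    cfg.n += 1;
    await fs.writeFile(configPath, JSON.stringify(cfg));
  });
await Promise.all([bump(), bump()]);
console.log(JSON.parse(await fs.readFile(configPath, "utf8")).n); // 2
```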
In `@lib/storage.ts`:
- Around line 1335-1341: importAccounts() currently trusts the prepare hook to
only filter the incoming normalized snapshot, but the hook may add/modify
accounts; enforce the subset invariant by validating after calling
prepare(normalized, existing) and normalizeAccountStorage(prepared) that every
account in preparedNormalized.accounts exists in the original
normalized.accounts (compare by stable unique key/identity used by
normalizeAccountStorage, e.g. account.id or the canonical identity field) and
that no account identities were rewritten; if any account is new or mismatched,
throw an error explaining prepare() must return a subset (alternatively narrow
the hook API to a filter-only callback), ensuring skippedByPrepare remains
accurate.
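The subset check described above can be sketched like this; the `Account` shape and the `enforcePrepareSubset` name are assumptions, with `id` standing in for whatever stable identity `normalizeAccountStorage` actually uses:

```typescript
// Hypothetical subset invariant: every account the prepare() hook returns must
// already exist (same id) in the normalized input; new identities are rejected.
type Account = { id: string; label: string };

function enforcePrepareSubset(original: Account[], prepared: Account[]): Account[] {
  const byId = new Map(original.map((account) => [account.id, account]));
  for (const account of prepared) {
    if (!byId.has(account.id)) {
      throw new Error(`prepare() must return a subset; unknown account ${account.id}`);
    }
  }
  return prepared;
}

const normalized: Account[] = [
  { id: "a", label: "work" },
  { id: "b", label: "home" },
];

// A filter-only hook passes the check.
const filtered = enforcePrepareSubset(normalized, normalized.filter((a) => a.id === "a"));
console.log(filtered.length); // 1

// A hook that injects a new account is rejected.
let rejected = false;
try {
  enforcePrepareSubset(normalized, [...normalized, { id: "c", label: "injected" }]);
} catch {
  rejected = true;
}
console.log(rejected); // true
```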
- Around line 1042-1058: The current removeOrphanedFlaggedAccounts uses the
nullish coalescing (accountsSnapshot ?? loadAccountsInternal(...)) which treats
an explicit null snapshot the same as "no snapshot provided" and causes an
unnecessary second read; change the logic to check for undefined specifically:
if accountsSnapshot === undefined then call
loadAccountsInternal(saveAccountsUnlocked), if accountsSnapshot === null then
treat as "accounts file absent" and return the original storage without loading,
and otherwise use the provided snapshot; update references in
removeOrphanedFlaggedAccounts and any call site logic around
loadAccountAndFlaggedStorageSnapshot, loadAccountsInternal, accountsSnapshot and
saveFlaggedAccountsUnlocked accordingly so explicit null stops further reads and
leaves flagged entries untouched.
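The `undefined`-vs-`null` distinction above can be demonstrated with a toy loader (all names assumed for illustration):

```typescript
// Hypothetical sketch: only `undefined` means "no snapshot provided"; an
// explicit `null` means "accounts file absent" and must not trigger a re-read.
type Snapshot = { accounts: string[] };

let readCount = 0;
function loadAccountsInternal(): Snapshot {
  readCount += 1; // each call simulates an open of the token-bearing accounts file
  return { accounts: ["a", "b"] };
}

function resolveSnapshot(accountsSnapshot?: Snapshot | null): Snapshot | null {
  if (accountsSnapshot === undefined) return loadAccountsInternal(); // no snapshot given
  return accountsSnapshot; // null (file absent) or a provided snapshot passes through
}

console.log(resolveSnapshot()?.accounts.length); // 2 (one real read)
console.log(resolveSnapshot(null));              // null, and no extra read
console.log(readCount);                          // 1
```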
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2ab3f5ea-a385-4e66-b430-1e37bad82fd9
📒 Files selected for processing (7)
- lib/config.ts
- lib/schemas.ts
- lib/storage.ts
- lib/sync-prune-backup.ts
- test/plugin-config.test.ts
- test/storage.test.ts
- test/sync-prune-backup.test.ts
Guard config mutations with a lockfile and enforce prepare() as a true subset of imported accounts. Co-authored-by: Codex <noreply@openai.com>
Reuse the loaded accounts snapshot in flagged transactions and cover the single-read path with a regression test. Co-authored-by: Codex <noreply@openai.com>
@coderabbitai review

@ndycode I have started the AI code review. It will take a few minutes to complete.

✅ Actions performed: Review triggered.
4 issues found across 7 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="lib/config.ts">
<violation number="1" location="lib/config.ts:135">
P1: Opening the lock file before ensuring the config directory exists can throw `ENOENT` on first run, so config writes fail on fresh environments.</violation>
</file>
<file name="test/plugin-config.test.ts">
<violation number="1" location="test/plugin-config.test.ts:784">
P2: The malformed-config test does not mock the lockfile `open`/`unlink` calls, so it depends on real `~/.opencode` state and may fail before hitting the invalid-JSON path.</violation>
</file>
<file name="test/sync-prune-backup.test.ts">
<violation number="1" location="test/sync-prune-backup.test.ts:82">
P2: This line reassigns the nested object instead of mutating it, so the deep-clone check can pass even when only a shallow clone is used.</violation>
</file>
<file name="lib/storage.ts">
<violation number="1" location="lib/storage.ts:1367">
P1: `prepare()` is called with live mutable storage objects, so it can mutate inputs before validation and bypass the subset constraint.</violation>
</file>
Bound config lock retries, create the config directory before locking, and isolate tests around the lockfile and prune backup clone checks. Co-authored-by: Codex <noreply@openai.com>
🧹 Nitpick comments (2)
lib/storage.ts (1)
1468-1469: Verify the skipped calculation accounts for all scenarios.

The calculation `skipped = skippedByPrepare + Math.max(0, preparedNormalized.accounts.length - imported)` may double-count in edge cases. When `prepare()` filters accounts and those filtered accounts would also have been deduplicated, the total `skipped` could exceed `normalized.accounts.length - imported`. Consider verifying the calculation against the original import file's account count:

```diff
- const skipped = skippedByPrepare + Math.max(0, preparedNormalized.accounts.length - imported);
+ const skipped = Math.max(0, normalized.accounts.length - imported);
```

Alternatively, if the intent is to separately track prepare-filtered vs dedupe-skipped, the current approach works but may benefit from a comment explaining the semantics.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/storage.ts` around lines 1468-1469, the skipped count can double-count; replace the current line computing skipped with a single-source calculation using the original import count (e.g., use normalized.accounts.length) so skipped = Math.max(0, normalized.accounts.length - imported) rather than adding skippedByPrepare, or if you intentionally want two separate metrics keep the current math but add a clarifying comment next to skippedByPrepare and the skipped computation; update the line that currently references preparedNormalized.accounts.length and imported (and the variables deduplicatedAccounts, existingAccounts) to use normalized.accounts.length to accurately reflect total skipped from the original file.

lib/config.ts (1)
133-164: Stale lockfile may block subsequent mutations if process crashes.

If a process crashes or is killed between creating the lockfile and unlinking it (lines 140-146), the lockfile will remain and block all future config mutations until manually removed.
Consider adding stale lock detection (e.g., check lockfile age or write PID to it):
♻️ Proposed enhancement for stale lock handling
```diff
 async function withConfigProcessLock<T>(fn: () => Promise<T>): Promise<T> {
   let lastError: NodeJS.ErrnoException | null = null;
   await fs.mkdir(dirname(CONFIG_PATH), { recursive: true });
   for (let attempt = 0; attempt < CONFIG_LOCK_RETRY_ATTEMPTS; attempt += 1) {
     try {
       const handle = await fs.open(CONFIG_LOCK_PATH, "wx", 0o600);
+      // Write PID to help identify stale locks
+      await handle.write(`${process.pid}\n`);
       try {
         return await fn();
       } finally {
         await handle.close();
         await fs.unlink(CONFIG_LOCK_PATH).catch(() => undefined);
       }
     } catch (error) {
       const code = (error as NodeJS.ErrnoException).code;
       if (code === "EEXIST") {
+        // Check if lock is stale (e.g., > 30 seconds old)
+        try {
+          const stat = await fs.stat(CONFIG_LOCK_PATH);
+          if (Date.now() - stat.mtimeMs > 30_000) {
+            await fs.unlink(CONFIG_LOCK_PATH).catch(() => undefined);
+            continue; // Retry immediately after removing stale lock
+          }
+        } catch {
+          // Lock may have been released, retry
+        }
         lastError = error as NodeJS.ErrnoException;
         await new Promise((resolve) =>
           setTimeout(
             resolve,
             Math.min(CONFIG_LOCK_RETRY_BASE_DELAY_MS * 2 ** attempt, CONFIG_LOCK_RETRY_MAX_DELAY_MS),
           ),
         );
         continue;
       }
       throw error;
     }
   }
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@lib/storage.ts`:
- Around line 1468-1469: The skipped count can double-count; replace the current
line computing skipped with a single-source calculation using the original
import count (e.g., use normalized.accounts.length) so skipped = Math.max(0,
normalized.accounts.length - imported) rather than adding skippedByPrepare, or
if you intentionally want two separate metrics keep the current math but add a
clarifying comment next to skippedByPrepare and the skipped computation; update
the line that currently references preparedNormalized.accounts.length and
imported (and the variables deduplicatedAccounts, existingAccounts) to use
normalized.accounts.length to accurately reflect total skipped from the original
file.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 44407809-a551-40e2-9aca-1ced35cb2243
📒 Files selected for processing (5)
- lib/config.ts
- lib/storage.ts
- test/plugin-config.test.ts
- test/storage.test.ts
- test/sync-prune-backup.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- test/sync-prune-backup.test.ts
Retry through stale config lockfiles and cover first-run config-guard paths with focused tests. Co-authored-by: Codex <noreply@openai.com>
Pull request overview
This PR adds the foundational storage/config plumbing needed for upcoming codex-multi-auth sync work, including safer cross-process config mutation, flagged-account transaction/snapshot helpers, import preview support, and backup utilities (raw and token-redacted).
Changes:
- Added process-visible + in-process locking for config mutations and introduced a sync feature toggle in config/schema.
- Added flagged-account transaction/snapshot helpers, import preview against an arbitrary existing snapshot, and raw accounts backup with Windows retry handling.
- Added prune-backup payload generation that deep-clones data and redacts token fields, plus accompanying tests.
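A sketch of what such a prune-backup payload builder might look like; the token field names and the `createPruneBackupPayload`/`redactTokens` helpers are assumptions, not the module's real API:

```typescript
// Hypothetical sketch: deep-clone the input via structuredClone, then walk the
// clone and overwrite token-like fields so the backup is safe to persist.
const TOKEN_KEYS = new Set(["accessToken", "refreshToken", "idToken"]); // assumed field names

function redactTokens(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(redactTokens);
  if (value && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [key, child] of Object.entries(value as Record<string, unknown>)) {
      out[key] = TOKEN_KEYS.has(key) ? "[REDACTED]" : redactTokens(child);
    }
    return out;
  }
  return value;
}

function createPruneBackupPayload<T>(data: T): unknown {
  return redactTokens(structuredClone(data)); // clone first: the caller's object is untouched
}

const original = {
  accounts: [{ id: "a", accessToken: "secret", nested: { refreshToken: "r" } }],
};
const payload = createPruneBackupPayload(original) as typeof original;
console.log(payload.accounts[0].accessToken);  // [REDACTED]
console.log(original.accounts[0].accessToken); // secret (input unchanged)
```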
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.
Show a summary per file
| File | Description |
|---|---|
| lib/config.ts | Adds config mutation locking + persisted sync toggle; introduces Windows rename retry helper. |
| lib/schemas.ts | Extends plugin config schema with experimental.syncFromCodexMultiAuth.enabled. |
| lib/storage.ts | Adds flagged account helpers, import preview helper, and raw backup helper with Windows retry. |
| lib/sync-prune-backup.ts | New helper to create token-redacted, deep-cloned prune-backup payloads. |
| test/plugin-config.test.ts | Adds tests for config mutation lock/rename behavior and persisted sync toggle reads. |
| test/storage.test.ts | Adds tests for import prepare/subset enforcement, raw backup, and snapshot reuse. |
| test/sync-prune-backup.test.ts | Adds tests for token redaction and deep-clone isolation of prune payload. |
```ts
        await fs.unlink(CONFIG_LOCK_PATH).catch(() => undefined);
        continue;
      }
    } catch (statError) {
      const statCode = (statError as NodeJS.ErrnoException).code;
      if (statCode === "ENOENT") {
```
```ts
async function renameConfigWithWindowsRetry(sourcePath: string, destinationPath: string): Promise<void> {
  let lastError: NodeJS.ErrnoException | null = null;
  for (let attempt = 0; attempt < 5; attempt += 1) {
    try {
      await fs.rename(sourcePath, destinationPath);
      return;
    } catch (error) {
      const code = (error as NodeJS.ErrnoException).code;
      if (code === "EPERM" || code === "EBUSY") {
        lastError = error as NodeJS.ErrnoException;
        await new Promise((resolve) => setTimeout(resolve, 10 * 2 ** attempt));
        continue;
      }
      throw error;
    }
  }
```
lib/storage.ts (outdated)
```ts
export async function saveFlaggedAccounts(storage: FlaggedAccountStorageV1): Promise<void> {
export async function loadFlaggedAccounts(): Promise<FlaggedAccountStorageV1> {
  return withStorageLock(async () => loadFlaggedAccountsUnlocked());
```
```ts
export async function backupRawAccountsFile(filePath: string, force = true): Promise<void> {
  await withStorageLock(async () => {
    const resolvedPath = resolvePath(filePath);

    if (!force && existsSync(resolvedPath)) {
      throw new Error(`File already exists: ${resolvedPath}`);
    }

    await migrateLegacyProjectStorageIfNeeded(saveAccountsUnlocked);
    const storagePath = getStoragePath();
    if (!existsSync(storagePath)) {
      throw new Error("No accounts to back up");
    }
```
```ts
const deduplicatedAccounts = deduplicateAccountsForStorage(merged);
const imported = deduplicatedAccounts.length - existingAccounts.length;
const skipped = normalized.accounts.length - imported;
return Promise.resolve({
  imported,
  total: deduplicatedAccounts.length,
  skipped,
```
```ts
    await fs.mkdir(dirname(resolvedPath), { recursive: true });
    await copyFileWithWindowsRetry(storagePath, resolvedPath);
    await fs.chmod(resolvedPath, 0o600).catch((chmodErr) => {
```
Summary
- `codex-multi-auth` sync

Safety Notes

- rename retries cover `EBUSY`/`EPERM`, and backup permission-restriction failures are logged instead of being swallowed
- `prepare()` is enforced as a true subset of the imported accounts, including identity preservation, and is evaluated against cloned inputs so caller mutation cannot bypass validation

Validation

- `npm run typecheck`
- `npm run lint`
- `npx vitest run test/storage.test.ts test/plugin-config.test.ts test/sync-prune-backup.test.ts --reporter=dot`

Notes

- #75
- #77

Summary by CodeRabbit
New Features
Improvements
note: greptile review for oc-chatgpt-multi-auth. cite files like `lib/foo.ts:123`. confirm regression tests + windows concurrency/token redaction coverage.

Greptile Summary
this PR lays the config/storage foundation for the `codex-multi-auth` sync engine landing in #77. it adds a two-layer config mutation path (in-process promise-chain mutex + process-visible lockfile with stale-lock cleanup and bounded backoff), flagged-account transaction and snapshot helpers that reuse a single accounts read inside the storage lock, import preview support, and raw/token-redacted backup helpers. the `prepare()` subset guard and deep-clone isolation in the import path are solid.

issues found:
- `exportAccounts` TOCTOU race on `force=false` — `existsSync` is called outside any exclusive write, then `fs.writeFile` uses flag `w` (not `wx`). a concurrent exporter or any process creating the file between the check and the write silently clobbers it with no error. the fix is to use `fs.open(path, 'wx')` for atomic exclusive creation.
- `writePreImportBackupFile` missing post-rename chmod + warn log — `backupRawAccountsFile` calls `fs.chmod(0o600)` after the write and emits a `log.warn` when chmod fails (visible on Windows where `mode: 0o600` in `writeFile` is a no-op). `writePreImportBackupFile` does neither, so token-bearing pre-import backups land world-readable on Windows with no diagnostic log entry.
- missing vitest coverage for concurrent `withConfigMutationLock` serialization — the process-lock paths are well-exercised, but no test fires two concurrent `setSyncFromCodexMultiAuthEnabled` calls to verify the in-process mutex queues them correctly without a torn write.

Confidence Score: 3/5
- `exportAccounts` should be resolved before the sync engine lands on top
- `prepare()` subset enforcement and structuredClone isolation are correct. score is reduced because the `exportAccounts` TOCTOU is a logic bug (broken contract for `force=false`) and the missing chmod+log in `writePreImportBackupFile` leaves token-bearing backup files silently world-readable on Windows — the exact risk class called out in the repo's own safety notes

Important Files Changed
- `savePluginConfigMutation` with two-layer locking (in-process promise mutex + process-visible lockfile with stale-lock cleanup), temp-file rename, and `setSyncFromCodexMultiAuthEnabled`; the locking mechanism is solid but the in-process mutex lacks a concurrency regression test
- `exportAccounts` has a TOCTOU race on the `force=false` guard, and `writePreImportBackupFile` omits the post-rename chmod + warn logging that `backupRawAccountsFile` already has for Windows token safety
- `experimental.syncFromCodexMultiAuth.enabled` to `PluginConfigSchema`; minimal and correct
- `loadAccountAndFlaggedStorageSnapshot` and `withFlaggedAccountsTransaction`; no coverage for `exportAccounts` TOCTOU or pre-import backup chmod on Windows
- `setSyncFromCodexMultiAuthEnabled` process-lock paths; missing concurrent in-process mutex serialization test

Sequence Diagram
```mermaid
sequenceDiagram
    participant Caller
    participant withConfigMutationLock
    participant withConfigProcessLock
    participant ConfigFile
    Caller->>withConfigMutationLock: setSyncFromCodexMultiAuthEnabled(enabled)
    withConfigMutationLock->>withConfigMutationLock: queue behind previous promise
    withConfigMutationLock->>withConfigProcessLock: fn()
    withConfigProcessLock->>withConfigProcessLock: fs.mkdir (config dir)
    loop retry up to 5x
        withConfigProcessLock->>ConfigFile: open(.lock, "wx")
        alt EEXIST + stale
            withConfigProcessLock->>ConfigFile: unlink(.lock)
        else EEXIST + fresh
            withConfigProcessLock->>withConfigProcessLock: exponential backoff delay
        end
    end
    withConfigProcessLock->>ConfigFile: readFile (current config)
    withConfigProcessLock->>ConfigFile: writeFile (.pid.ts.tmp, mode 0o600)
    withConfigProcessLock->>ConfigFile: rename(.tmp → config)
    withConfigProcessLock->>ConfigFile: unlink(.lock)
    withConfigMutationLock->>withConfigMutationLock: release in-process mutex
    Note over Caller,ConfigFile: importAccounts flow
    Caller->>withAccountStorageTransaction: importAccounts(filePath, options, prepare?)
    withAccountStorageTransaction->>withAccountStorageTransaction: withStorageLock acquired
    withAccountStorageTransaction->>withAccountStorageTransaction: prepare() subset validation
    withAccountStorageTransaction->>ConfigFile: writePreImportBackupFile (mode 0o600, NO chmod post-rename)
    withAccountStorageTransaction->>ConfigFile: persist merged storage
    withAccountStorageTransaction->>withAccountStorageTransaction: release lock
```

Comments Outside Diff (4)
lib/config.ts, line 153-155 (link)

stale-lock `continue` eats retry budget

each stale-lock removal path does `continue`, which still increments `attempt` in the for-loop. with `CONFIG_LOCK_RETRY_ATTEMPTS = 5`, five consecutive stale-lock detections exhaust all retries without ever acquiring the lock — the caller gets a timeout error even though no live contention existed.

the fix is to avoid incrementing the attempt counter after a successful stale cleanup — either by not counting the attempt on a successful unlink/continue, or by resetting the attempt counter:
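One way to sketch that fix, with a simulated lock so the counting behavior is visible (the names and the simulation are illustrative, not the repo's code):

```typescript
// Hypothetical sketch: charge the bounded retry budget only for live
// contention, and bound stale cleanups separately so the loop still terminates.
// tryOpen() simulates the lockfile open; five stale locks precede a free slot.
let opens = 0;
function tryOpen(): "ok" | "locked-stale" | "locked-fresh" {
  opens += 1;
  return opens <= 5 ? "locked-stale" : "ok";
}

const MAX_ATTEMPTS = 5;
const MAX_STALE_CLEANUPS = 10;
let contendedAttempts = 0;
let staleCleanups = 0;
let acquired = false;

while (contendedAttempts < MAX_ATTEMPTS && staleCleanups <= MAX_STALE_CLEANUPS) {
  const result = tryOpen();
  if (result === "ok") {
    acquired = true;
    break;
  }
  if (result === "locked-stale") {
    staleCleanups += 1; // cleanup succeeded: retry without spending an attempt
    continue;
  }
  contendedAttempts += 1; // fresh lock held elsewhere: charge the budget
}

console.log(acquired, staleCleanups, contendedAttempts); // true 5 0
```

Five consecutive stale cleanups no longer exhaust the five-attempt budget, which is exactly the failure mode the comment describes.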
lib/storage.ts, line 1138-1139 (link)

`loadFlaggedAccounts` still triggers double-read of accounts file

`loadFlaggedAccountsUnlocked()` is called here without an `accountsSnapshot`. inside `removeOrphanedFlaggedAccounts`, `accountsSnapshot === undefined` causes a second `loadAccountsInternal(saveAccountsUnlocked)` — a second open of the token-bearing accounts file within the same storage-lock window.

this is the identical race fixed for `withFlaggedAccountsTransaction` (line 1155) and `loadAccountAndFlaggedStorageSnapshot` (line 1168), but not carried over here. on windows with av scanning, the second rapid open risks `EBUSY` with no retry guard.

fix matches what the other two callers already do:

a regression test mirroring the `accountReadCount === 1` check in "reuses the loaded accounts snapshot inside withFlaggedAccountsTransaction" should be added for this path too.
accountReadCount === 1check in"reuses the loaded accounts snapshot inside withFlaggedAccountsTransaction"should be added for this path too.lib/storage.ts, line 1295-1297 (link)TOCTOU race on
existsSyncwhenforce=falsewithStorageLockis a single-process promise-chain mutex; it does not prevent a separate OS process from creatingresolvedPathbetween the syncexistsSynccheck and thecopyFileWithWindowsRetrycall. on windows, a second backup process (or av move of a temp file) can create the destination in that window, and the raw token-bearing backup is silently overwritten even thoughforce=falsewas explicitly requested.use
fs.openwith thewxflag to make the "create only if absent" check atomic:a vitest covering the cross-process race (two concurrent
backupRawAccountsFile(path, false)calls) is missing.lib/storage.ts, line 1337-1349 (link)TOCTOU race on
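A sketch of the atomic `wx` approach, using a hypothetical `writeExclusive` helper rather than the module's real functions:

```typescript
import { promises as fs } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical sketch: replace the check-then-write pair with a single
// exclusive open, so "create only if absent" is atomic at the OS level.
async function writeExclusive(path: string, data: string, force: boolean): Promise<void> {
  const handle = await fs.open(path, force ? "w" : "wx", 0o600); // "wx" throws EEXIST
  try {
    await handle.writeFile(data);
  } finally {
    await handle.close();
  }
}

const dir = await fs.mkdtemp(join(tmpdir(), "export-"));
const target = join(dir, "accounts.json");

await writeExclusive(target, "{}", false); // first writer wins
let clobbered = true;
try {
  await writeExclusive(target, "overwritten", false); // second writer fails fast
} catch (error) {
  clobbered = (error as NodeJS.ErrnoException).code !== "EEXIST";
}
console.log(clobbered);                        // false
console.log(await fs.readFile(target, "utf8")); // {}
```

There is no window between check and write: the OS rejects the second creation with `EEXIST` instead of silently clobbering the first file.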
force=falseguardexistsSyncat line 1337 is checked outside any exclusive write, thenwithAccountStorageTransactionacquires the storage lock, thenfs.writeFileis called. between theexistsSyncand the finalwriteFilethere is a visible window (at minimum, the full duration of the storage transaction) where another process or concurrent call can create the file. when that happens,fs.writeFilewith its defaultwflag silently clobbers the file, breaking theforce=falsecontract with no error.on Windows this race is worse because antivirus or indexing can hold the file long enough for a second exporter to slip through.
the correct fix is to replace the
existsSynccheck +writeFilepair with an exclusiveopen('wx')call, which is atomic at the OS level:Prompt To Fix All With AI
Last reviewed commit: 2114491
Context used: