feat: exclusive audio solo logic and timeline simplification #134

mahdyarief wants to merge 2 commits into webadderall:main from feat/audio-editor
Conversation
- implemented exclusive soloing for master and audio regions
- removed legacy move/select modes from timeline
- enhanced audio track UI with waveform and fade indicators
- consolidated audio settings panel
| Cohort / File(s) | Summary |
|---|---|
| **Audio UI & Waveform util**<br>`src/components/video-editor/AudioSettingsPanel.tsx`, `src/utils/audioWaveform.ts` | New AudioSettingsPanel component (volume, mute/solo, fades, delete, waveform preview); new `generateWaveform(audioPath, samples?)` with in-memory cache and AudioContext decoding. |
| **Editor UI wiring & Settings**<br>`src/components/video-editor/SettingsPanel.tsx`, `src/components/video-editor/VideoEditor.tsx` | Added "Audio" settings section and AudioSettingsPanel integration; master/per-track audio state, history snapshot inclusion, Whisper model selection/progress, IPC listeners for auto-caption progress/chunks, and master audio Web Audio routing. |
| **Timeline + Item + Row + Wrapper**<br>`src/components/video-editor/timeline/TimelineEditor.tsx`, `src/components/video-editor/timeline/Item.tsx`, `src/components/video-editor/timeline/Row.tsx`, `src/components/video-editor/timeline/TimelineWrapper.tsx` | Added caption rows/items and audio item variant rendering (waveform, fade overlays); master and track audio controls (mute/solo/volume); selection and span update routing now include rowId; Row accepts a controls prop; pointer/drag/resizer gating updated. |
| **Types & Preferences & Persistence**<br>`src/components/video-editor/types.ts`, `src/components/video-editor/editorPreferences.ts`, `src/components/video-editor/projectPersistence.ts` | Extended types: AudioRegion (muted/soloed/fades), AnnotationType now includes blur, WhisperModel type and defaults; editor preferences and persisted project editor state now include master/audio prefs and whisperSelectedModel. |
| **Small UI imports/aux**<br>`src/components/video-editor/SettingsPanel.tsx` (icons, caption cue UI additions) | Added caption cue edit/seek UI, Indonesian caption option, generalized Whisper model callbacks and autoCaptionProgress handling. |
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant UI as Settings Panel / Timeline
    participant Video as Video Element
    participant WA as Web Audio API
    participant Region as Audio Region
    UI->>WA: initAudioContext() on first playback
    WA->>WA: create AudioContext & Master GainNode
    Video->>WA: route video audio through master gain node
    UI->>Region: Select audio region / request per-region node
    Region->>WA: create/connect per-region GainNode -> master
    UI->>WA: set volume/mute/solo/fade
    WA->>WA: apply gain automation (setTargetAtTime)
    WA->>Video: audio levels affect playback output
```
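The mute/solo resolution the diagram routes through `setTargetAtTime` can be sketched as pure helpers (illustrative code, not the PR's implementation; the names `TrackAudioState`, `toggleSoloExclusive`, and `effectiveGain` are assumptions):

```typescript
// Illustrative model of exclusive solo/mute state for audio tracks.
interface TrackAudioState {
  id: string;
  volume: number;   // user volume, 0..1
  muted: boolean;
  soloed: boolean;
}

// Exclusive solo: soloing one track clears solo on every other track;
// toggling the already-soloed track clears solo entirely.
function toggleSoloExclusive(tracks: TrackAudioState[], id: string): TrackAudioState[] {
  const target = tracks.find((t) => t.id === id);
  const next = !(target?.soloed ?? false);
  return tracks.map((t) => ({ ...t, soloed: t.id === id ? next : false }));
}

// Gain a track should play at: mute always wins; if any track is soloed,
// non-soloed tracks are silenced; otherwise the user volume applies.
function effectiveGain(tracks: TrackAudioState[], id: string): number {
  const track = tracks.find((t) => t.id === id);
  if (!track || track.muted) return 0;
  const anySolo = tracks.some((t) => t.soloed);
  if (anySolo && !track.soloed) return 0;
  return track.volume;
}
```

In a Web Audio graph this value would drive the per-track `GainNode` (e.g. `gain.gain.setTargetAtTime(value, ctx.currentTime, 0.01)`) rather than the media element's `volume`, keeping the master and per-region nodes the single source of loudness.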
```mermaid
sequenceDiagram
    participant User as User
    participant UI as Settings Panel
    participant IPC as Main Process
    participant State as Editor State
    User->>UI: Trigger auto-caption generation
    UI->>IPC: generateCaptions(startTimeMs, durationMs, model)
    IPC->>IPC: transcribe & chunk captions
    IPC-->>UI: onAutoCaptionProgress(percent)
    UI->>State: update progress UI
    IPC-->>UI: onAutoCaptionChunk(cues)
    UI->>State: append/sort/assign UUIDs to cues
    User->>UI: select/edit caption cue
    UI->>State: onAutoCaptionsChange(updatedCues)
```
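The append/sort step in the diagram mirrors the chunk reducer in `VideoEditor.tsx`; as a standalone sketch (cue shape simplified to the fields the reducer actually touches):

```typescript
interface CaptionCue {
  id: string;
  startMs: number;
  text: string;
}

// Merge an incoming chunk of cues into the existing list:
// drop duplicates by id, then keep the list sorted by start time.
// Returning `prev` unchanged when nothing is new preserves the array
// reference, which lets React skip a re-render.
function mergeCueChunk(prev: CaptionCue[], cues: CaptionCue[]): CaptionCue[] {
  const existingIds = new Set(prev.map((c) => c.id));
  const newCues = cues.filter((c) => !existingIds.has(c.id));
  if (newCues.length === 0) return prev;
  return [...prev, ...newCues].sort((a, b) => a.startMs - b.startMs);
}
```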
Estimated code review effort
🎯 4 (Complex) | ⏱️ ~75 minutes
Possibly related PRs
- feat: Advanced Video Editor Implementation, Native WGC Integration, and AI Auto-Captions #122 — Directly related: modifies the same audio/caption flows (AudioSettingsPanel, waveform generation, Web Audio routing, timeline/caption wiring).
- Revert "feat: Advanced Video Editor Implementation, Native WGC Integration, and AI Auto-Captions" #124 — Directly related: a revert/removal PR targeting the audio UI, waveform util, and accompanying audio/caption type and UI changes.
Poem
🐇 I hopped along the waveform trail,
toggled solo, watched volumes sail,
fades like morning mist prevail,
captions whispered, neatly scaled,
a rabbit cheers — the edits hail!
🚥 Pre-merge checks | ✅ 2 | ❌ 3

❌ Failed checks (1 warning, 2 inconclusive)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Linked Issues check | ❓ Inconclusive | The PR description links issue #120 and the testing guide aligns with exclusive solo behavior; however, the description does not explicitly confirm changelog updates despite the checklist indicating that requirement. | Confirm whether the changelog has been updated per the linked issue; if not, add an entry documenting the exclusive solo feature and audio UI enhancements. |
| Out of Scope Changes check | ❓ Inconclusive | The changeset adds audio-specific UI, waveform generation, and exclusive solo logic aligned with the stated objectives; however, modifications to blur annotation handling (the blur type addition and blurIntensity) appear tangential to the audio solo/timeline simplification goals. | Clarify whether the annotation blur enhancements are necessary for this PR or whether they should be split into a separate commit/PR to keep the focus on audio and timeline changes. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title accurately captures the two main features: exclusive audio solo logic (evident from solo handler updates and master selection state) and timeline simplification (removal of autoSuggestZoomsTrigger and keyboard shortcut logic). |
| Description check | ✅ Passed | The description covers purpose, motivation, type of change, and related issue; includes screenshots demonstrating the audio UI changes and a testing guide with verification steps for the solo functionality. |
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches
🧪 Generate unit tests (beta)
- Create PR with unit tests
⚔️ Resolve merge conflicts
- Resolve merge conflict in branch `feat/audio-editor`

Comment `@coderabbitai help` to get the list of available commands and usage tips.
Actionable comments posted: 7
🧹 Nitpick comments (2)
src/components/video-editor/SettingsPanel.tsx (2)
1819-1829: Consider extracting no-op callbacks to avoid unnecessary re-renders.

The inline empty arrow functions `() => {}` create new function references on every render, which may cause `AudioSettingsPanel` to re-render unnecessarily if it uses referential equality checks.

🔧 Proposed fix — define stable no-ops at module level

Add at module level (e.g., near the top of the file):

```ts
const NOOP = () => {};
```

Then use it in the master audio case:

```diff
 return (
   <AudioSettingsPanel
     audio={masterAudioMock}
-    onVolumeChange={onMasterAudioVolumeChange || (() => {})}
-    onMutedChange={onMasterAudioMutedChange || (() => {})}
-    onSoloedChange={onMasterAudioSoloedChange || (() => {})}
-    onFadeInMsChange={() => {}}
-    onFadeOutMsChange={() => {}}
-    onDelete={() => {}}
+    onVolumeChange={onMasterAudioVolumeChange ?? NOOP}
+    onMutedChange={onMasterAudioMutedChange ?? NOOP}
+    onSoloedChange={onMasterAudioSoloedChange ?? NOOP}
+    onFadeInMsChange={NOOP}
+    onFadeOutMsChange={NOOP}
+    onDelete={NOOP}
   />
 );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/video-editor/SettingsPanel.tsx` around lines 1819 - 1829, The inline empty arrow functions passed to AudioSettingsPanel (for props onFadeInMsChange, onFadeOutMsChange, onDelete and the fallbacks onMasterAudioVolumeChange/onMasterAudioMutedChange/onMasterAudioSoloedChange) create new references each render; define a single stable module-level no-op (e.g., const NOOP = () => {}) near the top of the file and replace all occurrences of () => {} in the masterAudioMock return with NOOP (and use onMasterAudioVolumeChange ?? NOOP, etc.) so the component receives stable function references and avoids unnecessary re-renders.
1469-1489: Avoid the `as any` cast; use the proper type.

The `as any` cast on line 1473 bypasses type safety. Cast to the explicit union type instead.

🔧 Proposed fix

```diff
 <Select
   value={autoCaptionSettings.selectedModel || "small"}
-  onValueChange={(value) => updateAutoCaptionSettings({ selectedModel: value as any })}
+  onValueChange={(value) => updateAutoCaptionSettings({ selectedModel: value as WhisperModelInfo["value"] })}
 >
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/video-editor/SettingsPanel.tsx` around lines 1469 - 1489, Replace the unsafe `as any` cast by using an explicit union type derived from WHISPER_MODEL_OPTIONS: define a type like WhisperModel = typeof WHISPER_MODEL_OPTIONS[number]["value"] (or import an existing model union), then change the Select onValueChange to cast to that type: onValueChange={(value) => updateAutoCaptionSettings({ selectedModel: value as WhisperModel })}; also ensure autoCaptionSettings.selectedModel is typed as WhisperModel | undefined so Select value and updateAutoCaptionSettings (the selectedModel field) use the proper union type instead of any.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/video-editor/timeline/TimelineEditor.tsx`:
- Around line 1690-1697: The master waveform is receiving a double-encoded
file:// URL because videoAudio.audioPath is passed as the already-encoded
videoPath into the master-waveform path which calls toFileUrl(); update the
TimelineEditor.tsx construction of videoAudio (the videoAudio constant / item
with id 'original-video-audio') to pass a raw filesystem path (or normalize with
fromFileUrl(videoPath) before assigning to audioPath) so generateWaveform() /
toFileUrl() receives a plain path; ensure audioPath contains the unencoded
source path expected by generateWaveform() rather than the existing file:// URL.
In `@src/components/video-editor/VideoEditor.tsx`:
- Around line 1162-1166: The master-audio state values (masterAudioMuted,
masterAudioSoloed, masterAudioVolume, audioTrackVolume, and isMasterSelected)
are being read into React state (via setMasterAudioMuted, setMasterAudioSoloed,
setMasterAudioVolume, setAudioTrackVolume, setIsMasterSelected) but are not
included in the editor snapshot/serialization or in the change-detection logic,
so they don't persist and don't mark hasUnsavedChanges. Update the editor
serialization and snapshot/save path (the function that builds the persisted
editor object and the routine that compares currentEditor vs savedEditor for
unsaved changes) to include those exact keys (masterAudioMuted,
masterAudioSoloed, masterAudioVolume, audioTrackVolume, isMasterSelected), and
ensure any saveEditor/saveSnapshot method writes them out and the diff/equality
check considers them when setting hasUnsavedChanges.
- Around line 2484-2488: Blur annotations are currently selectable
(VideoEditor.tsx sets updatedRegion.blurIntensity) but aren’t rendered or
exported because the codebase only handles text/image/figure; add support for
"blur" across the three places: update annotationRenderer (handle type ===
"blur" to render a blurred rectangle/mask over the region using CSS filter:
blur(...) or canvas blurring so it appears in the editor and exports), add UI
controls in AnnotationSettingsPanel.tsx to edit blurIntensity (e.g.,
slider/input bound to region.blurIntensity) and any size/shape settings, and
update AnnotationOverlay.tsx to draw the blur preview in the overlay layer
(matching region position/shape and current blurIntensity) and ensure the
export/serialization path includes type: "blur" and blurIntensity so blurred
regions persist in exports.
- Around line 1493-1511: When autoCaptionSettings.selectedModel changes, the
effect uses setWhisperModelPath((currentPath) => currentPath ?? result.path ??
null) which preserves a previous non-null whisperModelPath and can point
generation at the wrong file; change the logic so that after calling
window.electronAPI.getWhisperModelStatus(autoCaptionSettings.selectedModel) you
explicitly clear the path when result.exists is false or result.path is missing
by calling setWhisperModelPath(null), and when result.exists && result.path set
the path to result.path and update setWhisperModelDownloadStatus and
setWhisperModelDownloadProgress as before; reference setWhisperModelPath,
getWhisperModelStatus, autoCaptionSettings.selectedModel,
setWhisperModelDownloadStatus, and setWhisperModelDownloadProgress.
- Around line 1475-1491: The VideoEditor useEffect calls
window.electronAPI.onAutoCaptionProgress and onAutoCaptionChunk but preload.ts
doesn't expose those APIs, causing a runtime throw; update electron/preload.ts
to expose two functions named onAutoCaptionProgress and onAutoCaptionChunk via
contextBridge.exposeInMainWorld (on the electronAPI object) that wire to
ipcRenderer.on for the corresponding channel names and return an unsubscribe
function that calls ipcRenderer.removeListener, and ensure their TypeScript
signatures match the payload shape used in VideoEditor (payload: { progress:
number } and ({ cues }) respectively) so the listeners from VideoEditor.tsx can
be registered without error.
- Around line 2652-2673: The audio elements are being forced with
audio.crossOrigin = "anonymous", which breaks createMediaElementSource() for
file:// URLs from toFileUrl(); update the logic in the block that
creates/initializes the Audio (the audio variable used with toFileUrl,
existing.set, and later routed via createMediaElementSource and
audioRegionNodesRef) to only set crossOrigin when the src is not a file:// URL
(e.g., check expectedSrc.startsWith('file://') and skip setting crossOrigin for
file URLs) or remove the unconditional assignment entirely so local file
playback can be routed through Web Audio API.
- Around line 3590-3596: The video element is being double-scaled because its
DOM volume is set using previewVolume * masterAudioVolume while the same
multiplier is applied via masterGainRef (created from createMediaElementSource);
change the DOM volume to be 1 (or 0 when muted/solo dictates) and let the
WebAudio gain node handle the scaling. Concretely, update the volume prop logic
(currently using previewVolume * masterAudioVolume) to use 1 when audible and 0
when muted/soloed (e.g. volume={masterAudioMuted || (hasGlobalSolo &&
!masterAudioSoloed) ? 0 : 1}), leaving masterGainRef to apply masterAudioVolume
* previewVolume; reference masterGainRef, createMediaElementSource,
masterAudioVolume, previewVolume, masterAudioMuted, masterAudioSoloed and the
audioRegions solo check to locate and fix the code.
---
Nitpick comments:
In `@src/components/video-editor/SettingsPanel.tsx`:
- Around line 1819-1829: The inline empty arrow functions passed to
AudioSettingsPanel (for props onFadeInMsChange, onFadeOutMsChange, onDelete and
the fallbacks
onMasterAudioVolumeChange/onMasterAudioMutedChange/onMasterAudioSoloedChange)
create new references each render; define a single stable module-level no-op
(e.g., const NOOP = () => {}) near the top of the file and replace all
occurrences of () => {} in the masterAudioMock return with NOOP (and use
onMasterAudioVolumeChange ?? NOOP, etc.) so the component receives stable
function references and avoids unnecessary re-renders.
- Around line 1469-1489: Replace the unsafe `as any` cast by using an explicit
union type derived from WHISPER_MODEL_OPTIONS: define a type like WhisperModel =
typeof WHISPER_MODEL_OPTIONS[number]["value"] (or import an existing model
union), then change the Select onValueChange to cast to that type:
onValueChange={(value) => updateAutoCaptionSettings({ selectedModel: value as
WhisperModel })}; also ensure autoCaptionSettings.selectedModel is typed as
WhisperModel | undefined so Select value and updateAutoCaptionSettings (the
selectedModel field) use the proper union type instead of any.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 7fa80560-67f7-4a64-a51c-685661d2f094
📒 Files selected for processing (7)
- src/components/video-editor/AudioSettingsPanel.tsx
- src/components/video-editor/SettingsPanel.tsx
- src/components/video-editor/VideoEditor.tsx
- src/components/video-editor/timeline/Item.tsx
- src/components/video-editor/timeline/TimelineEditor.tsx
- src/components/video-editor/types.ts
- src/utils/audioWaveform.ts
```ts
const videoAudio: TimelineRenderItem[] = videoPath && totalMs > 0 ? [{
  id: 'original-video-audio',
  rowId: ORIGINAL_AUDIO_ROW_ID,
  span: { start: 0, end: totalMs },
  label: 'Original Audio',
  variant: 'audio',
  audioPath: videoPath,
  muted: masterAudioMuted
```
Pass the raw source path into the master-waveform code path.
videoPath here is already a file:// URL from VideoEditor.tsx, but generateWaveform() treats audioPath as a filesystem path and runs it through toFileUrl() again. The master track therefore fetches a malformed file:///file%3A... URL and renders an empty waveform. Use the raw source path here, or normalize with fromFileUrl(videoPath) first.
Suggested fix
```diff
-import { toFileUrl } from "../projectPersistence";
+import { fromFileUrl, toFileUrl } from "../projectPersistence";

 const videoAudio: TimelineRenderItem[] = videoPath && totalMs > 0 ? [{
   id: 'original-video-audio',
   rowId: ORIGINAL_AUDIO_ROW_ID,
   span: { start: 0, end: totalMs },
   label: 'Original Audio',
   variant: 'audio',
-  audioPath: videoPath,
+  audioPath: fromFileUrl(videoPath),
   muted: masterAudioMuted
 }] : [];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/timeline/TimelineEditor.tsx` around lines 1690 -
1697, The master waveform is receiving a double-encoded file:// URL because
videoAudio.audioPath is passed as the already-encoded videoPath into the
master-waveform path which calls toFileUrl(); update the TimelineEditor.tsx
construction of videoAudio (the videoAudio constant / item with id
'original-video-audio') to pass a raw filesystem path (or normalize with
fromFileUrl(videoPath) before assigning to audioPath) so generateWaveform() /
toFileUrl() receives a plain path; ensure audioPath contains the unencoded
source path expected by generateWaveform() rather than the existing file:// URL.
```ts
setMasterAudioMuted(normalizedEditor.masterAudioMuted);
setMasterAudioSoloed(normalizedEditor.masterAudioSoloed);
setMasterAudioVolume(normalizedEditor.masterAudioVolume);
setAudioTrackVolume(normalizedEditor.audioTrackVolume);
setIsMasterSelected(normalizedEditor.isMasterSelected ?? false);
```
These master-audio fields load, but they still don't save.
This now reads masterAudioMuted, masterAudioSoloed, masterAudioVolume, and audioTrackVolume back from normalizedEditor, but the persisted editor snapshot still omits them. Saving and reopening a project will reset those controls, and hasUnsavedChanges will stay false when only those values change.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/VideoEditor.tsx` around lines 1162 - 1166, The
master-audio state values (masterAudioMuted, masterAudioSoloed,
masterAudioVolume, audioTrackVolume, and isMasterSelected) are being read into
React state (via setMasterAudioMuted, setMasterAudioSoloed,
setMasterAudioVolume, setAudioTrackVolume, setIsMasterSelected) but are not
included in the editor snapshot/serialization or in the change-detection logic,
so they don't persist and don't mark hasUnsavedChanges. Update the editor
serialization and snapshot/save path (the function that builds the persisted
editor object and the routine that compares currentEditor vs savedEditor for
unsaved changes) to include those exact keys (masterAudioMuted,
masterAudioSoloed, masterAudioVolume, audioTrackVolume, isMasterSelected), and
ensure any saveEditor/saveSnapshot method writes them out and the diff/equality
check considers them when setting hasUnsavedChanges.
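One way to close this gap (a sketch under assumptions — the actual snapshot shape in `VideoEditor.tsx` is not shown in this diff) is to drive both serialization and the dirty check from a single key list, so a key can never be saved without also participating in change detection:

```typescript
// Hypothetical subset of editor state relevant to persistence.
interface EditorAudioState {
  masterAudioMuted: boolean;
  masterAudioSoloed: boolean;
  masterAudioVolume: number;
  audioTrackVolume: number;
  isMasterSelected: boolean;
}

const PERSISTED_AUDIO_KEYS = [
  "masterAudioMuted",
  "masterAudioSoloed",
  "masterAudioVolume",
  "audioTrackVolume",
  "isMasterSelected",
] as const;

// Generic pick keeps the snapshot and the diff on the same key list.
function pick<T, K extends keyof T>(obj: T, keys: readonly K[]): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const key of keys) out[key] = obj[key];
  return out;
}

// Slice to merge into the persisted editor object.
function audioSnapshot(state: EditorAudioState) {
  return pick(state, PERSISTED_AUDIO_KEYS);
}

// Feed into the hasUnsavedChanges comparison.
function audioStateDiffers(a: EditorAudioState, b: EditorAudioState): boolean {
  return PERSISTED_AUDIO_KEYS.some((key) => a[key] !== b[key]);
}
```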
```ts
useEffect(() => {
  const unlistenProgress = window.electronAPI.onAutoCaptionProgress((payload: { progress: number }) => {
    setAutoCaptionProgress(payload.progress);
  });
  const unlistenChunk = window.electronAPI.onAutoCaptionChunk(({ cues }) => {
    setAutoCaptions((prev) => {
      const existingIds = new Set(prev.map((c) => c.id));
      const newCues = cues.filter((c) => !existingIds.has(c.id));
      if (newCues.length === 0) return prev;
      return [...prev, ...newCues].sort((a, b) => a.startMs - b.startMs);
    });
  });
  return () => {
    unlistenProgress();
    unlistenChunk();
  };
}, []);
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Verify that the preload bridge exposes the renderer APIs used here.
rg -n -C2 'contextBridge\.exposeInMainWorld|generateAutoCaptions|onAutoCaptionProgress|onAutoCaptionChunk' electron/preload.ts
```

Repository: webadderall/Recordly
Length of output: 454

🏁 Script executed:

```shell
#!/bin/bash
# Extract the entire electronAPI object definition to see all exposed methods
sed -n '/contextBridge\.exposeInMainWorld("electronAPI"/,/^}/p' electron/preload.ts | head -100
```

Repository: webadderall/Recordly
Length of output: 3617

🏁 Script executed:

```shell
#!/bin/bash
# Search for onAutoCaptionProgress and onAutoCaptionChunk specifically
rg -n 'onAutoCaptionProgress|onAutoCaptionChunk' electron/preload.ts
```

Repository: webadderall/Recordly
Length of output: 46
Bridge the new auto-caption listeners through preload.ts.
electron/preload.ts does not expose onAutoCaptionProgress or onAutoCaptionChunk. Calling them on lines 1477 and 1480 will throw on first effect run, preventing incremental caption updates from initializing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/VideoEditor.tsx` around lines 1475 - 1491, The
VideoEditor useEffect calls window.electronAPI.onAutoCaptionProgress and
onAutoCaptionChunk but preload.ts doesn't expose those APIs, causing a runtime
throw; update electron/preload.ts to expose two functions named
onAutoCaptionProgress and onAutoCaptionChunk via contextBridge.exposeInMainWorld
(on the electronAPI object) that wire to ipcRenderer.on for the corresponding
channel names and return an unsubscribe function that calls
ipcRenderer.removeListener, and ensure their TypeScript signatures match the
payload shape used in VideoEditor (payload: { progress: number } and ({ cues })
respectively) so the listeners from VideoEditor.tsx can be registered without
error.
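The preload shape this asks for looks roughly like the following (a sketch: `EventEmitter` stands in for Electron's `ipcRenderer` so the pattern runs outside Electron, the channel name `auto-caption-progress` is an assumption, and the real code would attach the function to the `electronAPI` object passed to `contextBridge.exposeInMainWorld`):

```typescript
import { EventEmitter } from "node:events";

// Stand-in for ipcRenderer. In the real preload this would be
// require("electron").ipcRenderer, and onAutoCaptionProgress would be
// one property of the object exposed via contextBridge.
const ipc = new EventEmitter();

type CaptionProgress = { progress: number };

// Subscribe to a channel and return an unsubscribe function, mirroring
// ipcRenderer.on / ipcRenderer.removeListener. Wrapping the callback keeps
// the IPC event object out of the renderer-facing signature.
function onAutoCaptionProgress(cb: (payload: CaptionProgress) => void): () => void {
  const listener = (_event: unknown, payload: CaptionProgress) => cb(payload);
  ipc.on("auto-caption-progress", listener);
  return () => {
    ipc.removeListener("auto-caption-progress", listener);
  };
}
```

An `onAutoCaptionChunk` bridge would follow the same pattern with a `{ cues }` payload, matching the cleanup-on-unmount usage in the `useEffect` above.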
```diff
 useEffect(() => {
   void (async () => {
-    const result = await window.electronAPI.getWhisperSmallModelStatus();
+    const result = await window.electronAPI.getWhisperModelStatus(
+      autoCaptionSettings.selectedModel,
+    );
     if (!result.success) {
       return;
     }

     if (result.exists && result.path) {
       setDownloadedWhisperModelPath(result.path);
       setWhisperModelPath((currentPath) => currentPath ?? result.path ?? null);
       setWhisperModelDownloadStatus("downloaded");
       setWhisperModelDownloadProgress(100);
       return;
     } else {
       setWhisperModelDownloadStatus("idle");
       setWhisperModelDownloadProgress(0);
     }

     setDownloadedWhisperModelPath(null);
     setWhisperModelDownloadStatus("idle");
     setWhisperModelDownloadProgress(0);
   })();
-
-  return () => unsubscribe?.();
-}, []);
+}, [autoCaptionSettings.selectedModel]);
```
Clear stale whisperModelPath when selectedModel changes.
Lines 1502-1508 keep the previous non-null path via currentPath ?? result.path, and the miss path never clears it. Switching models can therefore leave generation pointed at the old file while the UI says the new model is selected or idle.
Suggested fix
```diff
 if (result.exists && result.path) {
-  setWhisperModelPath((currentPath) => currentPath ?? result.path ?? null);
+  setWhisperModelPath(result.path);
   setWhisperModelDownloadStatus("downloaded");
   setWhisperModelDownloadProgress(100);
 } else {
+  setWhisperModelPath(null);
   setWhisperModelDownloadStatus("idle");
   setWhisperModelDownloadProgress(0);
 }
```

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
useEffect(() => {
  void (async () => {
    const result = await window.electronAPI.getWhisperModelStatus(
      autoCaptionSettings.selectedModel,
    );
    if (!result.success) {
      return;
    }
    if (result.exists && result.path) {
      setWhisperModelPath(result.path);
      setWhisperModelDownloadStatus("downloaded");
      setWhisperModelDownloadProgress(100);
    } else {
      setWhisperModelPath(null);
      setWhisperModelDownloadStatus("idle");
      setWhisperModelDownloadProgress(0);
    }
  })();
}, [autoCaptionSettings.selectedModel]);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/VideoEditor.tsx` around lines 1493 - 1511, When
autoCaptionSettings.selectedModel changes, the effect uses
setWhisperModelPath((currentPath) => currentPath ?? result.path ?? null) which
preserves a previous non-null whisperModelPath and can point generation at the
wrong file; change the logic so that after calling
window.electronAPI.getWhisperModelStatus(autoCaptionSettings.selectedModel) you
explicitly clear the path when result.exists is false or result.path is missing
by calling setWhisperModelPath(null), and when result.exists && result.path set
the path to result.path and update setWhisperModelDownloadStatus and
setWhisperModelDownloadProgress as before; reference setWhisperModelPath,
getWhisperModelStatus, autoCaptionSettings.selectedModel,
setWhisperModelDownloadStatus, and setWhisperModelDownloadProgress.
```ts
} else if (type === "blur") {
  updatedRegion.content = "";
  if (updatedRegion.blurIntensity === undefined) {
    updatedRegion.blurIntensity = DEFAULT_BLUR_INTENSITY;
  }
```
blur is still missing preview/export/UI support.
These lines make blur annotations selectable, but the provided AnnotationOverlay.tsx, annotationRenderer.ts, and AnnotationSettingsPanel.tsx snippets still only handle text, image, and figure. Right now a blur annotation renders as nothing in the editor and disappears in exports.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/VideoEditor.tsx` around lines 2484 - 2488, Blur
annotations are currently selectable (VideoEditor.tsx sets
updatedRegion.blurIntensity) but aren’t rendered or exported because the
codebase only handles text/image/figure; add support for "blur" across the three
places: update annotationRenderer (handle type === "blur" to render a blurred
rectangle/mask over the region using CSS filter: blur(...) or canvas blurring so
it appears in the editor and exports), add UI controls in
AnnotationSettingsPanel.tsx to edit blurIntensity (e.g., slider/input bound to
region.blurIntensity) and any size/shape settings, and update
AnnotationOverlay.tsx to draw the blur preview in the overlay layer (matching
region position/shape and current blurIntensity) and ensure the
export/serialization path includes type: "blur" and blurIntensity so blurred
regions persist in exports.
```ts
  audio = new Audio();
  audio.preload = "auto";
  // Ensure cross-origin is handled to avoid CORS issues with AudioContext
  audio.crossOrigin = "anonymous";
  existing.set(region.id, audio);
}
const expectedSrc = toFileUrl(region.audioPath);
if (audio.src !== expectedSrc) {
  audio.src = expectedSrc;
}
audio.volume = Math.max(0, Math.min(1, region.volume * previewVolume));

// Route through Web Audio API if ready
if (hasAudioContext && !audioRegionNodesRef.current.has(region.id)) {
  try {
    const source = audioContextRef.current!.createMediaElementSource(audio);
    const gain = audioContextRef.current!.createGain();
    source.connect(gain);
    gain.connect(masterGainRef.current!);
    audioRegionNodesRef.current.set(region.id, { source, gain });
    console.log(`[VideoEditor] Audio region ${region.id} routed through GainNode`);
  } catch (e) {
    console.warn(`[VideoEditor] Failed to route audio ${region.id}:`, e);
  }
```
🧩 Analysis chain
🏁 Script executed:

```shell
cd src/components/video-editor && wc -l VideoEditor.tsx
```

Repository: webadderall/Recordly
Length of output: 84

🏁 Script executed:

```shell
cd src/components/video-editor && sed -n '2640,2690p' VideoEditor.tsx
```

Repository: webadderall/Recordly
Length of output: 2015

🏁 Script executed:

```shell
# Find the toFileUrl function to understand what URLs it generates
rg 'toFileUrl' --type ts --type tsx -B 2 -A 5
```

Repository: webadderall/Recordly
Length of output: 91

🏁 Script executed:

```shell
# Check if there are other references to crossOrigin in the codebase
rg 'crossOrigin' --type ts --type tsx
```

Repository: webadderall/Recordly
Length of output: 91

🏁 Script executed:

```shell
rg 'function toFileUrl|const toFileUrl|export.*toFileUrl' -A 5
```

Repository: webadderall/Recordly
Length of output: 610

🏁 Script executed:

```shell
rg 'toFileUrl' -B 2 -A 3
```

Repository: webadderall/Recordly
Length of output: 13105

🏁 Script executed:

```shell
# Search for file URL handling in the codebase
rg 'file://' -B 2 -A 2
```

Repository: webadderall/Recordly
Length of output: 7129
🌐 Web query:

> file:// URL crossOrigin anonymous Chromium Electron CORS

💡 Result:

In Chromium (and thus Electron), pages loaded from file:// URLs have a null origin. Cross-origin requests from a null origin, such as a fetch to an http(s):// URL, are blocked by the Same-Origin Policy with the error "Cross origin requests are only supported for protocol schemes: http, https, data, chrome, chrome-extension", because CORS negotiation requires HTTP/HTTPS protocols and a proper origin. The `crossorigin="anonymous"` attribute on elements like `<script>`, or a CORS-mode fetch, triggers CORS checks (for error reporting or credentials), which fail for file:// origins since no Origin header is sent and servers cannot respond with a matching Access-Control-Allow-Origin. This behavior is by design for security: file:// pages should not access remote resources without explicit flags. In an Electron BrowserWindow, such blocked requests may not appear in the DevTools Network tab or may surface only as CORS errors.

Workarounds (development only, insecure for production):

- Set `webPreferences: { webSecurity: false }` (disables SOP/CORS but has security risks; since Electron 9 this may also require `app.commandLine.appendSwitch('disable-features', 'OutOfBlinkCors')` for full effect).
- Use a local HTTP server (e.g., via Electron's protocol handler or an external server like http-server) instead of file://.
- Intercept responses with `session.webRequest.onHeadersReceived` to inject CORS headers like `Access-Control-Allow-Origin: *`.
- For custom protocols, register with `privileges: { corsEnabled: true }`.

Official Electron docs recommend avoiding `webSecurity: false` in production and using preload scripts/IPC for secure renderer-main communication. There is no direct support for `crossorigin="anonymous"` with file://. Sources confirm that file:// lacks a proper origin (null), blocking CORS-enabled requests; Electron inherits Chromium's behavior, and DevTools may not show all blocked requests.
Citations:
- 1: [Bug]: allowRendererProcessReuse changes CORS behavior for file:// URLs on second navigation electron/electron#28641
- 2: https://pratikpc.medium.com/bypassing-cors-with-electron-ab7eaf331605
- 3: https://stackoverflow.com/questions/41069330/with-script-crossorigin-anonymous-why-is-a-script-blocked-by-cors-policy
- 4: https://stackoverflow.com/questions/33338767/cors-request-from-a-file-url-for-a-http-url
- 5: Electron 9.0.0 webSecurity option no longer disables CORS electron/electron#23664
- 6: https://www.chromium.org/Home/loading/oor-cors/
- 7: https://chromium.googlesource.com/chromium/src/+/HEAD/android_webview/docs/cors-and-webview-api.md
- 8: https://stackoverflow.com/questions/10752055/cross-origin-requests-are-only-supported-for-http-error-when-loading-a-local
- 9: https://www.descope.com/blog/post/cors-errors
- 10: https://stackoverflow.com/questions/44205831/using-electron-to-access-cross-origin-resources
- 11: No 'Access-Control-Allow-Origin' error on for origin 'file://' electron/electron#1990
- 12: https://stackoverflow.com/questions/79797517/why-does-cors-apply-to-local-files-and-is-there-a-way-to-just-disable-it
- 13: https://www.electronjs.org/docs/latest/api/structures/web-preferences
- 14: https://www.electronjs.org
- 15: https://chromium.googlesource.com/chromium/src/%2B/02e19392c6c70a69a6dcd36365a26bbc1c167113/services/network/cors/cors_url_loader.cc
- 16: https://chromium.googlesource.com/chromium/blink/+/HEAD/Source/core/fetch/CrossOriginAccessControl.cpp?autodive=0%2F%2F%2F%2F%2F%2F
- 17: https://chromium.googlesource.com/chromium/src/+/refs/heads/main/services/network/public/cpp/cors/cors.cc
- 18: https://chromium.googlesource.com/chromium/src/+/67e6fceed93e33161aa6c63af183424594ae688f/services/network/cross_origin_read_blocking.cc
- 19: [Bug]: CORS on private network ( fetch to localhost url fails due to CORS ) electron/electron#36445
- 20: https://stackoverflow.com/questions/79352732/do-electron-apps-enforce-cors-restrictions-in-the-renderer-process
- 21: Insecure CORS Always Enabled for Buffer Protocols electron/electron#21676
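One of the workarounds above, injecting CORS headers via session.webRequest.onHeadersReceived, can be sketched as follows. This is a hypothetical sketch, not code from this PR; the header merge is kept in a pure helper so the logic stands on its own, and the Electron wiring appears only in comments:

```typescript
// Merge permissive CORS headers into a response-header map.
// Pure helper: no Electron dependency, so it can be exercised in isolation.
function withCorsHeaders(
  headers: Record<string, string[]>,
): Record<string, string[]> {
  return {
    ...headers,
    "Access-Control-Allow-Origin": ["*"],
    "Access-Control-Allow-Methods": ["GET, OPTIONS"],
  };
}

// Hypothetical Electron wiring (main process):
//
//   session.defaultSession.webRequest.onHeadersReceived((details, callback) => {
//     callback({ responseHeaders: withCorsHeaders(details.responseHeaders ?? {}) });
//   });
```

Note that injecting `Access-Control-Allow-Origin: *` loosens security for every response it touches, which is why the source lists it as a development-only workaround.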
🌐 Web query:
Web Audio API createMediaElementSource file:// URL CORS issues
💡 Result:
Using the Web Audio API's createMediaElementSource with file:// URLs for local files encounters security restrictions equivalent to CORS issues. Browsers treat all file:// resources as cross-origin from the HTML page loaded via file://, even if they are in the same directory. This causes createMediaElementSource to output silence (zeroes), with console warnings like "The HTMLMediaElement passed to createMediaElementSource has a cross-origin resource, the node will output silence".
Key evidence:
- Local files via file:// are considered different origins, triggering the Web Audio API's CORS-like restrictions on MediaElementAudioSourceNode (results 1, 7).
- Firefox, Safari, and Chrome all exhibit this; it works inconsistently or not at all offline without a server (result 7: "local media files (i.e. file:// URLs) are always treated as being in a different domain").
- MDN docs confirm createMediaElementSource requires a compliant HTMLMediaElement, subject to CORS (result 8), but file:// inherently fails this.
Workarounds:
1. Serve files via a local HTTP server (e.g., Python SimpleHTTPServer) so URLs are http://localhost/ and therefore same-origin, with no CORS issues.
2. For user-selected files: use a file input, create a blob: URL (URL.createObjectURL(file)), load it into the element, then call createMediaElementSource; blob: URLs are same-origin (results 6, 10, 17).
3. Do not set src to a direct file:// path in the media element; crossorigin="anonymous" is ineffective for file://.
No changes noted as of 2026; this remains browser security policy.
Citations:
- 1: https://stackoverflow.com/questions/59016637/the-htmlmediaelement-passed-to-createmediaelementsource-has-a-cross-origin-resou
- 2: https://stackoverflow.com/questions/31083704/mediaelementaudiosource-outputs-zeroes-due-to-cors-access-restrictions
- 3: A way to a) detect if MediaElementAudioSourceNode is CORS-restricted & b) revert createMediaElementSource WebAudio/web-audio-api#2453
- 4: https://stackoverflow.com/questions/47614878/createmediaelementsource-from-an-audio-file-input
- 5: https://stackoverflow.com/questions/28383281/web-audio-api-createmediaelementsource-breaks-the-audio-tag
- 6: https://developer.mozilla.org/en-US/docs/Web/API/AudioContext/createMediaElementSource
- 7: https://stackoverflow.com/questions/39187924/html5-audio-createmediaelementsource-breaks-audio-output
- 8: https://stackoverflow.com/questions/20180550/firefox-webaudio-createmediaelementsource-not-working
- 9: https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS
- 10: https://stackoverflow.com/questions/14074833/using-local-file-for-web-audio-api-in-javascript
🌐 Web query:
Electron file protocol crossOrigin="anonymous" audio element
💡 Result:
In Electron apps using the file:// protocol (e.g., loading local HTML files), loading local audio files in an <audio> element via file:/// paths often encounters CORS restrictions, especially in cross-origin contexts or with certain security settings. The crossOrigin="anonymous" attribute on the <audio> element triggers a CORS check, which fails for file:// resources because they lack proper CORS headers like Access-Control-Allow-Origin.
Key findings:
- A direct file:// audio src works in some cases with webSecurity: false in the BrowserWindow webPreferences, but this disables the same-origin policy (SOP) and is discouraged for security reasons (it exposes the app to risks like arbitrary file access).
- Official recommendation: avoid the file:// protocol entirely. Instead, register a custom protocol (e.g., 'app://' or 'media://') using protocol.registerSchemesAsPrivileged before the app's ready event, then handle it with protocol.handle to serve local files via net.fetch(pathToFileURL(localPath)).
- For audio streaming, include stream: true in the privileges.
- Custom protocols can be configured with corsEnabled: true if needed, but for local files, proper privileges (secure: true, supportFetchAPI: true, stream: true) avoid these issues without disabling webSecurity.
- crossOrigin="anonymous" is unnecessary and problematic with file:// or custom local protocols; omit it, or use a custom protocol to avoid CORS entirely.
Example code (main process; note that registerSchemesAsPrivileged must run before the ready event, so it sits outside whenReady):
const { app, protocol, net } = require('electron');
const { pathToFileURL } = require('node:url');
const path = require('node:path');

protocol.registerSchemesAsPrivileged([
  {
    scheme: 'media',
    privileges: {
      secure: true,
      standard: true,
      supportFetchAPI: true,
      stream: true // Required for <audio>/<video>
    }
  }
]);

app.whenReady().then(() => {
  protocol.handle('media', (request) => {
    const filePath = request.url.slice('media://'.length);
    return net.fetch(pathToFileURL(path.normalize(filePath)).toString());
  });
  // Create window...
});

In the renderer: <audio src="media://path/to/audio.mp3"></audio> (no crossOrigin needed). This works with webSecurity: true (the default) and is secure for local files.
Citations:
- 1: https://www.electronjs.org/docs/api/protocol/
- 2: Electron should be able to load local resources with enabled webSecurity electron/electron#23393
- 3: [Bug]: protocol.handle() is not working with file protocol electron/electron#39031
- 4: https://www.electronjs.org/docs/latest/tutorial/security
- 5: https://stackoverflow.com/questions/57518133/play-local-audio-files-in-my-app-without-using-the-input-type-file
Remove crossOrigin="anonymous" from file:// audio elements.
Local audio files loaded via toFileUrl() produce file:// URLs. Setting crossOrigin="anonymous" on these elements triggers CORS restrictions in Chromium/Electron, causing createMediaElementSource() to output silence. This breaks Web Audio routing for solo, fades, and gains >100%, forcing fallback to native audio.volume (capped at 100%).
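A minimal sketch of the suggested guard (names here are hypothetical; the real element setup lives in the block around toFileUrl in VideoEditor.tsx):

```typescript
// file:// URLs have a null origin in Chromium, so setting
// crossOrigin = "anonymous" on them makes createMediaElementSource()
// output silence. Only opt into CORS for sources that can negotiate it.
function shouldSetCrossOrigin(src: string): boolean {
  return !src.startsWith("file://");
}

// Hypothetical usage at the element-creation site:
//
//   if (shouldSetCrossOrigin(expectedSrc)) {
//     audio.crossOrigin = "anonymous";
//   }
//   audio.src = expectedSrc;
```

Removing the attribute unconditionally would also work if every source in this code path is a local file.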
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/VideoEditor.tsx` around lines 2652 - 2673, The
audio elements are being forced with audio.crossOrigin = "anonymous", which
breaks createMediaElementSource() for file:// URLs from toFileUrl(); update the
logic in the block that creates/initializes the Audio (the audio variable used
with toFileUrl, existing.set, and later routed via createMediaElementSource and
audioRegionNodesRef) to only set crossOrigin when the src is not a file:// URL
(e.g., check expectedSrc.startsWith('file://') and skip setting crossOrigin for
file URLs) or remove the unconditional assignment entirely so local file
playback can be routed through Web Audio API.
volume={(() => {
  const hasGlobalSolo = masterAudioSoloed || audioRegions.some((r) => r.soloed);
  if (masterAudioMuted || (hasGlobalSolo && !masterAudioSoloed)) {
    return 0;
  }
  return previewVolume * masterAudioVolume;
})()}
🧩 Analysis chain
🏁 Script executed:
wc -l src/components/video-editor/VideoEditor.tsx
Length of output: 112
🏁 Script executed:
sed -n '1842,1846p' src/components/video-editor/VideoEditor.tsx
Length of output: 307
🏁 Script executed:
sed -n '1885,1888p' src/components/video-editor/VideoEditor.tsx
Length of output: 208
🏁 Script executed:
sed -n '3590,3596p' src/components/video-editor/VideoEditor.tsx
Length of output: 374
🏁 Script executed:
sed -n '3580,3610p' src/components/video-editor/VideoEditor.tsx
Length of output: 1220
🏁 Script executed:
sed -n '1875,1895p' src/components/video-editor/VideoEditor.tsx
Length of output: 576
🏁 Script executed:
rg "audioRegions.*volume|audio\.volume\s*=\s*1" src/components/video-editor/VideoEditor.tsx -A 2 -B 2
Length of output: 244
🏁 Script executed:
grep -n "videoAudioNodeRef" src/components/video-editor/VideoEditor.tsx | head -20
Length of output: 631
🏁 Script executed:
sed -n '1890,1910p' src/components/video-editor/VideoEditor.tsx
Length of output: 718
🏁 Script executed:
sed -n '1893,1905p' src/components/video-editor/VideoEditor.tsx
Length of output: 544
Original video track volume is double-scaled: DOM property and web audio gain are both applied.
The video element's audio is routed through createMediaElementSource() (lines 1842-1846) and scaled via masterGainRef (lines 1885-1888) using masterAudioVolume * previewVolume. Additionally, the DOM video.volume property is set to the same multiplier (lines 3590-3596), creating dual volume application.
Audio regions follow the correct pattern: they set audio.volume = 1 and use Web Audio gain nodes exclusively. The original video track should do the same: apply the scaling in either the DOM volume property or the gain node, not both.
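The DOM-level half of that fix reduces to a pure gate, with all actual scaling left to the gain node. A sketch, with state names mirroring the review rather than the exact code:

```typescript
// DOM volume for the original video track: gate audibility only (0 or 1).
// The Web Audio gain node applies masterAudioVolume * previewVolume, so
// values above 1.0 and fades keep working and nothing is double-scaled.
function videoElementVolume(state: {
  masterAudioMuted: boolean;
  masterAudioSoloed: boolean;
  anyRegionSoloed: boolean;
}): number {
  const hasGlobalSolo = state.masterAudioSoloed || state.anyRegionSoloed;
  if (state.masterAudioMuted || (hasGlobalSolo && !state.masterAudioSoloed)) {
    return 0;
  }
  return 1;
}
```

This keeps the exclusive-solo semantics of the original IIFE (a soloed region silences the unsoloed master) while the multiplier moves entirely into the gain node.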
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/VideoEditor.tsx` around lines 3590 - 3596, The
video element is being double-scaled because its DOM volume is set using
previewVolume * masterAudioVolume while the same multiplier is applied via
masterGainRef (created from createMediaElementSource); change the DOM volume to
be 1 (or 0 when muted/solo dictates) and let the WebAudio gain node handle the
scaling. Concretely, update the volume prop logic (currently using previewVolume
* masterAudioVolume) to use 1 when audible and 0 when muted/soloed (e.g.
volume={masterAudioMuted || (hasGlobalSolo && !masterAudioSoloed) ? 0 : 1}),
leaving masterGainRef to apply masterAudioVolume * previewVolume; reference
masterGainRef, createMediaElementSource, masterAudioVolume, previewVolume,
masterAudioMuted, masterAudioSoloed and the audioRegions solo check to locate
and fix the code.
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/components/video-editor/timeline/Row.tsx (1)
44-44: Nit: extra blank line at end of file. Consider removing the trailing blank line for consistency with common style conventions (a single newline at EOF).
Suggested fix
}
-
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/components/video-editor/timeline/Row.tsx` at line 44, Remove the extra trailing blank line at the end of src/components/video-editor/timeline/Row.tsx so the file ends with a single newline (ensure EOF contains exactly one newline). Edit the Row component file (Row / default export Row) and delete the extra blank line after the final closing token to match project style conventions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/components/video-editor/editorPreferences.ts`:
- Line 50: Change the whisperSelectedModel field from string to the WhisperModel
union and validate/normalize stored values against that union instead of just
checking typeof string: update the property declaration whisperSelectedModel to
WhisperModel, modify the normalization logic (the block around lines 241–244
that checks raw.whisperSelectedModel) to verify the value is one of the allowed
WhisperModel values ("tiny"|"base"|"small"|"medium"|"large"|"custom") and fall
back to a safe default (e.g., "base" or the existing default) when it is not,
and ensure AutoCaptionSettings.selectedModel consumers (e.g., in VideoEditor.tsx
and SettingsPanel.tsx) no longer need as any casts because types now match. Also
ensure any read/write to storage uses the constrained type so invalid strings
are rejected or replaced during load.
In `@src/components/video-editor/projectPersistence.ts`:
- Around line 492-495: Persisted selectedModel is used verbatim and may be
invalid; validate it against the WhisperModel allowed values and fall back to
the default like normalizeAutoCaptionAnimation does. In the normalization where
rawAutoCaptionSettings.selectedModel is read, check that its value is one of the
WhisperModel options ("tiny","base","small","medium","large","custom") (or reuse
any existing helper that validates models), and if not, set selectedModel to
DEFAULT_AUTO_CAPTION_SETTINGS.selectedModel; update the code paths that
reference selectedModel to use this validated value (symbols: selectedModel,
rawAutoCaptionSettings, DEFAULT_AUTO_CAPTION_SETTINGS, WhisperModel,
normalizeAutoCaptionAnimation).
---
Nitpick comments:
In `@src/components/video-editor/timeline/Row.tsx`:
- Line 44: Remove the extra trailing blank line at the end of
src/components/video-editor/timeline/Row.tsx so the file ends with a single
newline (ensure EOF contains exactly one newline). Edit the Row component file
(Row / default export Row) and delete the extra blank line after the final
closing token to match project style conventions.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 3c24d430-bc09-48ae-abec-4509ff16d34a
📒 Files selected for processing (4)
src/components/video-editor/editorPreferences.ts
src/components/video-editor/projectPersistence.ts
src/components/video-editor/timeline/Row.tsx
src/components/video-editor/timeline/TimelineWrapper.tsx
customWallpapers: string[];
whisperExecutablePath: string | null;
whisperModelPath: string | null;
whisperSelectedModel: string;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify all model-typing/normalization call sites in the editor module.
rg -n --type=ts --type=tsx -C2 \
'whisperSelectedModel:\s*string|typeof .*selectedModel === "string"|selectedModel:\s*\(.*as any\)'
Length of output: 91
🏁 Script executed:
# First, explore the file structure
find . -type f -name "editorPreferences.ts" 2>/dev/null
Length of output: 114
🏁 Script executed:
# Search for WhisperModel type definition
rg -n "type WhisperModel|interface WhisperModel" --type ts 2>/dev/null || rg -n "type WhisperModel|interface WhisperModel"
Length of output: 273
🏁 Script executed:
# Try the corrected rg command without tsx flag
rg -n "whisperSelectedModel:\s*string|typeof .*selectedModel === \"string\"" 2>/dev/null || true
Length of output: 260
🏁 Script executed:
# Read editorPreferences.ts to see the context around lines 50 and 241-244
cat -n src/components/video-editor/editorPreferences.ts | head -80
Length of output: 3325
🏁 Script executed:
# See more of editorPreferences.ts, particularly around lines 241-244
cat -n src/components/video-editor/editorPreferences.ts | sed -n '235,250p'
Length of output: 785
🏁 Script executed:
# Check the imports at the top of editorPreferences.ts
cat -n src/components/video-editor/editorPreferences.ts | head -20
Length of output: 674
🏁 Script executed:
# Check the usage in VideoEditor.tsx around lines 389-392
cat -n src/components/video-editor/VideoEditor.tsx | sed -n '380,400p'
Length of output: 1516
🏁 Script executed:
# Check the usage in SettingsPanel.tsx around lines 1472-1473
cat -n src/components/video-editor/SettingsPanel.tsx | sed -n '1465,1480p'
Length of output: 898
🏁 Script executed:
# Search for all uses of whisperSelectedModel to understand the impact
rg -n "whisperSelectedModel" src/components/video-editor/ | head -20
Length of output: 823
🏁 Script executed:
# Verify WhisperModel type definition and export
cat -n src/components/video-editor/types.ts | sed -n '270,280p'
Length of output: 447
🏁 Script executed:
# Search for all assignments to whisperSelectedModel
rg -n "whisperSelectedModel\s*=" src/components/video-editor/
Length of output: 164
🏁 Script executed:
# Check if there are any other normalization patterns for whisperSelectedModel
rg -n "whisperSelectedModel|selectedModel" src/components/video-editor/projectPersistence.ts | head -15
Length of output: 250
🏁 Script executed:
# Verify the correct import path for WhisperModel
head -5 src/components/video-editor/types.ts
Length of output: 240
Constrain whisperSelectedModel to valid WhisperModel type instead of arbitrary strings.
Line 50 and lines 241–244 currently accept any string value. The normalization at lines 241–244 only checks typeof raw.whisperSelectedModel === "string" without validating the actual value, allowing invalid models to persist in storage. This forces downstream code to use as any casts in VideoEditor.tsx:391 and SettingsPanel.tsx:1473 to bypass the type mismatch with AutoCaptionSettings.selectedModel, which correctly expects WhisperModel (the union type "tiny" | "base" | "small" | "medium" | "large" | "custom" defined in types.ts:275).
🔧 Suggested fix
+import type { WhisperModel } from "./types";
import { normalizeProjectEditor, type ProjectEditorState } from "./projectPersistence";
@@
export interface EditorPreferences extends PersistedEditorControls {
@@
- whisperSelectedModel: string;
+ whisperSelectedModel: WhisperModel;
}
@@
+function normalizeWhisperModel(value: unknown, fallback: WhisperModel): WhisperModel {
+ return value === "tiny" ||
+ value === "base" ||
+ value === "small" ||
+ value === "medium" ||
+ value === "large" ||
+ value === "custom"
+ ? value
+ : fallback;
+}
+
export function normalizeEditorPreferences(
@@
- whisperSelectedModel:
- typeof raw.whisperSelectedModel === "string"
- ? raw.whisperSelectedModel
- : fallback.whisperSelectedModel,
+ whisperSelectedModel: normalizeWhisperModel(
+ raw.whisperSelectedModel,
+ fallback.whisperSelectedModel,
+ ),
};
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/editorPreferences.ts` at line 50, Change the
whisperSelectedModel field from string to the WhisperModel union and
validate/normalize stored values against that union instead of just checking
typeof string: update the property declaration whisperSelectedModel to
WhisperModel, modify the normalization logic (the block around lines 241–244
that checks raw.whisperSelectedModel) to verify the value is one of the allowed
WhisperModel values ("tiny"|"base"|"small"|"medium"|"large"|"custom") and fall
back to a safe default (e.g., "base" or the existing default) when it is not,
and ensure AutoCaptionSettings.selectedModel consumers (e.g., in VideoEditor.tsx
and SettingsPanel.tsx) no longer need as any casts because types now match. Also
ensure any read/write to storage uses the constrained type so invalid strings
are rejected or replaced during load.
selectedModel:
  typeof rawAutoCaptionSettings.selectedModel === "string"
    ? rawAutoCaptionSettings.selectedModel
    : DEFAULT_AUTO_CAPTION_SETTINGS.selectedModel,
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the specific file and lines mentioned
head -500 src/components/video-editor/projectPersistence.ts | tail -30
Length of output: 1505
🏁 Script executed:
# Search for WhisperModel type definition
rg "WhisperModel" -A 3 -B 1
Length of output: 14890
🏁 Script executed:
# Search for AutoCaptionSettings type definition
rg "AutoCaptionSettings" -A 5 -B 1
Length of output: 27777
🏁 Script executed:
# Look for DEFAULT_AUTO_CAPTION_SETTINGS definition
rg "DEFAULT_AUTO_CAPTION_SETTINGS" -A 5 -B 1
Length of output: 13811
🏁 Script executed:
# Check the full normalizeAutoCaptionSettings function context
rg "normalizeAutoCaptionSettings" -A 20 -B 5
Length of output: 46
Add validation for selectedModel against allowed WhisperModel values during project normalization.
Lines 492–495 accept any string from persisted project data without validating against the WhisperModel union type ("tiny", "base", "small", "medium", "large", "custom"). This allows corrupted or malicious project files to hydrate invalid runtime state that bypasses the type system.
A similar pattern already exists in the same file with normalizeAutoCaptionAnimation(); follow that approach to validate the model selection.
🔧 Suggested fix
function normalizeAutoCaptionAnimation(
@@
}
+
+function normalizeWhisperModel(
+ value: unknown,
+ fallback: AutoCaptionSettings["selectedModel"],
+): AutoCaptionSettings["selectedModel"] {
+ return value === "tiny" ||
+ value === "base" ||
+ value === "small" ||
+ value === "medium" ||
+ value === "large" ||
+ value === "custom"
+ ? value
+ : fallback;
+}
@@
- selectedModel:
- typeof rawAutoCaptionSettings.selectedModel === "string"
- ? rawAutoCaptionSettings.selectedModel
- : DEFAULT_AUTO_CAPTION_SETTINGS.selectedModel,
+ selectedModel: normalizeWhisperModel(
+ rawAutoCaptionSettings.selectedModel,
+ DEFAULT_AUTO_CAPTION_SETTINGS.selectedModel,
+ ),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/components/video-editor/projectPersistence.ts` around lines 492 - 495,
Persisted selectedModel is used verbatim and may be invalid; validate it against
the WhisperModel allowed values and fall back to the default like
normalizeAutoCaptionAnimation does. In the normalization where
rawAutoCaptionSettings.selectedModel is read, check that its value is one of the
WhisperModel options ("tiny","base","small","medium","large","custom") (or reuse
any existing helper that validates models), and if not, set selectedModel to
DEFAULT_AUTO_CAPTION_SETTINGS.selectedModel; update the code paths that
reference selectedModel to use this validated value (symbols: selectedModel,
rawAutoCaptionSettings, DEFAULT_AUTO_CAPTION_SETTINGS, WhisperModel,
normalizeAutoCaptionAnimation).
abort, become white screen
Description
Motivation
These changes modernize the audio editing workflow, ensuring that users can monitor a single audio source without manual unsoloing, and clean up technical debt from legacy interaction models.
Type of Change
Related Issue(s)
#120
Screenshots / Video
Testing Guide
Checklist
Summary by CodeRabbit
New Features
Chores