feat: End-to-end pipeline — YouTube URL to deployed software #50

groupthinking merged 1 commit into main
Conversation
- Add /api/pipeline route for the full end-to-end pipeline (video analysis → code generation → GitHub repo → Vercel deploy)
- Add deployPipeline() action to the dashboard store with stage tracking
- Add 🚀 Deploy button to the dashboard alongside Analyze
- Show pipeline results (live URL, GitHub repo, framework) in video cards
- Fix deployment_manager import path in video_processing_service
- Wire pipeline to backend /api/v1/video-to-software endpoint
- Fall back to Gemini-only analysis when no backend is available

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request significantly enhances the platform's capabilities by integrating a complete end-to-end pipeline that automates the transformation of a YouTube video URL into a deployed software project. It introduces a new API route to manage this workflow, provides a dashboard interface to initiate deployments, and displays the results directly on video cards, streamlining the entire process from analysis to live application.
Code Review
This pull request introduces a significant new feature: an end-to-end pipeline for processing a YouTube URL to a deployed software project. While the functionality is impressive, several security issues were identified, including a potential Prompt Injection vulnerability in the LLM analysis path, Information Exposure via detailed error messages in the API, and potential XSS vulnerabilities in the dashboard due to unvalidated URLs. Additionally, I have a few suggestions to improve robustness and maintainability, particularly regarding unique ID generation, use of constants, and logging consistency.
```ts
if (hasGeminiKey()) {
  try {
    const startTime = Date.now();
    const analysis = await analyzeVideoWithGemini(url);
```
The url parameter is passed directly to analyzeVideoWithGemini, which uses it to construct a system prompt for the LLM. This is a potential Prompt Injection vulnerability. An attacker could provide a crafted URL (e.g., containing special characters or instructions) to manipulate the LLM's behavior or output. It is recommended to validate that the url is a legitimate YouTube URL before processing it.
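One way to apply that recommendation — a minimal sketch, not the repository's code; the helper name and the accepted host list are assumptions:

```typescript
// Hypothetical validator: accept only https YouTube URLs carrying a
// well-formed 11-character video id, before the URL reaches any LLM prompt.
const YOUTUBE_HOSTS = new Set(['www.youtube.com', 'youtube.com', 'm.youtube.com', 'youtu.be']);

function isYouTubeUrl(raw: string): boolean {
  let parsed: URL;
  try {
    parsed = new URL(raw);
  } catch {
    return false; // not a URL at all
  }
  if (parsed.protocol !== 'https:') return false;
  if (!YOUTUBE_HOSTS.has(parsed.hostname)) return false;
  // watch URLs keep the id in ?v=; youtu.be keeps it in the path
  const id = parsed.hostname === 'youtu.be'
    ? parsed.pathname.slice(1)
    : parsed.searchParams.get('v') ?? '';
  return /^[A-Za-z0-9_-]{11}$/.test(id);
}
```

Rejecting anything that fails this check before calling `analyzeVideoWithGemini` keeps attacker-controlled instructions out of the prompt entirely, rather than trying to sanitize them afterwards.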
```ts
}, url);

return NextResponse.json({
  id: `pipeline_${Date.now().toString(36)}`,
```
Using Date.now().toString(36) does not guarantee a unique ID, as multiple requests could arrive in the same millisecond. This could lead to subtle bugs with data tracking or collisions. It's more robust to use a cryptographically strong random ID generator like crypto.randomUUID(). This same issue is present on line 105.
Suggested change:

```ts
id: `pipeline_${crypto.randomUUID()}`,
```

```ts
// ── Full end-to-end pipeline: YouTube URL → deployed software ──
deployPipeline: async (url) => {
  const { addVideo, updateVideo, addActivity } = get();
  const id = Date.now().toString();
```
Using Date.now().toString() for the video id is not guaranteed to be unique, especially if a user processes multiple videos in quick succession. This can cause issues in React's rendering and state management, which rely on unique keys for list items. Using crypto.randomUUID() will provide a robustly unique identifier.
```diff
- const id = Date.now().toString();
+ const id = crypto.randomUUID();
```
```ts
console.error('Pipeline error:', error);
await publishEvent(EventTypes.PIPELINE_FAILED, { error: String(error) }, videoUrl).catch(() => {});
return NextResponse.json(
  { error: 'Pipeline failed', details: String(error) },
```
Returning detailed error messages (details: String(error)) to the client can expose sensitive internal information such as stack traces, environment details, or internal logic. This information can be leveraged by an attacker to further compromise the system. It is safer to log the error internally and return a generic error message to the user.
Suggested change:

```ts
return NextResponse.json({ error: 'Pipeline failed' }, { status: 500 });
```

```tsx
<div className="mb-3 space-y-2">
  {video.pipelineResult.live_url && (
    <a
      href={video.pipelineResult.live_url}
```
The live_url from the pipeline result is rendered directly into the href attribute of an anchor tag. If the URL is not validated to ensure it uses a safe protocol (e.g., https:), an attacker could inject a javascript: URL, leading to Cross-Site Scripting (XSS) when the link is clicked. Ensure the URL is validated before rendering.
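A hedged sketch of that validation — `safeHref` is a hypothetical helper, not existing code; it allows only http(s) URLs and returns `null` otherwise so the component can simply skip rendering the link:

```typescript
// Illustrative guard for URLs returned by the pipeline API: anything that
// isn't a parseable http(s) URL (e.g. "javascript:...") is rejected.
function safeHref(raw: string | null | undefined): string | null {
  if (!raw) return null;
  try {
    const parsed = new URL(raw);
    return parsed.protocol === 'https:' || parsed.protocol === 'http:' ? parsed.href : null;
  } catch {
    return null; // unparseable input is never rendered as a link
  }
}
```

In the JSX above, the anchor would then render conditionally on `safeHref(video.pipelineResult.live_url)` instead of interpolating the raw value into `href`.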
```tsx
  )}
  {video.pipelineResult.github_repo && (
    <a
      href={video.pipelineResult.github_repo}
```

```ts
if (BACKEND_AVAILABLE) {
  try {
    const controller = new AbortController();
    const timeout = setTimeout(() => controller.abort(), 300_000); // 5 min for full pipeline
```
The timeout duration 300_000 is a magic number. It's better to express it as a calculation to improve readability and maintainability. For even better practice, consider defining it as a named constant at the top of the file (e.g., const PIPELINE_TIMEOUT_MS = 5 * 60 * 1000;).
Suggested change:

```ts
const timeout = setTimeout(() => controller.abort(), 5 * 60 * 1000); // 5 min for full pipeline
```

```ts
  }
  console.warn(`Backend pipeline returned ${response.status}, falling back`);
} catch (e) {
  console.log('Backend pipeline unavailable:', e);
```
For consistency in logging, it's better to use console.error here to log the failure of the backend pipeline. This aligns with how other errors are logged in this file (e.g., lines 127 and 136) and helps in filtering and prioritizing logs in a production environment.
Suggested change:

```ts
console.error('Backend pipeline unavailable:', e);
```

```ts
};

updateVideo(id, {
  status: result.status === 'success' || result.status === 'complete' ? 'complete' : 'failed',
```
Bug: The frontend store incorrectly maps the new 'partial' API status to 'failed', hiding successful video analysis results and showing an incorrect failure message.
Severity: HIGH
Suggested Fix
Update the status mapping logic in apps/web/src/store/dashboard-store.ts to correctly handle the 'partial' status. For example, you could introduce a new 'partial' state in the frontend or map it to 'complete' to ensure the analysis results are displayed to the user.
Prompt for AI Agent:

> Review the code at the location below. A potential bug has been identified by an AI agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's not valid.
>
> Location: apps/web/src/store/dashboard-store.ts#L311
>
> Potential issue: When the backend is unavailable but Gemini video analysis succeeds, the API correctly returns `status: 'partial'`. However, the frontend logic in `dashboard-store.ts` only recognizes `'success'` or `'complete'` as valid success states, so it incorrectly maps the new `'partial'` status to `'failed'`. As a result, the UI displays a "Failed" status and a misleading error message while the successfully generated video analysis data is hidden from the user. This issue occurs in the supported "Gemini-only" configuration.
Pull request overview
This PR wires a full “YouTube URL → analysis → codegen → repo → deploy” path by adding a dedicated Next.js pipeline API route and integrating it into the dashboard UI/state, plus a small backend import-path fix to enable the deployment manager.
Changes:
- Added a `POST /api/pipeline` Next.js route that calls the backend `/api/v1/video-to-software`, with a Gemini analysis-only fallback.
- Extended the dashboard Zustand store and UI to trigger the pipeline, track progress stages, and display deployment outputs (live URL/repo/framework).
- Fixed the backend `deployment_manager` import path used by the video-to-software service.
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| `src/youtube_extension/backend/services/video_processing_service.py` | Fixes deployment manager import path to unblock the backend pipeline. |
| `apps/web/src/store/dashboard-store.ts` | Adds `PipelineResult` + `deployPipeline()` action and stores pipeline output into `Video`. |
| `apps/web/src/app/dashboard/page.tsx` | Adds "Deploy" button and renders pipeline results (live URL/repo/framework) on cards. |
| `apps/web/src/app/api/video/route.ts` | Clarifies that `/api/video` is analysis-only and points users to `/api/pipeline` for the full pipeline. |
| `apps/web/src/app/api/pipeline/route.ts` | Implements the new end-to-end pipeline API route with backend-first + Gemini fallback strategies. |
```ts
title: `🚀 Deploying: ${url.length > 40 ? url.substring(0, 37) + '…' : url}`,
url,
status: 'processing',
progress: 5,
```
deployPipeline() creates a video without setting pipelineResult, so page.tsx treats it as a non-pipeline run (it checks pipelineResult !== undefined) and shows the Transcribe/Analyze/Extract stages while the deploy pipeline is running. If you intend pipeline runs to show Generate/Deploy stages immediately, initialize a sentinel value at creation time (e.g., set pipelineResult to null and update the Video type accordingly, or add an explicit mode/pipeline flag).
Suggested change:

```ts
progress: 5,
pipelineResult: {
  live_url: null,
  github_repo: null,
  build_status: 'pending',
},
```
```ts
updateVideo(id, {
  status: result.status === 'success' || result.status === 'complete' ? 'complete' : 'failed',
  progress: 100,
  title: `Deployed: ${url.length > 40 ? url.substring(0, 37) + '…' : url}`,
```
deployPipeline() treats any non-success/complete response as a failure and still sets the title to Deployed: …. However /api/pipeline returns status: 'partial' for the Gemini-only fallback, which will incorrectly mark the video as failed even though analysis succeeded. Consider handling partial explicitly (e.g., set status to complete with a different label, or introduce a partial UI status) and avoid using a Deployed: title unless a deployment URL is present.
Suggested change:

```ts
const isSuccessful =
  result.status === 'success' ||
  result.status === 'complete' ||
  result.status === 'partial';
const hasDeployment =
  !!pipelineResult.live_url ||
  !!(pipelineResult.deployment?.urls && Object.keys(pipelineResult.deployment.urls).length > 0);
const uiStatus = isSuccessful ? 'complete' : 'failed';
const titlePrefix = hasDeployment ? 'Deployed' : 'Processed';
updateVideo(id, {
  status: uiStatus,
  progress: 100,
  title: `${titlePrefix}: ${url.length > 40 ? url.substring(0, 37) + '…' : url}`,
```
```ts
    topics: string[];
  };
}

import type { PipelineResult, Video } from '@/store/dashboard-store';
```
PipelineResult is imported here but never used in this file. If linting is enabled during build (as is typical with Next.js), this can fail CI or add noise; consider removing the unused type import and only importing Video.
```diff
- import type { PipelineResult, Video } from '@/store/dashboard-store';
+ import type { Video } from '@/store/dashboard-store';
```
```ts
 *
 * Strategies:
 * 1. Backend pipeline (FastAPI /api/v1/video-to-software) — full pipeline with agents
 * 2. Gemini analysis + frontend deployment — when no backend is available
```
The comment above says Strategy 2 is "Gemini analysis + frontend deployment" when the backend is unavailable, but the implementation only performs Gemini analysis and explicitly returns live_url: null with code_generation: null and deployment: null. Please update the comment to match the actual behavior (analysis-only) to avoid misleading future maintainers.
```diff
- * 2. Gemini analysis + frontend deployment — when no backend is available
+ * 2. Gemini analysis only (video intelligence, no deployment) — when no backend is available
```
```ts
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 300_000); // 5 min for full pipeline

let response: Response;
try {
  response = await fetch(`${BACKEND_URL}/api/v1/video-to-software`, {
```
This route waits synchronously for the full backend video-to-software pipeline and sets a 5-minute fetch timeout. On Vercel (and similar serverless deployments), the function itself typically has a shorter execution limit (and apps/web/vercel.json doesn’t configure a longer maxDuration), so the request may time out before the backend finishes. Consider making this endpoint asynchronous (enqueue + return 202 + poll), or explicitly configuring/validating the serverless timeout budget and failing fast with a clear message when it’s insufficient.
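The enqueue-and-poll shape could look roughly like this — an illustrative sketch only, with an in-memory job map standing in for a durable queue; the route wiring, names, and `Job` type are assumptions, and a real serverless deployment would need shared storage (database, Redis) since instances don't share memory:

```typescript
import { randomUUID } from 'node:crypto';

// Illustrative in-memory job store (assumption: single-process demo only).
type Job = { status: 'running' | 'done' | 'failed'; result?: unknown };
const jobs = new Map<string, Job>();

// POST handler shape: kick off the pipeline, return the job id immediately
// (the HTTP response would be 202 Accepted with { jobId }).
function enqueuePipeline(run: () => Promise<unknown>): string {
  const id = randomUUID();
  jobs.set(id, { status: 'running' });
  run()
    .then((result) => jobs.set(id, { status: 'done', result }))
    .catch(() => jobs.set(id, { status: 'failed' }));
  return id;
}

// GET status-route shape: the dashboard polls until 'done' or 'failed'.
function getJob(id: string): Job | undefined {
  return jobs.get(id);
}
```

The dashboard store would then poll the status route on an interval instead of holding a single request open for five minutes, which sidesteps the serverless execution limit entirely.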
```ts
export interface PipelineResult {
  live_url: string | null;
  github_repo: string | null;
  build_status: string;
  code_generation: {
    framework: string;
    files_created: string[];
    entry_point: string;
  } | null;
  deployment: {
    status: string;
    platforms: string[];
    urls: Record<string, string>;
  } | null;
}
```
PipelineResult is stored in Zustand state and accessed throughout the UI, but its fields use snake_case (live_url, github_repo, etc.) while the rest of the dashboard store uses camelCase. This introduces inconsistent naming in the in-memory model and forces components to use mixed conventions. Consider mapping the API response to camelCase when building pipelineResult (e.g., liveUrl, githubRepo, buildStatus, codeGeneration, deployment) and keeping API snake_case only at the network boundary.
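A boundary mapper along those lines might look like this — a sketch under the assumption that the camelCase names simply mirror the snake_case fields above; `toPipelineResult` and `PipelineResultCamel` are hypothetical names, not existing code:

```typescript
// camelCase shape for the store, mirroring the snake_case PipelineResult.
interface PipelineResultCamel {
  liveUrl: string | null;
  githubRepo: string | null;
  buildStatus: string;
  codeGeneration: { framework: string; filesCreated: string[]; entryPoint: string } | null;
  deployment: { status: string; platforms: string[]; urls: Record<string, string> } | null;
}

// Convert the API payload once, at the network boundary, so components
// never see mixed naming conventions.
function toPipelineResult(api: Record<string, any>): PipelineResultCamel {
  return {
    liveUrl: api.live_url ?? null,
    githubRepo: api.github_repo ?? null,
    buildStatus: api.build_status ?? 'unknown',
    codeGeneration: api.code_generation
      ? {
          framework: api.code_generation.framework,
          filesCreated: api.code_generation.files_created,
          entryPoint: api.code_generation.entry_point,
        }
      : null,
    deployment: api.deployment ?? null,
  };
}
```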
…51)

* feat: Initialize PGLite v17 database data files for the dataconnect project.
* feat: enable automatic outline generation for Gemini Code Assist in VS Code settings.
* feat: Add NotebookLM integration with a new processor and `analyze_video_with_notebooklm` MCP tool.
* feat: Add NotebookLM profile data and an ingestion test.
* chore: Update and add generated browser profile files for notebooklm development.
* Update `notebooklm_chrome_profile` internal state and add architectural context documentation and video asset.
* feat: Add various knowledge prototypes for MCP servers and universal automation, archive numerous scripts and documentation, and update local browser profile data.
* chore: Refresh notebooklm Chrome profile data across several commits (preferences, caches, code cache, GPU cache, session files, Safe Browsing data, service worker data).
* chore(deps): bump the npm_and_yarn group across 4 directories with 5 updates — ajv 8.17.1→8.18.0, hono 4.11.x→4.12.1, qs 6.14.1→6.15.0, @modelcontextprotocol/sdk 1.25.2→1.26.0, next 15.4.10→15.5.10. Signed-off-by: dependabot[bot] <support@github.com>
* chore(deps): bump minimatch 3.1.2→3.1.4 in /scripts/archive/supabase_cleanup. Signed-off-by: dependabot[bot]
* chore(deps): bump hono 4.12.1→4.12.2 across 2 directories. Signed-off-by: dependabot[bot]
* feat: enable frontend-only video ingestion pipeline for Vercel deployment — the core pipeline previously required the Python backend, so all video analysis failed on Vercel (https://v0-uvai.vercel.app/). Changes: /api/video falls back to a frontend-only pipeline (transcribe + extract) with a 15s timeout when the backend is unreachable; /api/transcribe adds a Gemini fallback when OpenAI is unavailable plus an 8s timeout on the backend probe; layout.tsx loads Google Fonts via `<link>` instead of next/font/google to avoid build failures in offline CI; page.tsx replaces example URLs with technical content (3Blue1Brown neural networks, Karpathy LLM intro); gemini_service.py gates the Vertex AI import behind GOOGLE_CLOUD_PROJECT to prevent 30s+ hangs on the GCE metadata probe; agent_gap_analyzer.py fixes f-string backslash syntax errors (Python 3.11).
* Potential fixes for code scanning alerts no. 4517 and 4518: Server-side request forgery. Co-authored-by: Copilot Autofix powered by AI
* Fix review feedback: move clearTimeout into .finally() blocks to prevent timer leaks on fetch abort/error, correct transcript_segments shape, and parse ENABLE_VERTEX_AI as a boolean.
* Fix: relative URLs in server-side fetch calls fail in production — in Next.js API routes running on the Node.js runtime, `fetch()` requires absolute URLs; `/api/transcribe` and `/api/extract-events` were fetched with relative paths, which work in development only because the request URL carries the host, and break the frontend-only fallback (Strategy 2) on Vercel. Fix: a `getBaseUrl(request)` helper derives the absolute base URL from the incoming request via `new URL(request.url)`, which is more reliable than environment variables because it uses the actual request context and handles reverse proxies and differing deployment configurations correctly.
* chore(deps): bump minimatch (3.1.2→3.1.5 and 5.1.6→5.1.9) in /docs/knowledge_prototypes/mcp-servers/fetch-mcp. Signed-off-by: dependabot[bot]
* fix: validate BACKEND_URL before using it — skip backend calls entirely when BACKEND_URL is unset or contains an invalid value (like a literal `${...}` template string), preventing URL parse errors on Vercel.
* fix: resolve embeddings package build errors (#41) — stub types for the Firebase Data Connect SDK in src/dataconnect-generated/, corrected import path (rootDir constraint), explicit type assertions for JSON responses; all 6 TypeScript errors resolved, clean build verified.
* feat: Gemini SDK upgrade + VideoPack schema alignment (#43) — upgrade extract-events and transcribe routes from @google/generative-ai to @google/genai; add Gemini responseSchema structured output and Google Search grounding; add direct YouTube URL processing via fileData with a fallback chain; extend VideoPackV0 with Chapter, CodeCue, and Task models.
* feat: wire CloudEvents pipeline + Chrome Built-in AI fallback (#44) — TypeScript CloudEvents publisher (apps/web/src/lib/cloudevents.ts) emitting standardized events at each video processing stage, wired into /api/video and the FastAPI backend router; Chrome Built-in AI service (Prompt API + Summarizer API) with a useBuiltInAI React hook for on-device transcript analysis when API keys are unavailable.
* feat: wire A2A inter-agent messaging into orchestrator + API (#45) — A2AContextMessage dataclass for lightweight inter-agent context sharing, auto-broadcast of agent results to peers after parallel execution, send_a2a_message()/get_a2a_log() orchestrator methods, POST /api/v1/agents/a2a/send and GET /api/v1/agents/a2a/log endpoints, and sendA2AMessage()/getA2ALog() in the frontend agentService.
* feat: add LiteRT-LM setup script and update README (#46) — setup.sh downloads the lit CLI binary and .litertlm model (macOS arm64 and x86_64), auto-generates .env with LIT_BINARY_PATH and LIT_MODEL_PATH.
* feat: implement Gemini agentic video analysis with Google Search grounding (#47) — gemini-video-analyzer.ts performs transcript extraction and event analysis in a single Gemini call (PK=998 pattern); youtube-metadata.ts scrapes title, description, and chapters from YouTube without an API key; /api/video uses Gemini agentic analysis as the primary strategy with the transcribe→extract chain as fallback; /api/transcribe and /api/extract-events fixed to work without broken fileData.fileUri and without requiring a transcript.
* fix: support Vertex_AI_API_KEY as Gemini key fallback — shared gemini-client.ts resolves the API key from GEMINI_API_KEY → GOOGLE_API_KEY → Vertex_AI_API_KEY; all API routes use the shared client.
* fix: use Vertex AI Express Mode for Vertex_AI_API_KEY — when only Vertex_AI_API_KEY is set, initialize the client with vertexai: true + apiKey, defaulting to project uvai-730bb and us-central1; GOOGLE_CLOUD_PROJECT added to Vercel.
* fix: Vertex AI Express Mode compatibility (#48) — Vertex AI rejected responseSchema combined with the googleSearch tool, causing 400 errors on every Gemini call; removed responseSchema, enforced JSON via prompt instructions, and stripped markdown code fences before parsing.
* fix: restore full PK=998 pattern (#49) — the PR #48 fix was a shortcut: the real issue was that gemini-2.5-flash doesn't support responseSchema + googleSearch together on Vertex AI, while gemini-3-pro-preview does. Restored responseSchema with the Type system, responseMimeType, and the e22Snippets field, and moved all three routes to gemini-3-pro-preview. Tested with a Vertex AI Express Mode key on two YouTube videos.
* feat: end-to-end pipeline — YouTube URL to deployed software (#50) — the changes described in the PR description above.
* fix: add writable directories to Docker image for deployment pipeline — create /app/generated_projects, /app/youtube_processed_videos, and /tmp/uvai_data in the Dockerfile to fix permission-denied errors on Railway.
* fix: security hardening, video-specific codegen, API consistency — explicit CORS allowed origins in both entry points, 60 req/min rate limiting with 15 burst, optional X-API-Key middleware for pipeline endpoints, video-specific HTML/CSS/JS generated from analysis output, 'url'/'video_url' accepted via Pydantic alias, and the Vercel REST API payload fixed (gitSource instead of gitRepository).
* fix: Vercel deployment returning empty live_url — case mismatch in _poll_deployment_status (lowercased status compared against an uppercase success_statuses list, so READY never matched); added _ensure_https() to normalize bare domain URLs returned without an https:// prefix; poll requests were missing auth headers, causing 401 failures -
_deploy_files_directly fallback returned fake simulated URLs that masked real failures; removed in favor of proper error reporting - _generate_deployment_urls only returned URLs from 'success' status deployments, discarding useful fallback URLs from failed deployments Improvements: - On API failure (permissions, plan limits), return a Vercel import URL the user can click to deploy manually instead of an empty string - Support VERCEL_ORG_ID team scoping on deploy and poll endpoints - Use readyState field (Vercel v13 API) for initial status check - Add 'canceled' to failure status list in poll loop - Poll failures are now non-fatal; initial URL is used as fallback Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * fix: harden slim entry point — CORS, rate limiting, auth, security headers - Add uvaiio.vercel.app to CORS allowed origins - Add slowapi rate limiting (60 req/min) - Add API key auth middleware (optional via EVENTRELAY_API_KEY) - Add security headers (X-Content-Type-Options, X-Frame-Options, X-XSS-Protection) - Fixes production gap where slim main.py had none of the backend/main.py protections Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * fix: resolve Pydantic Config/model_config conflict breaking Railway deploy The VideoToSoftwareRequest model had both 'model_config = ConfigDict(...)' and 'class Config:' which Pydantic v2 rejects. Merged into single model_config. This was causing the v1 router to fail loading, making /api/v1/health return 404. 
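The Config/model_config conflict has a mechanical fix: fold everything into one model_config. A minimal Pydantic v2 sketch — field and alias names here are illustrative, combining this commit with the earlier 'url'/'video_url' alias change, not the actual model definition:

```python
from pydantic import BaseModel, ConfigDict, Field

class VideoToSoftwareRequest(BaseModel):
    # Before the fix, this model also declared a nested `class Config:`
    # alongside model_config, which Pydantic v2 rejects at class-definition
    # time. After the fix, all settings live in this single model_config.
    model_config = ConfigDict(populate_by_name=True)

    # Accept both 'url' (the alias) and 'video_url' (the field name).
    video_url: str = Field(alias="url")

# Both payload spellings validate:
a = VideoToSoftwareRequest.model_validate({"url": "https://youtu.be/abc"})
b = VideoToSoftwareRequest.model_validate({"video_url": "https://youtu.be/abc"})
```

With populate_by_name=True, the alias and the field name are both accepted on input, which is what lets the endpoint tolerate either key.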
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>
Co-authored-by: Vercel <vercel[bot]@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Summary
This PR wires the complete end-to-end pipeline that the platform was always designed for:
YouTube URL → Video Analysis → Code Generation → GitHub Repo → Vercel Deploy → Live URL
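That flow can be sketched end to end. Everything below — the stage names, the backend callable, the on_stage hook — is illustrative of the stage-tracking and Gemini-only-fallback behavior described in this PR, not the real /api/pipeline implementation:

```python
# Hypothetical sketch: run every pipeline stage via the backend, reporting
# progress per stage, or fall back to analysis-only when no backend is set.
STAGES = ("analysis", "codegen", "github", "deploy")

def run_pipeline(video_url, backend=None, on_stage=lambda stage: None):
    if backend is None:
        # No backend configured: Gemini-only analysis, no repo or deploy.
        on_stage("analysis")
        return {"status": "analysis_only", "analysis": f"gemini:{video_url}"}
    results = {}
    for stage in STAGES:
        on_stage(stage)                       # drives UI progress tracking
        results[stage] = backend(stage, video_url)
    results["status"] = "deployed"
    return results
```

The on_stage callback is the piece the dashboard store would hook into so the Deploy button can show which stage is currently running.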
Changes
- /api/pipeline route — New Next.js endpoint for the full pipeline. Calls backend /api/v1/video-to-software, falls back to Gemini-only analysis
- deployPipeline() — New store action with pipeline stage progress tracking
- Fix deployment_manager import path in video_processing_service.py

Validation
- npx tsc --noEmit — ✅ zero errors
- npm run build — ✅ builds successfully
- curl POST /api/v1/video-to-software → created GitHub repo + generated code
- ƒ /api/pipeline listed as a dynamic route in the build output

Architecture