Skip to content

feat: Wire CloudEvents pipeline + Chrome Built-in AI fallback#44

Merged
groupthinking merged 1 commit intomainfrom
feat/cloudevents-and-builtinai
Feb 28, 2026
Merged

feat: Wire CloudEvents pipeline + Chrome Built-in AI fallback#44
groupthinking merged 1 commit intomainfrom
feat/cloudevents-and-builtinai

Conversation

@groupthinking
Copy link
Owner

Wire CloudEvents into both Next.js and FastAPI video pipelines. Add Chrome Built-in AI (Prompt API + Summarizer) as client-side fallback.

- Add TypeScript CloudEvents publisher (apps/web/src/lib/cloudevents.ts)
  emitting standardized events at each video processing stage
- Wire CloudEvents into /api/video route (both backend + frontend strategies)
- Wire CloudEvents into FastAPI backend router (process_video_v1 endpoint)
- Add Chrome Built-in AI service (Prompt API + Summarizer API) for
  on-device client-side transcript analysis when API keys are unavailable
- Add useBuiltInAI React hook for component integration
- Add .next/ to .gitignore

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings February 28, 2026 18:03
@vercel
Copy link
Contributor

vercel bot commented Feb 28, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project Deployment Actions Updated (UTC)
v0-uvai Building Building Preview, Comment, Open in v0 Feb 28, 2026 6:03pm

@groupthinking groupthinking merged commit a2075c3 into main Feb 28, 2026
13 of 15 checks passed
@groupthinking groupthinking deleted the feat/cloudevents-and-builtinai branch February 28, 2026 18:03
@gemini-code-assist
Copy link
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the video processing system by integrating CloudEvents for comprehensive pipeline observability and introducing a robust client-side fallback using Chrome's Built-in AI. These changes ensure that video analysis processes are more transparent, resilient, and continue to function even when backend services are not fully available, thereby improving the overall reliability and user experience of the application.

Highlights

  • CloudEvents Integration: Implemented CloudEvents publishing across both the Next.js frontend API route and the FastAPI backend video processing pipeline. This provides standardized event emission at various stages of video analysis, enhancing observability and enabling downstream consumers to react to pipeline events.
  • Chrome Built-in AI Fallback: Introduced a client-side fallback mechanism utilizing Chrome's Built-in AI (Prompt API and Summarizer API). This allows for on-device summarization and event extraction when server-side API keys are unavailable, improving resilience and user experience.
  • New Utility Files: Added dedicated utility files for CloudEvents (apps/web/src/lib/cloudevents.ts) and Chrome Built-in AI services (apps/web/src/lib/services/builtin-ai.ts), encapsulating their respective logic and improving code organization.
  • Improved Error Handling and Logging: Enhanced error handling in the video processing pipelines to publish PIPELINE_FAILED CloudEvents, providing better insight into failures.
Changelog
  • .gitignore
    • Added '.next/' to the ignore list to prevent build artifacts from being committed.
  • apps/web/src/app/api/video/route.ts
    • Imported CloudEvents publisher and event types.
    • Captured the video URL early in the request for consistent event subjects.
    • Published VIDEO_RECEIVED event at the start of video processing.
    • Published PIPELINE_COMPLETED event upon successful backend pipeline execution.
    • Published TRANSCRIPT_STARTED, TRANSCRIPT_COMPLETED, EXTRACTION_STARTED, and EXTRACTION_COMPLETED events for the frontend processing strategy.
    • Published PIPELINE_COMPLETED or PIPELINE_FAILED events based on the outcome of the frontend pipeline.
    • Added a PIPELINE_FAILED event publication in the catch block for general video analysis errors.
  • apps/web/src/hooks/use-builtin-ai.ts
    • Added a new React hook useBuiltInAI to expose Chrome Built-in AI capabilities.
    • Implemented state management for checking promptAPI and summarizerAPI availability.
    • Provided functions for summarize and extractEvents using the Built-in AI services.
  • apps/web/src/lib/cloudevents.ts
    • Created a new utility file for CloudEvents v1.0 publishing.
    • Defined the CloudEvent interface and a set of EventTypes for various pipeline stages.
    • Implemented a publishEvent function that sends events to a configured webhook or logs them locally as a fallback.
  • apps/web/src/lib/services/builtin-ai.ts
    • Created a new service file for integrating with Chrome Built-in AI.
    • Included global type declarations for window.ai APIs (LanguageModel and Summarizer).
    • Provided checkCapabilities to determine the availability of Built-in AI features.
    • Implemented summarizeTranscript using the Summarizer API, with a fallback to the Prompt API.
    • Implemented extractEventsLocal using the Prompt API for structured data extraction.
    • Added a low-level promptExtract helper function for Prompt API calls with session management.
  • src/youtube_extension/backend/api/v1/router.py
    • Imported the CloudEvents publisher for backend integration.
    • Initialized an optional CloudEvents publisher, defaulting to a file sink if no backend is configured.
    • Added an _emit_event asynchronous helper function to publish CloudEvents.
    • Published com.eventrelay.video.received event at the start of the backend video processing.
    • Published com.eventrelay.pipeline.completed event upon successful completion of the backend pipeline.
    • Published com.eventrelay.pipeline.failed event when an exception occurs during backend video processing.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Copy link
Contributor

Copilot AI left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Pull request overview

This PR adds CloudEvents emission across the backend (FastAPI) and frontend (Next.js) video processing pipelines, and introduces a client-side fallback using Chrome Built-in AI (Prompt API + Summarizer) when server-side processing isn’t available.

Changes:

  • Add CloudEvents emission in the FastAPI /api/v1/process-video endpoint.
  • Add a Next.js CloudEvents publisher and emit stage events from /api/video.
  • Introduce Chrome Built-in AI utilities and a React hook to expose summarization/event extraction capabilities.

Reviewed changes

Copilot reviewed 5 out of 6 changed files in this pull request and generated 7 comments.

Show a summary per file
File Description
src/youtube_extension/backend/api/v1/router.py Emits CloudEvents for backend video pipeline lifecycle.
apps/web/src/lib/services/builtin-ai.ts Implements Chrome Built-in AI capability detection + summarization/extraction helpers.
apps/web/src/lib/cloudevents.ts Adds a lightweight CloudEvents v1.0 event constructor and webhook publisher for Next.js.
apps/web/src/hooks/use-builtin-ai.ts Provides a client hook wrapping Built-in AI capability checks and helpers.
apps/web/src/app/api/video/route.ts Emits CloudEvents throughout the Next.js video analysis route (backend + frontend-only strategies).
.gitignore Ignores .next/ build artifacts.

Comment on lines +1 to +9
/**
* CloudEvents v1.0 publisher for the Next.js frontend pipeline.
*
* Emits standardized events at each video processing stage so that
* downstream consumers (Pub/Sub, webhooks, file sink) can react.
*
* When no backend is configured the events are written to a local
* JSONL file (`/tmp/cloudevents.jsonl`) for observability.
*/
Copy link

Copilot AI Feb 28, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The module docstring/comments claim events are appended to /tmp/cloudevents.jsonl when no backend/webhook is configured, but the implementation only console.logs and never writes a JSONL file. Either implement the file sink behavior (server-side only) or update the comments to match the actual behavior so operators aren’t misled.

Copilot uses AI. Check for mistakes.
Comment on lines +67 to +80
await fetch(webhookUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/cloudevents+json',
},
body: JSON.stringify(event),
});
} catch (e) {
console.warn('[CloudEvents] Webhook publish failed:', e);
}
}

// Always log the event for observability
console.log(`[CloudEvent] ${type}`, JSON.stringify({ id: event.id, subject }));
Copy link

Copilot AI Feb 28, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

fetch(webhookUrl, …) won’t throw on non-2xx responses, so webhook delivery failures (e.g., 400/500) will be silently treated as success. Consider checking response.ok / status and logging a warning (and possibly including the CloudEvent id/type) when the webhook responds with an error.

Suggested change
await fetch(webhookUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/cloudevents+json',
},
body: JSON.stringify(event),
});
} catch (e) {
console.warn('[CloudEvents] Webhook publish failed:', e);
}
}
// Always log the event for observability
console.log(`[CloudEvent] ${type}`, JSON.stringify({ id: event.id, subject }));
const response = await fetch(webhookUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/cloudevents+json',
},
body: JSON.stringify(event),
});
if (!response.ok) {
console.warn(
'[CloudEvents] Webhook responded with error status',
{
status: response.status,
statusText: response.statusText,
webhookUrl,
eventId: event.id,
eventType: event.type,
},
);
}
} catch (e) {
console.warn('[CloudEvents] Webhook publish failed:', e, {
webhookUrl,
eventId: event.id,
eventType: event.type,
});
}
}
// Always log the event for observability
console.log(
`[CloudEvent] ${type}`,
JSON.stringify({ id: event.id, subject }),
);

Copilot uses AI. Check for mistakes.
Comment on lines +12 to +13
/* eslint-disable @typescript-eslint/no-explicit-any */

Copy link

Copilot AI Feb 28, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

/* eslint-disable @typescript-eslint/no-explicit-any */ is applied file-wide, but this file doesn’t currently use any. Removing the disable (or narrowing it to the specific line(s) that need it) will keep linting effective.

Suggested change
/* eslint-disable @typescript-eslint/no-explicit-any */

Copilot uses AI. Check for mistakes.
Comment on lines +65 to +73
if (webhookUrl) {
try {
await fetch(webhookUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/cloudevents+json',
},
body: JSON.stringify(event),
});
Copy link

Copilot AI Feb 28, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

publishEvent performs an outbound fetch without any timeout/abort signal. In serverless/edge runtimes a slow or hanging webhook can unnecessarily delay the video pipeline (and you call publishEvent multiple times per request). Consider adding an AbortController timeout and/or making webhook publishing explicitly best-effort so it can’t hold the main request open.

Copilot uses AI. Check for mistakes.
Comment on lines +24 to +30
# CloudEvents integration (optional — falls back to file sink)
try:
from youtube_extension.integration.cloudevents_publisher import create_publisher as _create_publisher

_ce_publisher = _create_publisher(backend="file")
except Exception:
_ce_publisher = None
Copy link

Copilot AI Feb 28, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The CloudEvents publisher import path looks incorrect: youtube_extension.integration doesn’t exist in this repo (there’s src/integration/cloudevents_publisher.py and youtube_extension/integrations/). As written, this will always fall into the except and disable CloudEvents publishing entirely.

Copilot uses AI. Check for mistakes.
Comment on lines +24 to +30
# CloudEvents integration (optional — falls back to file sink)
try:
from youtube_extension.integration.cloudevents_publisher import create_publisher as _create_publisher

_ce_publisher = _create_publisher(backend="file")
except Exception:
_ce_publisher = None
Copy link

Copilot AI Feb 28, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

_create_publisher(backend="file") hard-codes the backend and bypasses create_publisher()’s env-based configuration (CLOUDEVENTS_BACKEND, WEBHOOK_URL, etc.). If the intent is to “wire CloudEvents into the pipeline”, consider letting the backend be selected via env (or defaulting to file only when no backend is configured) so pubsub/http can be enabled without code changes.

Suggested change
# CloudEvents integration (optional — falls back to file sink)
try:
from youtube_extension.integration.cloudevents_publisher import create_publisher as _create_publisher
_ce_publisher = _create_publisher(backend="file")
except Exception:
_ce_publisher = None
# CloudEvents integration (optional — backend selected via environment)
try:
from youtube_extension.integration.cloudevents_publisher import create_publisher as _create_publisher
except ImportError:
_ce_publisher = None
else:
try:
# Let create_publisher() select backend from env (e.g. CLOUDEVENTS_BACKEND)
_ce_publisher = _create_publisher()
except Exception:
# On initialization failure, disable CloudEvents integration
_ce_publisher = None

Copilot uses AI. Check for mistakes.
Comment on lines 576 to 579
except Exception as e:
logger.error(f"Error in video processing: {e}")
await _emit_event("com.eventrelay.pipeline.failed", {"url": request.video_url, "error": str(e)}, request.video_url)
raise HTTPException(status_code=500, detail=str(e))
Copy link

Copilot AI Feb 28, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The failure event publishes error: str(e) and uses the raw request.video_url as subject. Because the CloudEvents publisher can send to external backends (HTTP/webhook/PubSub), this can leak internal error details and arbitrary user-provided URLs. Consider emitting a sanitized error code/message (and/or truncating) and using a normalized identifier (e.g., extracted YouTube video_id) instead of the full URL in event payloads/subject.

Copilot uses AI. Check for mistakes.
Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This pull request introduces a CloudEvents pipeline for both the Next.js frontend and the FastAPI backend to emit events at various stages of video processing, and adds a client-side fallback mechanism using the experimental Chrome Built-in AI APIs (Prompt and Summarizer). However, two medium-severity security issues were identified: a prompt injection vulnerability in the client-side AI processing and a potential Denial of Service (DoS) risk due to missing timeouts in the event publishing logic. Additionally, areas for improvement include making exception handling more specific, improving type safety in accordance with the project's style guide, and updating comments that are out of sync with the code.

});

try {
return await session.prompt(text);
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-medium medium

The promptExtract function directly passes untrusted transcript text to the Chrome Built-in AI model without any sanitization or framing. An attacker can craft a video transcript that, when processed by the AI, causes it to generate malicious content or bypass intended constraints. This is a classic prompt injection vulnerability in an LLM-based feature. While this is a client-side AI feature, the manipulated output could still impact the user experience or lead to other vulnerabilities like XSS if the output is rendered unsafely in the UI.

Comment on lines +67 to +73
await fetch(webhookUrl, {
method: 'POST',
headers: {
'Content-Type': 'application/cloudevents+json',
},
body: JSON.stringify(event),
});
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

security-medium medium

The publishEvent function uses fetch to send events to a webhook URL without a timeout. Since this function is called multiple times and awaited in the main API route (/api/video), a slow or hanging webhook can block the API response, potentially leading to a Denial of Service (DoS) of the serverless function or worker process. Implementing a timeout for the fetch call is recommended to ensure system resilience.

Comment on lines +7 to +8
* When no backend is configured the events are written to a local
* JSONL file (`/tmp/cloudevents.jsonl`) for observability.
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

This comment, and the one on line 54, states that events are written to /tmp/cloudevents.jsonl. However, the implementation on line 80 only logs events to the console. To avoid confusion for future developers, please update the comments to accurately describe the code's behavior.

Suggested change
* When no backend is configured the events are written to a local
* JSONL file (`/tmp/cloudevents.jsonl`) for observability.
* When no webhook is configured, events are logged to the console for
* observability, which is useful in serverless environments like Vercel.

* - Otherwise → append to /tmp/cloudevents.jsonl (dev/Vercel)
*/
export async function publishEvent(
type: string,
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The type parameter is currently a generic string. For better type safety and to prevent passing invalid event types, you can create a specific type from your EventTypes object and use that in the function signature.

First, add this type definition after your EventTypes constant (e.g., on line 49):

export type EventType = (typeof EventTypes)[keyof typeof EventTypes];

Then, you can use it here.

Suggested change
type: string,
type: EventType,

from youtube_extension.integration.cloudevents_publisher import create_publisher as _create_publisher

_ce_publisher = _create_publisher(backend="file")
except Exception:
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

Catching a broad Exception for an import can hide unrelated issues during application startup. It's better to catch the more specific ImportError that would be raised if the cloudevents_publisher module is not found.

Suggested change
except Exception:
except ImportError:

logger = logging.getLogger(__name__)


async def _emit_event(event_type: str, data: dict, subject: str | None = None) -> None:
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

The repository's style guide requires 'strict type hinting'. The type hint for the data parameter is dict, which is not very specific. Using dict[str, Any] would be stricter and more descriptive, indicating a dictionary with string keys.

Suggested change
async def _emit_event(event_type: str, data: dict, subject: str | None = None) -> None:
async def _emit_event(event_type: str, data: dict[str, Any], subject: str | None = None) -> None:
References
  1. All functions must have strict type hinting. (link)

groupthinking added a commit that referenced this pull request Mar 4, 2026
…51)

* feat: Initialize PGLite v17 database data files for the dataconnect project.

* feat: enable automatic outline generation for Gemini Code Assist in VS Code settings.

* feat: Add NotebookLM integration with a new processor and `analyze_video_with_notebooklm` MCP tool.

* feat: Add NotebookLM profile data and an ingestion test.

* chore: Update and add generated browser profile files for notebooklm development.

* Update `notebooklm_chrome_profile` internal state and add architectural context documentation and video asset.

* feat: Add various knowledge prototypes for MCP servers and universal automation, archive numerous scripts and documentation, and update local browser profile data.

* chore: Add generated browser profile cache and data for notebooklm.

* Update notebooklm Chrome profile preferences, cache, and session data.

* feat: Update NotebookLM Chrome profile with new cache, preferences, and service worker data.

* feat: Add generated Chrome profile cache and code cache files and update associated profile data.

* Update `notebooklm` Chrome profile cache, code cache, GPU cache, and safe browsing data.

* chore(deps): bump the npm_and_yarn group across 4 directories with 5 updates

Bumps the npm_and_yarn group with 3 updates in the / directory: [ajv](https://github.com/ajv-validator/ajv), [hono](https://github.com/honojs/hono) and [qs](https://github.com/ljharb/qs).
Bumps the npm_and_yarn group with 3 updates in the /docs/knowledge_prototypes/mcp-servers/fetch-mcp directory: [@modelcontextprotocol/sdk](https://github.com/modelcontextprotocol/typescript-sdk), [ajv](https://github.com/ajv-validator/ajv) and [hono](https://github.com/honojs/hono).
Bumps the npm_and_yarn group with 1 update in the /scripts/archive/software-on-demand directory: [ajv](https://github.com/ajv-validator/ajv).
Bumps the npm_and_yarn group with 2 updates in the /scripts/archive/supabase_cleanup directory: [next](https://github.com/vercel/next.js) and [qs](https://github.com/ljharb/qs).


Updates `ajv` from 8.17.1 to 8.18.0
- [Release notes](https://github.com/ajv-validator/ajv/releases)
- [Commits](ajv-validator/ajv@v8.17.1...v8.18.0)

Updates `hono` from 4.11.7 to 4.12.1
- [Release notes](https://github.com/honojs/hono/releases)
- [Commits](honojs/hono@v4.11.7...v4.12.1)

Updates `qs` from 6.14.1 to 6.15.0
- [Changelog](https://github.com/ljharb/qs/blob/main/CHANGELOG.md)
- [Commits](ljharb/qs@v6.14.1...v6.15.0)

Updates `@modelcontextprotocol/sdk` from 1.25.2 to 1.26.0
- [Release notes](https://github.com/modelcontextprotocol/typescript-sdk/releases)
- [Commits](modelcontextprotocol/typescript-sdk@v1.25.2...v1.26.0)

Updates `ajv` from 8.17.1 to 8.18.0
- [Release notes](https://github.com/ajv-validator/ajv/releases)
- [Commits](ajv-validator/ajv@v8.17.1...v8.18.0)

Updates `hono` from 4.11.5 to 4.12.1
- [Release notes](https://github.com/honojs/hono/releases)
- [Commits](honojs/hono@v4.11.7...v4.12.1)

Updates `qs` from 6.14.1 to 6.15.0
- [Changelog](https://github.com/ljharb/qs/blob/main/CHANGELOG.md)
- [Commits](ljharb/qs@v6.14.1...v6.15.0)

Updates `ajv` from 8.17.1 to 8.18.0
- [Release notes](https://github.com/ajv-validator/ajv/releases)
- [Commits](ajv-validator/ajv@v8.17.1...v8.18.0)

Updates `next` from 15.4.10 to 15.5.10
- [Release notes](https://github.com/vercel/next.js/releases)
- [Changelog](https://github.com/vercel/next.js/blob/canary/release.js)
- [Commits](vercel/next.js@v15.4.10...v15.5.10)

Updates `qs` from 6.14.1 to 6.15.0
- [Changelog](https://github.com/ljharb/qs/blob/main/CHANGELOG.md)
- [Commits](ljharb/qs@v6.14.1...v6.15.0)

---
updated-dependencies:
- dependency-name: ajv
  dependency-version: 8.18.0
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: hono
  dependency-version: 4.12.1
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: qs
  dependency-version: 6.15.0
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: "@modelcontextprotocol/sdk"
  dependency-version: 1.26.0
  dependency-type: direct:production
  dependency-group: npm_and_yarn
- dependency-name: ajv
  dependency-version: 8.18.0
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: hono
  dependency-version: 4.12.1
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: qs
  dependency-version: 6.15.0
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: ajv
  dependency-version: 8.18.0
  dependency-type: direct:production
  dependency-group: npm_and_yarn
- dependency-name: next
  dependency-version: 15.5.10
  dependency-type: direct:production
  dependency-group: npm_and_yarn
- dependency-name: qs
  dependency-version: 6.15.0
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>

* chore(deps): bump minimatch

Bumps the npm_and_yarn group with 1 update in the /scripts/archive/supabase_cleanup directory: [minimatch](https://github.com/isaacs/minimatch).


Updates `minimatch` from 3.1.2 to 3.1.4
- [Changelog](https://github.com/isaacs/minimatch/blob/main/changelog.md)
- [Commits](isaacs/minimatch@v3.1.2...v3.1.4)

---
updated-dependencies:
- dependency-name: minimatch
  dependency-version: 3.1.4
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>

* chore(deps): bump the npm_and_yarn group across 2 directories with 1 update

Bumps the npm_and_yarn group with 1 update in the / directory: [hono](https://github.com/honojs/hono).
Bumps the npm_and_yarn group with 1 update in the /docs/knowledge_prototypes/mcp-servers/fetch-mcp directory: [hono](https://github.com/honojs/hono).


Updates `hono` from 4.12.1 to 4.12.2
- [Release notes](https://github.com/honojs/hono/releases)
- [Commits](honojs/hono@v4.12.1...v4.12.2)

Updates `hono` from 4.12.1 to 4.12.2
- [Release notes](https://github.com/honojs/hono/releases)
- [Commits](honojs/hono@v4.12.1...v4.12.2)

---
updated-dependencies:
- dependency-name: hono
  dependency-version: 4.12.2
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: hono
  dependency-version: 4.12.2
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>

* feat: enable frontend-only video ingestion pipeline for Vercel deployment

The core pipeline previously required the Python backend to be running.
When deployed to Vercel (https://v0-uvai.vercel.app/), the backend is
unavailable, causing all video analysis to fail immediately.

Changes:
- /api/video: Falls back to frontend-only pipeline (transcribe + extract)
  when the Python backend is unreachable, with 15s timeout
- /api/transcribe: Adds Gemini fallback when OpenAI is unavailable, plus
  8s timeout on backend probe to avoid hanging on Vercel
- layout.tsx: Loads Google Fonts via <link> instead of next/font/google
  to avoid build failures in offline/sandboxed CI environments
- page.tsx: Replace example URLs with technical content (3Blue1Brown
  neural networks, Karpathy LLM intro) instead of rick roll / zoo videos
- gemini_service.py: Gate Vertex AI import behind GOOGLE_CLOUD_PROJECT
  env var to prevent 30s+ hangs on the GCE metadata probe
- agent_gap_analyzer.py: Fix f-string backslash syntax errors (Python 3.11)

https://claude.ai/code/session_015Pd3a6hinTenCNrPRGiZqE

* Potential fix for code scanning alert no. 4518: Server-side request forgery

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* Initial plan

* Potential fix for code scanning alert no. 4517: Server-side request forgery

Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>

* Initial plan

* Fix review feedback: timeout cleanup, transcript_segments shape, ENABLE_VERTEX_AI boolean parsing

Co-authored-by: groupthinking <154503486+groupthinking@users.noreply.github.com>

* fix: clearTimeout in finally blocks, transcript_segments shape, ENABLE_VERTEX_AI boolean parsing

Co-authored-by: groupthinking <154503486+groupthinking@users.noreply.github.com>

* Update src/youtube_extension/services/ai/gemini_service.py

Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>

* Update apps/web/src/app/api/video/route.ts

Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>

* Update apps/web/src/app/api/video/route.ts

Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>

* Initial plan

* Initial plan

* Fix: move clearTimeout into .finally() to prevent timer leaks on fetch abort/error

Co-authored-by: groupthinking <154503486+groupthinking@users.noreply.github.com>

* Fix clearTimeout not called in finally blocks for AbortController timeouts

Co-authored-by: groupthinking <154503486+groupthinking@users.noreply.github.com>

* Fix: Relative URLs in server-side fetch calls fail in production - fetch('/api/transcribe') and fetch('/api/extract-events') use relative URLs which don't resolve correctly in server-side Next.js code on production deployments like Vercel.

This commit fixes the issue reported at apps/web/src/app/api/video/route.ts:101

## Bug Analysis

**Why it happens:**
In Next.js API routes running on the server (Node.js runtime), the `fetch()` API requires absolute URLs. Unlike browsers which have an implicit base URL (the current origin), server-side code has no context for resolving relative URLs like `/api/transcribe`. The Node.js fetch implementation will fail to resolve these relative paths, resulting in TypeError or connection errors.

**When it manifests:**
- **Development (localhost:3000)**: Works accidentally because the request URL contains the host
- **Production (Vercel)**: Fails because the relative URL cannot be resolved to a valid absolute URL without proper host context

**What impact it has:**
The frontend-only pipeline fallback (Strategy 2) in lines 101-132 is completely broken in production. When the backend is unavailable (common on Vercel), the code attempts to use `/api/transcribe` and `/api/extract-events` serverless functions but fails due to unresolvable relative URLs. This causes the entire video analysis endpoint to fail when the backend is unavailable.

## Fix Explanation

**Changes made:**
1. Added a `getBaseUrl(request: Request)` helper function that extracts the absolute base URL from the incoming request object using `new URL(request.url)`
2. Updated line 108: `fetch('/api/transcribe', ...)` → `fetch(`${baseUrl}/api/transcribe`, ...)`
3. Updated line 127: `fetch('/api/extract-events', ...)` → `fetch(`${baseUrl}/api/extract-events`, ...)`

**Why it solves the issue:**
- The incoming `request` object contains the full URL including protocol and host
- By constructing an absolute URL from the request, we ensure the fetch calls work in both development and production
- This approach is more reliable than environment variables because it uses the actual request context, handling reverse proxies and different deployment configurations correctly

Co-authored-by: Vercel <vercel[bot]@users.noreply.github.com>
Co-authored-by: groupthinking <garveyht@gmail.com>

* Initial plan

* chore(deps): bump the npm_and_yarn group across 1 directory with 1 update

Bumps the npm_and_yarn group with 1 update in the /docs/knowledge_prototypes/mcp-servers/fetch-mcp directory: [minimatch](https://github.com/isaacs/minimatch).


Updates `minimatch` from 3.1.2 to 3.1.5
- [Changelog](https://github.com/isaacs/minimatch/blob/main/changelog.md)
- [Commits](isaacs/minimatch@v3.1.2...v3.1.5)

Updates `minimatch` from 5.1.6 to 5.1.9
- [Changelog](https://github.com/isaacs/minimatch/blob/main/changelog.md)
- [Commits](isaacs/minimatch@v3.1.2...v3.1.5)

---
updated-dependencies:
- dependency-name: minimatch
  dependency-version: 3.1.5
  dependency-type: indirect
  dependency-group: npm_and_yarn
- dependency-name: minimatch
  dependency-version: 5.1.9
  dependency-type: indirect
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>

* fix: validate BACKEND_URL before using it

Skip backend calls entirely when BACKEND_URL is not configured or
contains an invalid value (like a literal ${...} template string).
This prevents URL parse errors on Vercel where the env var may not
be set.

https://claude.ai/code/session_015Pd3a6hinTenCNrPRGiZqE

* fix: resolve embeddings package build errors (#41)

- Create stub types for Firebase Data Connect SDK in src/dataconnect-generated/
- Fix import path from ../dataconnect-generated to ./dataconnect-generated (rootDir constraint)
- Add explicit type assertions for JSON responses (predictions, access_token)
- All 6 TypeScript errors resolved, clean build verified

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: Gemini SDK upgrade + VideoPack schema alignment (#43)

* chore: Update generated Chrome profile cache and session data for notebooklm.

* chore: refresh notebooklm Chrome profile data, including Safe Browsing lists, caches, and session files.

* Update local application cache and database files within the NotebookLM Chrome profile.

* chore: update Chrome profile cache and Safe Browsing data files.

* feat: upgrade Gemini to @google/genai SDK with structured output, search grounding, video URL processing, and extend VideoPack schema

- Upgrade extract-events/route.ts from @google/generative-ai to @google/genai
- Add Gemini responseSchema with Type system for structured output enforcement
- Add Google Search grounding (googleSearch tool) to Gemini calls
- Upgrade transcribe/route.ts to @google/genai with direct YouTube URL processing via fileData
- Add Gemini video URL fallback chain: direct video → text+search → other strategies
- Extend VideoPackV0 schema with Chapter, CodeCue, Task models
- Update versioning shim for new fields
- Export new types from videopack __init__

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: wire CloudEvents pipeline + Chrome Built-in AI fallback (#44)

- Add TypeScript CloudEvents publisher (apps/web/src/lib/cloudevents.ts)
  emitting standardized events at each video processing stage
- Wire CloudEvents into /api/video route (both backend + frontend strategies)
- Wire CloudEvents into FastAPI backend router (process_video_v1 endpoint)
- Add Chrome Built-in AI service (Prompt API + Summarizer API) for
  on-device client-side transcript analysis when API keys are unavailable
- Add useBuiltInAI React hook for component integration
- Add .next/ to .gitignore

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: wire A2A inter-agent messaging into orchestrator + API (#45)

- Add A2AContextMessage dataclass to AgentOrchestrator for lightweight
  inter-agent context sharing during parallel task execution
- Auto-broadcast agent results to peer agents after parallel execution
- Add send_a2a_message() and get_a2a_log() methods to orchestrator
- Add POST /api/v1/agents/a2a/send endpoint for frontend-to-agent messaging
- Add GET /api/v1/agents/a2a/log endpoint to query message history
- Extend frontend agentService with sendA2AMessage() and getA2ALog()

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: add LiteRT-LM setup script and update README (#46)

- Add setup.sh to download lit CLI binary and .litertlm model
- Support macOS arm64 and x86_64 architectures
- Auto-generate .env with LIT_BINARY_PATH and LIT_MODEL_PATH
- Add .gitignore for bin/, models/, .env
- Update README with Quick Setup section

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: implement Gemini agentic video analysis with Google Search grounding (#47)

- Create gemini-video-analyzer.ts: single Gemini call with googleSearch
  tool for transcript extraction AND event analysis (PK=998 pattern)
- Add youtube-metadata.ts: scrapes title, description, chapters from
  YouTube without API key
- Update /api/video: Gemini agentic analysis as primary strategy,
  transcribe→extract chain as fallback
- Fix /api/transcribe: remove broken fileData.fileUri, use Gemini
  Google Search grounding as primary, add metadata context, filter
  garbage OpenAI results
- Fix /api/extract-events: accept videoUrl without requiring transcript,
  direct Gemini analysis via Google Search when no transcript available

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: support Vertex_AI_API_KEY as Gemini key fallback

Create shared gemini-client.ts that resolves API key from:
GEMINI_API_KEY → GOOGLE_API_KEY → Vertex_AI_API_KEY

All API routes now use the shared client instead of
hardcoding process.env.GEMINI_API_KEY.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: use Vertex AI Express Mode for Vertex_AI_API_KEY

When only Vertex_AI_API_KEY is set (no GEMINI_API_KEY), the client
now initializes in Vertex AI mode with vertexai: true + apiKey.
Uses project uvai-730bb and us-central1 as defaults.

Also added GOOGLE_CLOUD_PROJECT env var to Vercel.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: Vertex AI Express Mode compatibility — remove responseSchema+googleSearch conflict (#48)

Vertex AI does not support controlled generation (responseSchema) combined
with the googleSearch tool. This caused 400 errors on every Gemini call.

Changes:
- gemini-client.ts: Prioritize Vertex_AI_API_KEY, support GOOGLE_GENAI_USE_VERTEXAI env var
- gemini-video-analyzer.ts: Remove responseSchema, enforce JSON via prompt instructions
- extract-events/route.ts: Same fix for extractWithGemini and inline Gemini calls
- Strip markdown code fences from responses before JSON parsing

Tested end-to-end with Vertex AI Express Mode key against multiple YouTube videos.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: restore full PK=998 pattern — responseSchema + googleSearch + gemini-3-pro-preview (#49)

The previous fix (PR #48) was a shortcut — it removed responseSchema when
the real issue was using gemini-2.5-flash which doesn't support
responseSchema + googleSearch together on Vertex AI.

gemini-3-pro-preview DOES support the combination. This commit restores
the exact PK=998 pattern:

- gemini-video-analyzer.ts: Restored responseSchema with Type system,
  responseMimeType, e22Snippets field, model → gemini-3-pro-preview
- extract-events/route.ts: Restored geminiResponseSchema, Type import,
  responseMimeType, model → gemini-3-pro-preview
- transcribe/route.ts: model → gemini-3-pro-preview

Tested with Vertex AI Express Mode key on two YouTube videos.
Both return structured JSON with events, transcript, actions,
codeMapping, cloudService, e22Snippets, architectureCode, ingestScript.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* feat: end-to-end pipeline — YouTube URL to deployed software (#50)

- Add /api/pipeline route for full end-to-end pipeline
  (video analysis → code generation → GitHub repo → Vercel deploy)
- Add deployPipeline() action to dashboard store with stage tracking
- Add 🚀 Deploy button to dashboard alongside Analyze
- Show pipeline results (live URL, GitHub repo, framework) in video cards
- Fix deployment_manager import path in video_processing_service
- Wire pipeline to backend /api/v1/video-to-software endpoint
- Fallback to Gemini-only analysis when no backend available

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: add writable directories to Docker image for deployment pipeline

Create /app/generated_projects, /app/youtube_processed_videos, and
/tmp/uvai_data directories in Dockerfile to fix permission denied
errors in the deployment and video processing pipeline on Railway.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: security hardening, video-specific codegen, API consistency

- CORS: replace wildcard/glob with explicit allowed origins in both entry points
- Rate limiting: enable 60 req/min with 15 burst on backend
- API auth: add optional X-API-Key middleware for pipeline endpoints
- Codegen: generate video-specific HTML/CSS/JS from analysis output
- API: accept both 'url' and 'video_url' via Pydantic alias
- Deploy: fix Vercel REST API payload format (gitSource instead of gitRepository)

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: Vercel deployment returning empty live_url

Root causes fixed:
- Case mismatch in _poll_deployment_status: compared lowercased status
  against uppercase success_statuses list, so READY was never matched
- Vercel API returns bare domain URLs without https:// prefix; added
  _ensure_https() to normalize them
- Poll requests were missing auth headers, causing 401 failures
- _deploy_files_directly fallback returned fake simulated URLs that
  masked real failures; removed in favor of proper error reporting
- _generate_deployment_urls only returned URLs from 'success' status
  deployments, discarding useful fallback URLs from failed deployments

Improvements:
- On API failure (permissions, plan limits), return a Vercel import URL
  the user can click to deploy manually instead of an empty string
- Support VERCEL_ORG_ID team scoping on deploy and poll endpoints
- Use readyState field (Vercel v13 API) for initial status check
- Add 'canceled' to failure status list in poll loop
- Poll failures are now non-fatal; initial URL is used as fallback

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: harden slim entry point — CORS, rate limiting, auth, security headers

- Add uvaiio.vercel.app to CORS allowed origins
- Add slowapi rate limiting (60 req/min)
- Add API key auth middleware (optional via EVENTRELAY_API_KEY)
- Add security headers (X-Content-Type-Options, X-Frame-Options, X-XSS-Protection)
- Fixes production gap where slim main.py had none of the backend/main.py protections

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* fix: resolve Pydantic Config/model_config conflict breaking Railway deploy

The VideoToSoftwareRequest model had both 'model_config = ConfigDict(...)' and
'class Config:' which Pydantic v2 rejects. Merged into single model_config.
This was causing the v1 router to fail loading, making /api/v1/health return 404.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>
Co-authored-by: Vercel <vercel[bot]@users.noreply.github.com>
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants