Merged · 71 commits
- 5b5306d · feat: Initialize PGLite v17 database data files for the dataconnect p… (groupthinking, Feb 13, 2026)
- 3ed838c · feat: enable automatic outline generation for Gemini Code Assist in V… (groupthinking, Feb 14, 2026)
- a587381 · feat: Add NotebookLM integration with a new processor and `analyze_vi… (groupthinking, Feb 14, 2026)
- e352876 · feat: Add NotebookLM profile data and an ingestion test. (groupthinking, Feb 15, 2026)
- 22b5088 · chore: Update and add generated browser profile files for notebooklm … (groupthinking, Feb 15, 2026)
- a285dfd · Update `notebooklm_chrome_profile` internal state and add architectur… (groupthinking, Feb 15, 2026)
- ce78244 · feat: Add various knowledge prototypes for MCP servers and universal … (groupthinking, Feb 16, 2026)
- f58d954 · chore: Add generated browser profile cache and data for notebooklm. (groupthinking, Feb 16, 2026)
- abbaa43 · Update notebooklm Chrome profile preferences, cache, and session data. (groupthinking, Feb 16, 2026)
- b19d73c · feat: Update NotebookLM Chrome profile with new cache, preferences, a… (groupthinking, Feb 16, 2026)
- c23818d · feat: Add generated Chrome profile cache and code cache files and upd… (groupthinking, Feb 16, 2026)
- 534edc3 · Update `notebooklm` Chrome profile cache, code cache, GPU cache, and … (groupthinking, Feb 16, 2026)
- b3a309c · chore(deps): bump the npm_and_yarn group across 4 directories with 5 … (dependabot[bot], Feb 22, 2026)
- 50394be · Merge pull request #30 from groupthinking/dependabot/npm_and_yarn/npm… (groupthinking, Feb 25, 2026)
- 38df181 · chore(deps): bump minimatch (dependabot[bot], Feb 25, 2026)
- 094c9cf · Merge pull request #31 from groupthinking/dependabot/npm_and_yarn/scr… (groupthinking, Feb 25, 2026)
- 02d2cb8 · chore(deps): bump the npm_and_yarn group across 2 directories with 1 … (dependabot[bot], Feb 25, 2026)
- db6541f · feat: enable frontend-only video ingestion pipeline for Vercel deploy… (claude, Feb 27, 2026)
- c095d73 · Potential fix for code scanning alert no. 4518: Server-side request f… (groupthinking, Feb 27, 2026)
- b0629d7 · Initial plan (Copilot, Feb 27, 2026)
- 410ce1a · Potential fix for code scanning alert no. 4517: Server-side request f… (groupthinking, Feb 27, 2026)
- e3cf036 · Initial plan (Copilot, Feb 27, 2026)
- bb808ff · Fix review feedback: timeout cleanup, transcript_segments shape, ENAB… (Copilot, Feb 27, 2026)
- a9c7665 · fix: clearTimeout in finally blocks, transcript_segments shape, ENABL… (Copilot, Feb 27, 2026)
- 2288b6c · Update src/youtube_extension/services/ai/gemini_service.py (groupthinking, Feb 27, 2026)
- ff6d5e4 · Update apps/web/src/app/api/video/route.ts (groupthinking, Feb 27, 2026)
- 8792227 · Update apps/web/src/app/api/video/route.ts (groupthinking, Feb 27, 2026)
- 1ec87f3 · Initial plan (Copilot, Feb 27, 2026)
- 86a4208 · Initial plan (Copilot, Feb 27, 2026)
- d59ce5d · Merge branch 'claude/slack-check-status-update-R47Ph' into copilot/su… (groupthinking, Feb 27, 2026)
- 0b66974 · Fix: move clearTimeout into .finally() to prevent timer leaks on fetc… (Copilot, Feb 27, 2026)
- 8847930 · Fix clearTimeout not called in finally blocks for AbortController tim… (Copilot, Feb 27, 2026)
- 91e1c84 · Merge branch 'claude/slack-check-status-update-R47Ph' into copilot/su… (groupthinking, Feb 27, 2026)
- f925e30 · Fix: Relative URLs in server-side fetch calls fail in production - fe… (vercel[bot], Feb 27, 2026)
- 3be0d81 · Initial plan (Copilot, Feb 27, 2026)
- 8e6ae78 · Merge branch 'claude/slack-check-status-update-R47Ph' into copilot/su… (groupthinking, Feb 27, 2026)
- 2cd9b5b · Merge pull request #32 from groupthinking/dependabot/npm_and_yarn/npm… (groupthinking, Feb 27, 2026)
- e95ce42 · Merge branch 'claude/slack-check-status-update-R47Ph' into copilot/su… (groupthinking, Feb 27, 2026)
- ebf0327 · Merge pull request #37 from groupthinking/copilot/sub-pr-33-yet-again (groupthinking, Feb 27, 2026)
- ff05ed3 · chore(deps): bump the npm_and_yarn group across 1 directory with 1 up… (dependabot[bot], Feb 27, 2026)
- 36ebbf6 · Merge branch 'claude/slack-check-status-update-R47Ph' into copilot/su… (groupthinking, Feb 27, 2026)
- d95c7f7 · Merge branch 'main' into claude/slack-check-status-update-R47Ph (groupthinking, Feb 27, 2026)
- 2296f5e · Merge pull request #33 from groupthinking/claude/slack-check-status-u… (groupthinking, Feb 27, 2026)
- a148b3c · Merge branch 'main' into copilot/sub-pr-33-one-more-time (groupthinking, Feb 27, 2026)
- ed013e1 · Merge pull request #38 from groupthinking/copilot/sub-pr-33-one-more-… (groupthinking, Feb 27, 2026)
- 0995d78 · Merge pull request #39 from groupthinking/dependabot/npm_and_yarn/doc… (groupthinking, Feb 27, 2026)
- 2b9c1f4 · Merge branch 'main' into copilot/sub-pr-33-another-one (groupthinking, Feb 27, 2026)
- 6ecc4c3 · Merge pull request #36 from groupthinking/copilot/sub-pr-33-another-one (groupthinking, Feb 27, 2026)
- 279c7fd · Merge branch 'main' into copilot/sub-pr-33-again (groupthinking, Feb 27, 2026)
- 4a32a44 · Merge pull request #35 from groupthinking/copilot/sub-pr-33-again (groupthinking, Feb 27, 2026)
- 8b96dce · Merge branch 'main' into copilot/sub-pr-33 (groupthinking, Feb 27, 2026)
- ee6fc60 · Merge pull request #34 from groupthinking/copilot/sub-pr-33 (groupthinking, Feb 27, 2026)
- d9e2fc6 · fix: validate BACKEND_URL before using it (claude, Feb 28, 2026)
- 5640a32 · Merge branch 'main' into claude/slack-check-status-update-R47Ph (groupthinking, Feb 28, 2026)
- 0fd9e90 · Merge pull request #40 from groupthinking/claude/slack-check-status-u… (groupthinking, Feb 28, 2026)
- 12915db · fix: resolve embeddings package build errors (#41) (groupthinking, Feb 28, 2026)
- 456954b · feat: Gemini SDK upgrade + VideoPack schema alignment (#43) (groupthinking, Feb 28, 2026)
- a2075c3 · feat: wire CloudEvents pipeline + Chrome Built-in AI fallback (#44) (groupthinking, Feb 28, 2026)
- ee035ff · feat: wire A2A inter-agent messaging into orchestrator + API (#45) (groupthinking, Feb 28, 2026)
- 50dc924 · feat: add LiteRT-LM setup script and update README (#46) (groupthinking, Feb 28, 2026)
- d0cacd1 · feat: implement Gemini agentic video analysis with Google Search grou… (groupthinking, Feb 28, 2026)
- 98de680 · fix: support Vertex_AI_API_KEY as Gemini key fallback (groupthinking, Feb 28, 2026)
- f170b18 · fix: use Vertex AI Express Mode for Vertex_AI_API_KEY (groupthinking, Feb 28, 2026)
- 924afa2 · fix: Vertex AI Express Mode compatibility — remove responseSchema+goo… (groupthinking, Feb 28, 2026)
- f38b34c · fix: restore full PK=998 pattern — responseSchema + googleSearch + ge… (groupthinking, Feb 28, 2026)
- fb65641 · feat: end-to-end pipeline — YouTube URL to deployed software (#50) (groupthinking, Mar 1, 2026)
- f8a970a · fix: add writable directories to Docker image for deployment pipeline (groupthinking, Mar 1, 2026)
- c1eea83 · fix: security hardening, video-specific codegen, API consistency (groupthinking, Mar 2, 2026)
- 7ddd93b · fix: Vercel deployment returning empty live_url (groupthinking, Mar 2, 2026)
- 6a7ff52 · fix: harden slim entry point — CORS, rate limiting, auth, security he… (groupthinking, Mar 2, 2026)
- aef4d51 · fix: resolve Pydantic Config/model_config conflict breaking Railway d… (groupthinking, Mar 2, 2026)
(Diff too large to display in full; only the first 3000 changed files are loaded.)
1 change: 1 addition & 0 deletions .firebase/.graphqlrc
@@ -0,0 +1 @@
{"schema":["../dataconnect/.dataconnect/**/*.gql","../dataconnect/schema/**/*.gql"],"document":["../dataconnect/example/**/*.gql"]}
7 changes: 7 additions & 0 deletions .firebaserc
@@ -0,0 +1,7 @@
{
"projects": {
"default": "uvai-730bb"
},
"targets": {},
"etags": {}
}
1 change: 1 addition & 0 deletions .gitignore
@@ -155,3 +155,4 @@ UVAI_Digital_Refinery_Blueprint.pdf
*.db
.vercel
.env*.local
.next/
3 changes: 2 additions & 1 deletion .vscode/settings.json
@@ -81,5 +81,6 @@
"dart.enableCompletionCommitCharacters": true,
"geminicodeassist.codeGenerationPaneViewEnabled": true,
"geminicodeassist.inlineSuggestions.nextEditPredictions": true,
"geminicodeassist.inlineSuggestions.suggestionSpeed": "Fast"
"geminicodeassist.inlineSuggestions.suggestionSpeed": "Fast",
"geminicodeassist.outlines.automaticOutlineGeneration": true
}
4 changes: 2 additions & 2 deletions Dockerfile
@@ -49,8 +49,8 @@ COPY --chown=uvai:uvai src/ ./src/
COPY --chown=uvai:uvai pyproject.toml ./

# Create data directories
RUN mkdir -p /app/data/enhanced_analysis /app/data/cache /app/logs && \
chown -R uvai:uvai /app/data /app/logs
RUN mkdir -p /app/data/enhanced_analysis /app/data/cache /app/logs /app/generated_projects /app/youtube_processed_videos /tmp/uvai_data && \
chown -R uvai:uvai /app/data /app/logs /app/generated_projects /app/youtube_processed_videos /tmp/uvai_data

# Switch to non-root user
USER uvai
1 change: 1 addition & 0 deletions apps/web/package.json
@@ -9,6 +9,7 @@
"lint": "next lint"
},
"dependencies": {
"@google/genai": "^1.43.0",
"@google/generative-ai": "^0.24.1",
"@stripe/stripe-js": "^2.0.0",
"@supabase/supabase-js": "^2.39.0",
137 changes: 103 additions & 34 deletions apps/web/src/app/api/extract-events/route.ts
@@ -1,20 +1,15 @@
import OpenAI from 'openai';
import { GoogleGenerativeAI } from '@google/generative-ai';
import { Type } from '@google/genai';
import { NextResponse } from 'next/server';
import { getGeminiClient, hasGeminiKey } from '@/lib/gemini-client';

let _openai: OpenAI | null = null;
function getOpenAI() {
if (!_openai) _openai = new OpenAI();
return _openai;
}

let _gemini: GoogleGenerativeAI | null = null;
function getGemini() {
if (!_gemini) _gemini = new GoogleGenerativeAI(process.env.GEMINI_API_KEY || '');
return _gemini;
}

// JSON Schema for structured extraction via Responses API
// JSON Schema for structured extraction via OpenAI Responses API
const extractionSchema = {
type: 'object' as const,
properties: {
@@ -54,6 +49,43 @@ const extractionSchema = {
additionalProperties: false,
};

// Gemini responseSchema using @google/genai Type system
const geminiResponseSchema = {
type: Type.OBJECT,
properties: {
events: {
type: Type.ARRAY,
items: {
type: Type.OBJECT,
properties: {
type: { type: Type.STRING, enum: ['action', 'topic', 'insight', 'tool', 'resource'] },
title: { type: Type.STRING },
description: { type: Type.STRING },
timestamp: { type: Type.STRING, nullable: true },
priority: { type: Type.STRING, enum: ['high', 'medium', 'low'] },
},
required: ['type', 'title', 'description', 'priority'],
},
},
actions: {
type: Type.ARRAY,
items: {
type: Type.OBJECT,
properties: {
title: { type: Type.STRING },
description: { type: Type.STRING },
category: { type: Type.STRING, enum: ['setup', 'build', 'deploy', 'learn', 'research', 'configure'] },
estimatedMinutes: { type: Type.NUMBER, nullable: true },
},
required: ['title', 'description', 'category'],
},
},
summary: { type: Type.STRING },
topics: { type: Type.ARRAY, items: { type: Type.STRING } },
},
required: ['events', 'actions', 'summary', 'topics'],
};

const SYSTEM_PROMPT = `You are an expert content analyst. Extract structured data from video transcripts.
Be specific and practical — no vague or generic items.
For events: classify type (action/topic/insight/tool/resource) and priority (high/medium/low).
@@ -94,54 +126,91 @@ async function extractWithOpenAI(trimmed: string, videoTitle?: string, videoUrl?
}

async function extractWithGemini(trimmed: string, videoTitle?: string, videoUrl?: string) {
const model = getGemini().getGenerativeModel({
model: 'gemini-2.0-flash',
generationConfig: {
responseMimeType: 'application/json',
const ai = getGeminiClient();
const response = await ai.models.generateContent({
model: 'gemini-3-pro-preview',
contents: `${SYSTEM_PROMPT}\n\n${buildUserPrompt(trimmed, videoTitle, videoUrl)}`,
config: {
temperature: 0.3,
responseMimeType: 'application/json',
responseSchema: geminiResponseSchema,
tools: [{ googleSearch: {} }],
},
});
const result = await model.generateContent(`${SYSTEM_PROMPT}\n\n${buildUserPrompt(trimmed, videoTitle, videoUrl)}`);
const text = result.response.text();
const text = response.text ?? '';
return JSON.parse(text);
Comment on lines +140 to 141 (Contributor, critical):
This direct call to JSON.parse() is unsafe. If response.text from the Gemini API is an empty string (which is possible if the model returns no content), this will throw an unhandled exception and cause the API route to crash with a 500 error. You should gracefully handle the case of an empty or invalid JSON string before parsing.
Suggested change:
  const text = response.text ?? '';
- return JSON.parse(text);
+ return text ? JSON.parse(text) : {};
}
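The reviewer's point about guarding `JSON.parse` generalizes into a small helper. A minimal sketch; the `safeJsonParse` name and fallback behavior are illustrative, not part of this PR:

```typescript
// Hypothetical helper: parse model output defensively instead of letting an
// empty or malformed string crash the route handler with a 500.
function safeJsonParse<T>(text: string, fallback: T): T {
  if (!text.trim()) return fallback; // empty model response
  try {
    return JSON.parse(text) as T;
  } catch {
    return fallback; // malformed JSON from the model
  }
}

// Empty and malformed inputs fall back instead of throwing:
const empty = safeJsonParse('', { events: [] as number[] });
const bad = safeJsonParse('not json', { events: [] as number[] });
const ok = safeJsonParse('{"events":[1]}', { events: [] as number[] });
```

The route could then call `safeJsonParse(response.text ?? '', {...})` and decide at the call site whether an empty result should trigger the next fallback provider.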

export async function POST(request: Request) {
try {
const { transcript, videoTitle, videoUrl } = await request.json();

if (!transcript || typeof transcript !== 'string') {
// Accept either transcript text OR videoUrl for direct Gemini analysis
if ((!transcript || typeof transcript !== 'string') && !videoUrl) {
return NextResponse.json(
{ error: 'transcript (string) is required' },
{ error: 'transcript (string) or videoUrl is required' },
{ status: 400 }
);
}

const trimmed = transcript.slice(0, 8000);
let parsed;
let provider = 'openai';

// Try OpenAI first, fall back to Gemini on quota/auth errors
if (process.env.OPENAI_API_KEY) {
try {
parsed = await extractWithOpenAI(trimmed, videoTitle, videoUrl);
} catch (err) {
const msg = err instanceof Error ? err.message : '';
if ((msg.includes('429') || msg.includes('quota') || msg.includes('rate')) && process.env.GEMINI_API_KEY) {
console.warn('OpenAI quota hit, falling back to Gemini');
parsed = await extractWithGemini(trimmed, videoTitle, videoUrl);
provider = 'gemini';
} else {
throw err;
// If we have transcript text, use the existing extraction logic
if (transcript && typeof transcript === 'string' && transcript.length > 50) {
const trimmed = transcript.slice(0, 8000);

if (process.env.OPENAI_API_KEY) {
try {
parsed = await extractWithOpenAI(trimmed, videoTitle, videoUrl);
} catch (err) {
const msg = err instanceof Error ? err.message : '';
if ((msg.includes('429') || msg.includes('quota') || msg.includes('rate')) && hasGeminiKey()) {
console.warn('OpenAI quota hit, falling back to Gemini');
parsed = await extractWithGemini(trimmed, videoTitle, videoUrl);
provider = 'gemini';
} else {
throw err;
}
}
} else if (hasGeminiKey()) {
parsed = await extractWithGemini(trimmed, videoTitle, videoUrl);
provider = 'gemini';
}
} else if (process.env.GEMINI_API_KEY) {
parsed = await extractWithGemini(trimmed, videoTitle, videoUrl);
provider = 'gemini';
} else {
}

// If no transcript but have videoUrl + Gemini, do direct video analysis via Google Search
if (!parsed && videoUrl && hasGeminiKey()) {
try {
const ai = getGeminiClient();
const response = await ai.models.generateContent({
model: 'gemini-3-pro-preview',
contents: `${SYSTEM_PROMPT}\n\nAnalyze this YouTube video and extract structured data.
Use your Google Search tool to find the video's transcript, description, and chapter content.

Video URL: ${videoUrl}
${videoTitle ? `Video Title: ${videoTitle}` : ''}

Extract events, actions, summary, and topics from the actual video content found via search.`,
config: {
temperature: 0.3,
responseMimeType: 'application/json',
responseSchema: geminiResponseSchema,
tools: [{ googleSearch: {} }],
},
});
const text = response.text ?? '';
parsed = JSON.parse(text);
provider = 'gemini-search';
} catch (e) {
console.warn('Gemini direct video extraction failed:', e);
}
}

if (!parsed) {
return NextResponse.json({
success: false,
error: 'No AI API key configured. Set OPENAI_API_KEY or GEMINI_API_KEY.',
error: 'No AI API key configured or all extraction attempts failed. Set GEMINI_API_KEY.',
Comment (Copilot AI, Mar 4, 2026):
This error message is misleading because extraction can succeed with OPENAI_API_KEY alone, and failure here may also reflect provider/runtime errors (not only missing Gemini). Consider revising the message to mention both keys (OPENAI_API_KEY/GEMINI_API_KEY) and/or distinguish between "no provider configured" vs "all providers failed" based on the environment and attempted paths.
Suggested change:
- error: 'No AI API key configured or all extraction attempts failed. Set GEMINI_API_KEY.',
+ error:
+   'No AI providers succeeded. Either no API keys are configured (missing OPENAI_API_KEY and/or GEMINI_API_KEY) or all extraction attempts failed at runtime.',
data: { events: [], actions: [], summary: '', topics: [] },
});
}
163 changes: 163 additions & 0 deletions apps/web/src/app/api/pipeline/route.ts
@@ -0,0 +1,163 @@
import { NextResponse } from 'next/server';
import { publishEvent, EventTypes } from '@/lib/cloudevents';
import { analyzeVideoWithGemini } from '@/lib/gemini-video-analyzer';
import { hasGeminiKey } from '@/lib/gemini-client';

const rawBackendUrl = process.env.BACKEND_URL || '';
const BACKEND_URL = rawBackendUrl.startsWith('http') ? rawBackendUrl : 'http://localhost:8000';
const BACKEND_AVAILABLE = rawBackendUrl.startsWith('http');

/**
* POST /api/pipeline
*
* End-to-end pipeline: YouTube URL → Video Analysis → Code Generation → Deployment → Live URL
*
* This is the FULL pipeline that the user's notes describe (PK=999, PK=1021):
* Ingest → Translate → Transport → Execute
*
* Strategies:
* 1. Backend pipeline (FastAPI /api/v1/video-to-software) — full pipeline with agents
* 2. Gemini analysis + frontend deployment — when no backend is available
*/
export async function POST(request: Request) {
let videoUrl: string | undefined;
try {
const body = await request.json();
const { url, project_type = 'web', deployment_target = 'vercel', features } = body;
videoUrl = url;

if (!url) {
return NextResponse.json({ error: 'Video URL is required' }, { status: 400 });
}

await publishEvent(EventTypes.VIDEO_RECEIVED, { url, pipeline: 'end-to-end' }, url);

// ── Strategy 1: Full backend pipeline (FastAPI video-to-software) ──
if (BACKEND_AVAILABLE) {
try {
const controller = new AbortController();
const timeout = setTimeout(() => controller.abort(), 300_000); // 5 min for full pipeline

let response: Response;
try {
response = await fetch(`${BACKEND_URL}/api/v1/video-to-software`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
video_url: url,
project_type,
deployment_target,
features: features || ['responsive_design', 'modern_ui'],
}),
signal: controller.signal,
});
} finally {
clearTimeout(timeout);
}

if (response.ok) {
const result = await response.json();

await publishEvent(EventTypes.PIPELINE_COMPLETED, {
strategy: 'backend-pipeline',
success: result.status === 'success',
live_url: result.live_url,
github_repo: result.github_repo,
build_status: result.build_status,
}, url);

return NextResponse.json({
id: `pipeline_${Date.now().toString(36)}`,
Comment (Contributor, medium):
Using Date.now().toString(36) to generate an ID is not guaranteed to be unique, especially if multiple requests are processed concurrently. This could lead to collisions and unexpected behavior. It's better to use a cryptographically secure random UUID to ensure uniqueness.
Suggested change:
- id: `pipeline_${Date.now().toString(36)}`,
+ id: `pipeline_${crypto.randomUUID()}`,

status: result.status || 'complete',
pipeline: 'backend',
processing_time: result.processing_time,
result: {
live_url: result.live_url,
github_repo: result.github_repo,
build_status: result.build_status,
video_analysis: result.video_analysis,
code_generation: result.code_generation,
deployment: result.deployment,
features_implemented: result.features_implemented,
},
});
}
console.warn(`Backend pipeline returned ${response.status}, falling back`);
} catch (e) {
console.log('Backend pipeline unavailable:', e);
}
}

// ── Strategy 2: Gemini analysis (video intelligence only, no deployment) ──
if (hasGeminiKey()) {
try {
const startTime = Date.now();
const analysis = await analyzeVideoWithGemini(url);
const elapsed = Date.now() - startTime;

await publishEvent(EventTypes.PIPELINE_COMPLETED, {
strategy: 'gemini-analysis-only',
success: true,
note: 'Backend unavailable — analysis only, no deployment',
}, url);

return NextResponse.json({
id: `pipeline_${Date.now().toString(36)}`,
Comment (Contributor, medium):
Same concern as the earlier ID-generation comment: Date.now().toString(36) is not guaranteed unique under concurrent requests; prefer a cryptographically secure random UUID.
Suggested change:
- id: `pipeline_${Date.now().toString(36)}`,
+ id: `pipeline_${crypto.randomUUID()}`,

status: 'partial',
pipeline: 'gemini-only',
processing_time: `${(elapsed / 1000).toFixed(1)}s`,
result: {
live_url: null,
github_repo: null,
build_status: 'not_attempted',
video_analysis: {
title: analysis.title,
summary: analysis.summary,
events: analysis.events,
actions: analysis.actions,
topics: analysis.topics,
architectureCode: analysis.architectureCode,
},
code_generation: null,
deployment: null,
message: 'Backend pipeline unavailable. Video analysis complete but code generation and deployment require the Python backend.',
},
});
} catch (e) {
console.error('Gemini analysis failed:', e);
}
}

return NextResponse.json(
{ error: 'No pipeline available. Configure BACKEND_URL for full pipeline or GEMINI_API_KEY for analysis only.' },
{ status: 503 },
);
} catch (error) {
console.error('Pipeline error:', error);
await publishEvent(EventTypes.PIPELINE_FAILED, { error: String(error) }, videoUrl).catch(() => {});
return NextResponse.json(
{ error: 'Pipeline failed', details: String(error) },
{ status: 500 },
);
}
}

export async function GET() {
return NextResponse.json({
name: 'EventRelay End-to-End Pipeline',
version: '1.0.0',
description: 'YouTube URL → Video Analysis → Code Generation → Deployment → Live URL',
pipeline_stages: [
'1. Ingest: Gemini analyzes video content with Google Search grounding',
'2. Translate: Structured output → VideoPack artifact',
'3. Transport: CloudEvents published at each stage',
'4. Execute: Agents generate code, create repo, deploy to Vercel',
],
backend_available: BACKEND_AVAILABLE,
gemini_available: hasGeminiKey(),
endpoints: {
pipeline: 'POST /api/pipeline - Full end-to-end pipeline',
video: 'POST /api/video - Video analysis only',
},
});
}
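Per the route handler above, `POST /api/pipeline` accepts a JSON body with `url` plus optional `project_type`, `deployment_target`, and `features`. A hedged client-side sketch of building that request; the `buildPipelineRequest` helper is illustrative, and the defaults mirror the ones the route falls back to:

```typescript
// Request body shape for POST /api/pipeline, as destructured by the handler.
interface PipelineRequest {
  url: string;
  project_type?: string;       // route default: 'web'
  deployment_target?: string;  // route default: 'vercel'
  features?: string[];         // route default: ['responsive_design', 'modern_ui']
}

// Hypothetical helper applying the same defaults the route would apply anyway,
// so the payload is explicit on the wire.
function buildPipelineRequest(
  url: string,
  overrides: Partial<Omit<PipelineRequest, 'url'>> = {},
): PipelineRequest {
  return {
    url,
    project_type: 'web',
    deployment_target: 'vercel',
    features: ['responsive_design', 'modern_ui'],
    ...overrides,
  };
}

const body = buildPipelineRequest('https://www.youtube.com/watch?v=dQw4w9WgXcQ');

// To invoke the pipeline (network call, shown for shape only):
// await fetch('/api/pipeline', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(body),
// });
```

A successful backend run returns `status`, `pipeline: 'backend'`, and a `result` with `live_url`; with only Gemini configured, the response is `status: 'partial'` with analysis but no deployment.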