KaibanJS AI App #48

Draft: wants to merge 4 commits into base branch `openai-chat`.
29 changes: 29 additions & 0 deletions examples/kaibanjs-ai-chat/.gitignore
@@ -0,0 +1,29 @@
# Dependencies
node_modules/
yarn-debug.log*
yarn-error.log*

# Environment variables
.env
.env.local
.env.*

# Build output
dist/
build/

# OS files
.DS_Store
Thumbs.db

# IDE and editor files
.idea/
.vscode/
*.swp
*.swo

# Astro
.astro/

# Netlify
.netlify/
1 change: 1 addition & 0 deletions examples/kaibanjs-ai-chat/.nvmrc
@@ -0,0 +1 @@
v20
63 changes: 63 additions & 0 deletions examples/kaibanjs-ai-chat/README.md
@@ -0,0 +1,63 @@
# AI Chat App

A simple AI chat application built with Astro, React, and KaibanJS (backed by OpenAI), deployed on Netlify.

## Features

- Real-time chat interface
- Responses generated by a KaibanJS agent backed by OpenAI
- Modern, responsive design
- Serverless architecture using Netlify Functions

## Prerequisites

- Node.js v20 (see `.nvmrc`)
- npm
- OpenAI API key
- Netlify account (for deployment)

## Setup

1. Clone the repository:

```bash
git clone <repository-url>
cd kaibanjs-ai-chat
```

2. Install dependencies:

```bash
npm install
```

3. Create a `.env` file in the root directory and add your OpenAI API key:
```
OPENAI_API_KEY=your_openai_api_key_here
```

## Development

To run the development server:

```bash
npm run dev
```

The application will be available at `http://localhost:4321`.

## Deployment

1. Push your code to GitHub

2. Connect your repository to Netlify

3. Configure the environment variable in Netlify:

- Add `OPENAI_API_KEY` in your Netlify environment variables

4. Deploy! Netlify will automatically build and deploy your application.
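A missing `OPENAI_API_KEY` tends to surface as a confusing downstream error. A small fail-fast check on the server side can make this obvious; a minimal sketch (the helper name `requireApiKey` is illustrative, not part of this PR):

```typescript
// Minimal sketch: throw early if OPENAI_API_KEY is missing or blank,
// instead of letting the LLM call fail later with a vaguer error.
export function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.OPENAI_API_KEY;
  if (!key || key.trim() === "") {
    throw new Error("OPENAI_API_KEY is not set");
  }
  return key;
}
```

It could be called near the top of the function handler, e.g. `const apiKey = requireApiKey(process.env);`.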

## License

MIT
9 changes: 9 additions & 0 deletions examples/kaibanjs-ai-chat/astro.config.mjs
@@ -0,0 +1,9 @@
import { defineConfig } from "astro/config";
import netlify from "@astrojs/netlify/functions";
import react from "@astrojs/react";

export default defineConfig({
output: "server",
adapter: netlify(),
integrations: [react()],
});
102 changes: 102 additions & 0 deletions examples/kaibanjs-ai-chat/netlify/functions/chat.ts
@@ -0,0 +1,102 @@
import type { Context } from "@netlify/functions";
import { Agent, Task, Team } from "kaibanjs";

// Create a KaibanJS agent for chat
const chatAgent = new Agent({
name: "Chat Assistant",
role: "Conversational Assistant",
goal: "Provide helpful and accurate responses to user queries",
background: "AI assistant trained to engage in helpful conversations",
// llmConfig: {
// provider: "openai",
// model: "gpt-3.5-turbo",
// apiKey: { openai: process.env.OPENAI_API_KEY },
// maxRetries: 3,
// },
});

export default async (req: Request, context: Context) => {
if (req.method !== "POST") {
return new Response("Method Not Allowed", { status: 405 });
}

try {
const text = await req.text();
const { message } = JSON.parse(text || "{}");

if (!message) {
return new Response("Message is required", { status: 400 });
}

const chatTask = new Task({
title: "Chat Response",
description: `Respond to the user's message: ${message}`,
expectedOutput: "A helpful and accurate response to the user's message",
agent: chatAgent,
});

const chatTeam = new Team({
name: "Chat Team",
agents: [chatAgent],
tasks: [chatTask],
inputs: { message },
env: { OPENAI_API_KEY: process.env.OPENAI_API_KEY },
});

const output = await chatTeam.start({ message });

if (output.status === "FINISHED") {
console.log("\nGenerated Response:");
console.log(output.result);

const { costDetails, llmUsageStats, duration } = output.stats;
console.log("\nStats:");
console.log(`Duration: ${duration} ms`);
console.log(
`Total Token Count: ${
llmUsageStats.inputTokens + llmUsageStats.outputTokens
}`
);
console.log(`Total Cost: $${costDetails.totalCost.toFixed(4)}`);
} else if (output.status === "BLOCKED") {
console.log("Workflow is blocked, unable to complete");
}

return new Response(JSON.stringify(output.result), { status: 200 });

// Create a ReadableStream for the response
// const readableStream = new ReadableStream({
// async start(controller) {
// try {
// // Use KaibanJS agent to process the message with streaming
// const stream = await chatTeam.start({message});

// for await (const chunk of stream) {
// const text = chunk.content || "";
// if (text) {
// controller.enqueue(new TextEncoder().encode(text));
// }
// }
// controller.close();
// } catch (error) {
// console.error("Stream Error:", error);
// controller.error(error);
// }
// },
// });

// Return a Response object with the stream
// return new Response(readableStream, {
// headers: {
// "Content-Type": "text/event-stream",
// "Cache-Control": "no-cache",
// Connection: "keep-alive",
// },
// });
} catch (error) {
console.error("Error:", error);
return new Response(JSON.stringify({ error: "Internal Server Error" }), {
status: 500,
});
}
};
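The handler above parses the raw body with `JSON.parse` and rejects empty messages. That validation could be isolated into a small, testable helper; a minimal sketch (the function name `parseChatMessage` is illustrative, not part of the PR):

```typescript
// Minimal sketch: extract and validate the chat message from a raw
// request body, mirroring the JSON.parse / empty-check in the handler.
export function parseChatMessage(rawBody: string): string | null {
  let parsed: unknown;
  try {
    parsed = JSON.parse(rawBody || "{}");
  } catch {
    return null; // malformed JSON
  }
  const message = (parsed as { message?: unknown })?.message;
  return typeof message === "string" && message.length > 0 ? message : null;
}
```

The handler would then respond with `400` whenever the helper returns `null`, covering both malformed JSON and a missing or empty `message` field in one place.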
31 changes: 31 additions & 0 deletions examples/kaibanjs-ai-chat/package.json
@@ -0,0 +1,31 @@
{
"name": "kaibanjs-ai-chat",
"version": "1.0.0",
"type": "module",
"scripts": {
"dev": "astro dev",
"start": "astro dev",
"build": "astro build",
"preview": "astro preview",
"astro": "astro"
},
"dependencies": {
"@astrojs/netlify": "6.1.0",
"@astrojs/react": "4.2.0",
"@netlify/functions": "^2.4.1",
"@types/node": "^20.11.16",
"@types/react": "^18.2.48",
"@types/react-dom": "^18.2.18",
"astro": "5.2.5",
"kaibanjs": "^0.14.1",
"openai": "^4.26.0",
"react": "^18.2.0",
"react-dom": "^18.2.0"
},
"devDependencies": {
"typescript": "^5.3.3"
},
"resolutions": {
"string-width": "4.x"
}
}
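Note that the `resolutions` field above is honored by Yarn; plain npm ignores it. With npm (v8.3 or later), the equivalent pin would be an `overrides` field (a config fragment for illustration, not part of this PR):

```json
{
  "overrides": {
    "string-width": "4.x"
  }
}
```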
158 changes: 158 additions & 0 deletions examples/kaibanjs-ai-chat/src/components/Chat.tsx
@@ -0,0 +1,158 @@
import { useState, useRef, useEffect } from "react";

interface Message {
role: "user" | "assistant";
content: string;
}

export default function Chat() {
const [messages, setMessages] = useState<Message[]>([]);
const [input, setInput] = useState("");
const [isLoading, setIsLoading] = useState(false);
const messagesEndRef = useRef<HTMLDivElement>(null);

const scrollToBottom = () => {
messagesEndRef.current?.scrollIntoView({ behavior: "smooth" });
};

useEffect(() => {
scrollToBottom();
}, [messages]);

const handleSubmit = async (e: React.FormEvent) => {
e.preventDefault();
if (!input.trim() || isLoading) return;

const userMessage = { role: "user" as const, content: input.trim() };
setMessages((prev) => [...prev, userMessage]);
setInput("");
setIsLoading(true);

try {
const response = await fetch("/.netlify/functions/chat", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ message: userMessage.content }),
});

if (!response.ok) throw new Error("Network response was not ok");

const reader = response.body?.getReader();
if (!reader) throw new Error("No reader available");

let assistantMessage = "";
setMessages((prev) => [...prev, { role: "assistant", content: "" }]);

// Reuse one decoder in stream mode so multi-byte characters
// split across chunk boundaries decode correctly
const decoder = new TextDecoder();
while (true) {
const { done, value } = await reader.read();
if (done) break;

const text = decoder.decode(value, { stream: true });
assistantMessage += text;
setMessages((prev) => [
...prev.slice(0, -1),
{ role: "assistant", content: assistantMessage },
]);
}
} catch (error) {
console.error("Error:", error);
setMessages((prev) => [
...prev,
{
role: "assistant",
content: "Sorry, there was an error processing your request.",
},
]);
} finally {
setIsLoading(false);
}
};

return (
<div className="chat-container" style={styles.container}>
<div className="messages" style={styles.messages}>
{messages.map((message, index) => (
<div
key={index}
style={{
...styles.message,
...(message.role === "user"
? styles.userMessage
: styles.assistantMessage),
}}>
<strong>{message.role === "user" ? "You: " : "AI: "}</strong>
<span>{message.content}</span>
</div>
))}
<div ref={messagesEndRef} />
</div>
<form onSubmit={handleSubmit} style={styles.form}>
<input
type="text"
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Type your message..."
style={styles.input}
disabled={isLoading}
/>
<button type="submit" disabled={isLoading} style={styles.button}>
{isLoading ? "Sending..." : "Send"}
</button>
</form>
</div>
);
}

const styles = {
container: {
display: "flex",
flexDirection: "column" as const,
height: "600px",
border: "1px solid #ddd",
borderRadius: "8px",
background: "#fff",
},
messages: {
flex: 1,
overflowY: "auto" as const,
padding: "1rem",
},
message: {
marginBottom: "1rem",
padding: "0.8rem",
borderRadius: "8px",
maxWidth: "80%",
},
userMessage: {
marginLeft: "auto",
background: "#007bff",
color: "#fff",
},
assistantMessage: {
marginRight: "auto",
background: "#f1f1f1",
color: "#333",
},
form: {
display: "flex",
padding: "1rem",
borderTop: "1px solid #ddd",
gap: "0.5rem",
},
input: {
flex: 1,
padding: "0.5rem",
border: "1px solid #ddd",
borderRadius: "4px",
fontSize: "1rem",
},
button: {
padding: "0.5rem 1rem",
background: "#007bff",
color: "#fff",
border: "none",
borderRadius: "4px",
cursor: "pointer",
fontSize: "1rem",
},
};
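The read loop in `handleSubmit` decodes response chunks as they arrive. One subtlety: a multi-byte UTF-8 character can be split across two chunks, so a single `TextDecoder` used in stream mode is safer than decoding each chunk independently. A standalone sketch of the pattern (the function name `decodeChunks` is illustrative):

```typescript
// Minimal sketch: incrementally decode byte chunks the way the chat
// read loop consumes a response body. { stream: true } buffers any
// partial multi-byte sequence until the next chunk arrives.
export function decodeChunks(chunks: Uint8Array[]): string {
  const decoder = new TextDecoder();
  let out = "";
  for (const chunk of chunks) {
    out += decoder.decode(chunk, { stream: true });
  }
  return out + decoder.decode(); // flush any buffered bytes
}
```

Decoding each chunk with a fresh `new TextDecoder().decode(value)` would replace a split character with a replacement character (`�`) instead.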