A modern, full-stack AI chatbot built with Rust and Leptos, featuring multi-provider AI support, persistent memory, file uploads, and a beautiful floating chat interface.
- Ollama (local models) - Run AI models locally for privacy
- OpenAI - GPT-4, GPT-3.5-turbo
- Anthropic - Claude 3 models
- Google Gemini - Gemini Pro and Pro Vision
- OpenRouter - Access to 100+ models from various providers
- Floating chatbox like Perplexity
- T3 Chat-style AI suggested questions
- Three-dot thinking animation with reasoning dropdown
- Markdown rendering with syntax highlighting
- Code blocks with copy buttons
- LaTeX support for mathematical expressions
- Cross-chat memory - AI remembers your preferences across all conversations
- User context - Stores name, preferences, and important information
- Smart suggestions - AI generates contextual follow-up questions
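The memory features above amount to storing structured user context and prepending it to every conversation. The following is a minimal stdlib-only sketch of that idea; the actual struct in `models.rs` and the persistence layer may look different, and the field names here are assumptions:

```rust
use std::collections::HashMap;

/// Illustrative stand-in for the user context persisted in SQLite.
/// The real struct in `models.rs` may differ.
#[derive(Default)]
struct UserContext {
    name: Option<String>,
    preferences: HashMap<String, String>,
}

impl UserContext {
    /// Render the stored context as a system-prompt prefix so every
    /// new chat starts with what the AI already knows about the user.
    fn to_system_prompt(&self) -> String {
        let mut prompt = String::from("Known user context:\n");
        if let Some(name) = &self.name {
            prompt.push_str(&format!("- Name: {name}\n"));
        }
        for (key, value) in &self.preferences {
            prompt.push_str(&format!("- {key}: {value}\n"));
        }
        prompt
    }
}

fn main() {
    let mut ctx = UserContext::default();
    ctx.name = Some("Alice".to_string());
    ctx.preferences
        .insert("language".to_string(), "Rust".to_string());
    println!("{}", ctx.to_system_prompt());
}
```

Injecting the context as a system message keeps memory provider-agnostic: it works identically whether the backend is Ollama or a cloud API.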
- Image uploads - AI can see and analyze images
- PDF processing - Extract and understand PDF content
- Voice input - Speech-to-text functionality
- Multiple file types - Support for various document formats
- Responsive design - Works on desktop and mobile
- Dark/light mode ready
- Smooth animations and transitions
- Modern Tailwind CSS styling
- Frontend & Backend: Leptos (Rust + WASM + SSR)
- AI Integration: rust-genai (multi-provider LLM client)
- Database: SQLite with SQLx
- Styling: Tailwind CSS
- File Processing: `image`, `lopdf`, `whisper-rs`
- Markdown: pulldown-cmark with syntax highlighting
- Rust (latest stable)
- Node.js (for development tools)
- Ollama (optional, for local models)
- Clone the repository

  ```bash
  git clone <repository-url>
  cd aibot
  ```

- Set up environment variables

  ```bash
  cp .env.example .env
  # Edit .env with your API keys
  ```

- Install dependencies

  ```bash
  cargo build
  ```

- Run the development server

  ```bash
  cargo leptos watch
  ```

- Open your browser and navigate to `http://localhost:3000`
Create a `.env` file in the project root:

```env
# Database
DATABASE_URL=sqlite:./aibot.db

# AI Provider API Keys (optional)
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
GEMINI_API_KEY=your_gemini_api_key
OPENROUTER_API_KEY=your_openrouter_api_key

# Ollama Configuration
OLLAMA_BASE_URL=http://localhost:11434

# Default Settings
DEFAULT_AI_PROVIDER=ollama
DEFAULT_MODEL=llama3.2
```
- Install Ollama

  ```bash
  # macOS/Linux
  curl -fsSL https://ollama.ai/install.sh | sh

  # Windows: download from https://ollama.ai/download
  ```

- Pull a model

  ```bash
  ollama pull llama3.2
  ```

- Start Ollama

  ```bash
  ollama serve
  ```

- Run the chatbot

  ```bash
  cargo leptos watch
  ```
- Get API keys from your preferred providers
- Add them to your `.env` file
- Select the provider in the model switcher dropdown
- Start chatting!
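Since every cloud key is optional, provider selection has to degrade gracefully. A sketch of how that lookup might work, assuming the environment variable names from the `.env` example above (the real logic lives in `ai_service.rs` and may differ):

```rust
use std::env;

/// Pick the active provider: an explicitly configured default wins,
/// otherwise fall back to local Ollama so the app runs with no API keys.
fn resolve_provider() -> String {
    env::var("DEFAULT_AI_PROVIDER").unwrap_or_else(|_| "ollama".to_string())
}

/// An API key is optional per provider; `None` simply means that
/// backend is unavailable in the model switcher.
fn api_key_for(provider: &str) -> Option<String> {
    let var = match provider {
        "openai" => "OPENAI_API_KEY",
        "anthropic" => "ANTHROPIC_API_KEY",
        "gemini" => "GEMINI_API_KEY",
        "openrouter" => "OPENROUTER_API_KEY",
        _ => return None, // ollama needs no key
    };
    env::var(var).ok()
}

fn main() {
    let provider = resolve_provider();
    println!("active provider: {provider}");
    println!("key configured: {}", api_key_for(&provider).is_some());
}
```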
```
src/
├── app.rs                    # Main application component
├── main.rs                   # Server entry point
├── lib.rs                    # Library exports
├── models.rs                 # Data structures
├── database.rs               # Database operations
├── ai_service.rs             # AI provider integration
├── api.rs                    # Server functions
└── components/               # UI components
    ├── chat_box.rs           # Main chat interface
    ├── message.rs            # Message display
    ├── model_switcher.rs     # AI provider/model selection
    ├── file_upload.rs        # File upload handling
    ├── voice_input.rs        # Voice input component
    ├── thinking_animation.rs # Loading animation
    └── suggested_questions.rs # AI suggested questions
```
To add support for a new AI provider:

- Update the `AIProvider` enum in `models.rs`
- Add provider configuration in `ai_service.rs`
- Implement client creation in `AIService::new()`
- Add the model list in `get_available_models()`
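The steps above can be sketched with a simplified version of the enum. The `Mistral` variant and its model id are hypothetical, added only to show the pattern; the real `AIProvider` in `models.rs` has its own shape:

```rust
/// Simplified stand-in for the `AIProvider` enum from `models.rs`,
/// extended with a hypothetical `Mistral` variant.
#[derive(Debug, Clone, Copy, PartialEq)]
enum AIProvider {
    Ollama,
    OpenAI,
    Anthropic,
    Gemini,
    OpenRouter,
    Mistral, // new provider: add the variant first
}

impl AIProvider {
    /// Mirrors the role of `get_available_models()`: because the match is
    /// exhaustive, the compiler flags every arm you still need to update.
    fn available_models(&self) -> Vec<&'static str> {
        match self {
            AIProvider::Ollama => vec!["llama3.2"],
            AIProvider::OpenAI => vec!["gpt-4", "gpt-3.5-turbo"],
            AIProvider::Anthropic => vec!["claude-3-opus", "claude-3-sonnet"],
            AIProvider::Gemini => vec!["gemini-pro", "gemini-pro-vision"],
            AIProvider::OpenRouter => vec!["openrouter/auto"],
            AIProvider::Mistral => vec!["mistral-small"], // hypothetical model id
        }
    }
}

fn main() {
    for model in AIProvider::Mistral.available_models() {
        println!("{model}");
    }
}
```

Keeping the provider list as an exhaustive enum means forgetting a step in `ai_service.rs` becomes a compile error rather than a runtime surprise.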
The database is automatically initialized with the required tables. To add new migrations:
- Create a new SQL file in `migrations/`
- Update the migration logic in `database.rs`
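The core of such migration logic is deciding which files still need to run. A stdlib-only sketch under the assumption that migrations use sortable numeric prefixes (the real code in `database.rs` would read `migrations/` from disk and execute each file via SQLx):

```rust
use std::collections::HashSet;

/// Return migrations that have not been applied yet, in filename order.
/// Illustrative only; filenames and the applied-set source are assumptions.
fn pending_migrations<'a>(
    mut all: Vec<&'a str>,
    applied: &HashSet<&str>,
) -> Vec<&'a str> {
    all.sort(); // zero-padded prefixes like 0001_, 0002_ sort correctly as strings
    all.into_iter()
        .filter(|name| !applied.contains(name))
        .collect()
}

fn main() {
    let applied: HashSet<&str> = ["0001_init.sql"].into_iter().collect();
    let todo = pending_migrations(
        vec!["0002_memory.sql", "0001_init.sql", "0003_files.sql"],
        &applied,
    );
    println!("{todo:?}"); // ["0002_memory.sql", "0003_files.sql"]
}
```

Recording applied filenames in a table keeps startup idempotent: rerunning the server skips everything already in the set.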
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- Leptos - Full-stack Rust web framework
- rust-genai - Multi-provider LLM client
- Ollama - Local LLM runner
- Tailwind CSS - Utility-first CSS framework
- User authentication and multi-user support
- Chat history export/import
- Advanced file processing (Excel, Word docs)
- Real-time collaboration
- Mobile app (React Native/Flutter)
- Plugin system for custom integrations
- Advanced memory management
- Voice output (text-to-speech)
- Image generation capabilities
- API for third-party integrations