diff --git a/chat_interface/README.md b/chat_interface/README.md
new file mode 100644
index 0000000..a5cc9ef
--- /dev/null
+++ b/chat_interface/README.md
@@ -0,0 +1,343 @@
+# Browser AI Chat Interface
+
+A conversational interface for the Browser AI library that provides GitHub Copilot-like chat functionality for browser automation. Available as both a web application and a Qt desktop application.
+
+## Features
+
+### šŸ¤– Conversational Browser Automation
+- Chat-based interface for controlling browser automation
+- Natural language task descriptions
+- Real-time progress updates and logging
+
+### 🌐 Web Application
+- Modern web interface built with Gradio
+- Real-time log streaming
+- Responsive design for desktop and mobile
+- Easy configuration management
+
+### šŸ–„ļø Desktop Application
+- Native Qt desktop interface
+- System tray integration
+- Customizable themes
+- Offline configuration storage
+
+### āš™ļø LLM Configuration
+- Support for multiple LLM providers:
+  - OpenAI (GPT-4, GPT-3.5)
+  - Anthropic (Claude)
+  - Ollama (local models)
+  - Google Gemini
+  - Fireworks AI
+- Easy API key management
+- Temperature and parameter controls
+- Configuration testing
+
+### šŸ“Š Real-time Monitoring
+- Live log streaming from Browser AI
+- Animated status indicators
+- Task progress tracking
+- Error handling and reporting
+
+## Installation
+
+### Prerequisites
+
+```bash
+# Install Python dependencies (the desktop app uses PySide6)
+pip install browser-ai gradio PySide6
+
+# For Playwright browser automation
+playwright install
+```
+
+### Quick Start
+
+1. **Web Interface:**
+   ```bash
+   python launch_web.py
+   ```
+   - Opens at http://localhost:7860
+   - Configure LLM settings in the web interface
+   - Start chatting to control browser automation
+
+2. 
**Desktop Interface:**
+   ```bash
+   python launch_desktop.py
+   ```
+   - Native Qt application window
+   - Configure LLM in the settings panel
+   - Chat interface on the left, logs on the right
+
+## Configuration
+
+### LLM Setup
+
+#### OpenAI
+```bash
+# Add via interface or set environment variable
+export OPENAI_API_KEY="your-api-key-here"
+```
+
+#### Anthropic Claude
+```bash
+# Add via interface or set environment variable
+export ANTHROPIC_API_KEY="your-api-key-here"
+```
+
+#### Ollama (Local)
+```bash
+# Start Ollama server
+ollama serve
+
+# Pull a model
+ollama pull llama3.2
+
+# Configure in interface:
+# - Provider: ollama
+# - Model: llama3.2
+# - Base URL: http://localhost:11434
+```
+
+### Application Settings
+
+Settings are stored in `~/.browser_ai_chat/config.json`:
+
+```json
+{
+  "llm_configs": {
+    "my_openai": {
+      "provider": "openai",
+      "model": "gpt-4o-mini",
+      "api_key": "sk-...",
+      "temperature": 0.1
+    }
+  },
+  "app_config": {
+    "theme": "light",
+    "auto_save": true,
+    "log_level": "info",
+    "max_history_items": 100
+  }
+}
+```
+
+## Usage Examples
+
+### Basic Tasks
+
+```
+User: Search for "Python web scraping" on Google
+Assistant: šŸ”„ Starting task execution...
+[12:34:56] šŸ”µ Step 1: Navigating to Google...
+[12:34:58] šŸ”µ Step 2: Searching for "Python web scraping"...
+[12:35:02] 🟢 āœ… Task Completed
+
+Search results for "Python web scraping" are now displayed on Google.
+```
+
+### E-commerce Automation
+
+```
+User: Go to Amazon and find the best rated wireless headphones under $100
+Assistant: šŸ”„ Starting task execution...
+[12:35:10] šŸ”µ Step 1: Navigating to Amazon.com...
+[12:35:12] šŸ”µ Step 2: Searching for wireless headphones...
+[12:35:15] šŸ”µ Step 3: Applying price filter under $100...
+[12:35:18] šŸ”µ Step 4: Sorting by customer ratings...
+[12:35:22] 🟢 āœ… Task Completed
+
+Found top-rated wireless headphones under $100:
+1. Sony WH-CH720N - 4.4/5 stars - $89.99
+2. JBL Tune 760NC - 4.3/5 stars - $79.95
+3. 
Anker Soundcore Q30 - 4.2/5 stars - $79.99
+```
+
+### Form Filling
+
+```
+User: Fill out the contact form on example.com with my details
+Assistant: šŸ”„ Starting task execution...
+[12:36:30] šŸ”µ Step 1: Navigating to example.com...
+[12:36:32] šŸ”µ Step 2: Locating contact form...
+[12:36:34] šŸ”µ Step 3: Filling name field...
+[12:36:36] šŸ”µ Step 4: Filling email field...
+[12:36:38] šŸ”µ Step 5: Filling message field...
+[12:36:40] šŸ”µ Step 6: Submitting form...
+[12:36:42] 🟢 āœ… Task Completed
+
+Contact form successfully submitted with your details.
+```
+
+## Architecture
+
+### Event System
+The chat interface uses an event-driven architecture to capture real-time updates:
+
+```python
+from chat_interface import LogEventListener
+
+# Create event listener
+listener = LogEventListener()
+listener.start_listening()
+
+# Subscribe to events
+listener.subscribe_to_logs(on_log_event)
+listener.subscribe_to_tasks(on_task_update)
+```
+
+### Configuration Management
+Centralized configuration system for LLMs and app settings:
+
+```python
+from chat_interface import ConfigManager
+
+config_manager = ConfigManager()
+config_manager.add_llm_config("my_llm", llm_config)
+llm_instance = config_manager.create_llm_instance(llm_config)
+```
+
+### Integration with Browser AI
+Seamless integration without modifying the core Browser AI library:
+
+```python
+# Agent creation with callbacks
+agent = Agent(
+    task=user_task,
+    llm=selected_llm,
+    register_new_step_callback=event_listener.handle_agent_step,
+    register_done_callback=event_listener.handle_agent_done
+)
+
+# Run with real-time updates
+history = await agent.run()
+```
+
+## Advanced Features
+
+### Custom Actions
+Extend functionality with custom browser actions:
+
+```python
+from browser_ai import ActionResult, Controller
+
+controller = Controller()
+
+@controller.action("Take screenshot of current page")
+async def screenshot(browser):
+    # Custom screenshot logic
+    return ActionResult(extracted_content="Screenshot saved")
+```
+
+### Task 
Templates
+Save common automation patterns:
+
+```python
+templates = {
+    "google_search": "Search for '{query}' on Google",
+    "price_check": "Check price of '{product}' on {website}",
+    "form_fill": "Fill out contact form with my details"
+}
+```
+
+### Batch Operations
+Run multiple tasks in sequence:
+
+```python
+tasks = [
+    "Open Gmail and check for new emails",
+    "Navigate to calendar and check today's appointments",
+    "Update status on LinkedIn"
+]
+
+for task in tasks:
+    await run_task(task)
+```
+
+## Troubleshooting
+
+### Common Issues
+
+1. **"No LLM configured" error**
+   - Add an LLM configuration via the interface
+   - Check API key validity
+   - Verify network connectivity
+
+2. **Task execution fails**
+   - Check browser installation: `playwright install`
+   - Verify website accessibility
+   - Review logs for specific errors
+
+3. **Web interface not loading**
+   - Check that port 7860 is available
+   - Try a different port: `python launch_web.py --server-port 8000`
+
+4. **Desktop app crashes**
+   - Install Qt dependencies: `pip install PySide6`
+   - Check display settings
+   - Run from a terminal to see error messages
+
+### Debug Mode
+
+Enable debug logging:
+
+```bash
+export BROWSER_AI_LOGGING_LEVEL=debug
+python launch_web.py
+```
+
+### Log Files
+
+Application logs are available in:
+- Web: Browser console and the interface logs panel
+- Desktop: Application logs panel and console output
+- Browser AI: Real-time streaming to both interfaces
+
+## Contributing
+
+### Development Setup
+
+```bash
+# Clone repository
+git clone https://github.com/Sathursan-S/Browser.AI.git
+cd Browser.AI
+
+# Install dependencies
+pip install -e .
+pip install gradio PySide6
+
+# Run tests
+python -m pytest chat_interface/tests/
+```
+
+### Adding New LLM Providers
+
+1. Add the provider to the `LLMProvider` enum
+2. Implement it in `ConfigManager.create_llm_instance()`
+3. Add a provider-specific models list
+4. Update the configuration UI
+
+### Extending UI Features
+
+1. 
**Web Interface**: Modify `chat_interface/web_app.py`
+2. **Desktop Interface**: Modify `chat_interface/desktop_app.py`
+3. **Shared Logic**: Update base classes in respective modules
+
+## License
+
+This project extends the Browser AI library. Please refer to the main project license.
+
+## Support
+
+- **Issues**: Create GitHub issues for bugs and feature requests
+- **Discussions**: Use GitHub Discussions for questions
+- **Documentation**: See the Browser AI main documentation
+
+## Changelog
+
+### v0.1.0
+- Initial release
+- Web and desktop chat interfaces
+- Multi-LLM provider support
+- Real-time log streaming
+- Configuration management
+- Event-driven architecture
\ No newline at end of file
diff --git a/chat_interface/__init__.py b/chat_interface/__init__.py
new file mode 100644
index 0000000..c0f6828
--- /dev/null
+++ b/chat_interface/__init__.py
@@ -0,0 +1,13 @@
+"""
+Chat Interface for Browser AI Library
+
+This module provides chat-based interfaces (web and desktop) for the Browser AI library,
+allowing users to interact with browser automation through a conversational interface.
+"""
+
+__version__ = "0.1.0"
+
+from .event_listener import LogEventListener
+from .config_manager import ConfigManager
+
+__all__ = ['LogEventListener', 'ConfigManager']
\ No newline at end of file
diff --git a/chat_interface/config_manager.py b/chat_interface/config_manager.py
new file mode 100644
index 0000000..a074f56
--- /dev/null
+++ b/chat_interface/config_manager.py
@@ -0,0 +1,276 @@
+"""
+Configuration manager for Browser AI chat interface.
+
+Handles LLM configuration, API keys, and application settings.
+""" + +import json +import os +from typing import Dict, Any, Optional, List +from pathlib import Path +from dataclasses import dataclass, asdict +from enum import Enum + + +class LLMProvider(Enum): + OPENAI = "openai" + ANTHROPIC = "anthropic" + OLLAMA = "ollama" + GOOGLE = "google" + FIREWORKS = "fireworks" + AWS = "aws" + + +@dataclass +class LLMConfig: + """Configuration for LLM providers""" + provider: LLMProvider + model: str + api_key: Optional[str] = None + base_url: Optional[str] = None + temperature: float = 0.1 + max_tokens: Optional[int] = None + timeout: int = 30 + extra_params: Optional[Dict[str, Any]] = None + + +@dataclass +class AppConfig: + """Application configuration""" + theme: str = "light" + auto_save: bool = True + log_level: str = "info" + max_history_items: int = 100 + auto_scroll: bool = True + show_timestamps: bool = True + animate_status: bool = True + + +class ConfigManager: + """ + Manages configuration for the Browser AI chat interface. + Handles LLM settings, API keys, and application preferences. 
+ """ + + def __init__(self, config_dir: Optional[str] = None): + if config_dir is None: + self.config_dir = Path.home() / ".browser_ai_chat" + else: + self.config_dir = Path(config_dir) + + self.config_dir.mkdir(parents=True, exist_ok=True) + self.config_file = self.config_dir / "config.json" + + self._llm_configs: Dict[str, LLMConfig] = {} + self._app_config = AppConfig() + self._load_config() + + def _load_config(self): + """Load configuration from file""" + if self.config_file.exists(): + try: + with open(self.config_file, 'r') as f: + data = json.load(f) + + # Load LLM configurations + llm_configs_data = data.get('llm_configs', {}) + for name, config_data in llm_configs_data.items(): + try: + config_data['provider'] = LLMProvider(config_data['provider']) + self._llm_configs[name] = LLMConfig(**config_data) + except Exception as e: + print(f"Error loading LLM config {name}: {e}") + + # Load app configuration + app_config_data = data.get('app_config', {}) + self._app_config = AppConfig(**app_config_data) + + except Exception as e: + print(f"Error loading config: {e}") + self._create_default_config() + else: + self._create_default_config() + + def _create_default_config(self): + """Create default configuration""" + # Add default Gemini config if API key is available (make it first/default) + google_key = os.getenv('GOOGLE_API_KEY') or os.getenv('GEMINI_API_KEY') + if google_key: + self._llm_configs['gemini_pro'] = LLMConfig( + provider=LLMProvider.GOOGLE, + model="gemini-2.5-flash-lite", + api_key=google_key, + temperature=0.1 + ) + + # Add default OpenAI config if API key is available + openai_key = os.getenv('OPENAI_API_KEY') + if openai_key: + self._llm_configs['openai_gpt4'] = LLMConfig( + provider=LLMProvider.OPENAI, + model="gpt-4o-mini", + api_key=openai_key, + temperature=0.1 + ) + + # Add default Anthropic config if API key is available + anthropic_key = os.getenv('ANTHROPIC_API_KEY') + if anthropic_key: + self._llm_configs['claude'] = LLMConfig( + 
provider=LLMProvider.ANTHROPIC, + model="claude-3-sonnet-20240229", + api_key=anthropic_key, + temperature=0.1 + ) + + # Add default Ollama config (no API key required) + self._llm_configs['ollama_llama'] = LLMConfig( + provider=LLMProvider.OLLAMA, + model="qwen2.5-coder:0.5b", + base_url="http://localhost:11434", + temperature=0.1 + ) + + self.save_config() + + def save_config(self): + """Save configuration to file""" + try: + # Convert LLMConfigs to serializable format + llm_configs_data = {} + for name, config in self._llm_configs.items(): + config_dict = asdict(config) + config_dict['provider'] = config.provider.value + llm_configs_data[name] = config_dict + + data = { + 'llm_configs': llm_configs_data, + 'app_config': asdict(self._app_config) + } + + with open(self.config_file, 'w') as f: + json.dump(data, f, indent=2) + + except Exception as e: + print(f"Error saving config: {e}") + + def get_llm_configs(self) -> Dict[str, LLMConfig]: + """Get all LLM configurations""" + return self._llm_configs.copy() + + def get_llm_config(self, name: str) -> Optional[LLMConfig]: + """Get specific LLM configuration""" + return self._llm_configs.get(name) + + def add_llm_config(self, name: str, config: LLMConfig): + """Add or update LLM configuration""" + self._llm_configs[name] = config + self.save_config() + + def remove_llm_config(self, name: str) -> bool: + """Remove LLM configuration""" + if name in self._llm_configs: + del self._llm_configs[name] + self.save_config() + return True + return False + + def get_app_config(self) -> AppConfig: + """Get application configuration""" + return self._app_config + + def update_app_config(self, **kwargs): + """Update application configuration""" + for key, value in kwargs.items(): + if hasattr(self._app_config, key): + setattr(self._app_config, key, value) + self.save_config() + + def create_llm_instance(self, config: LLMConfig): + """Create LLM instance from configuration""" + try: + if config.provider == LLMProvider.OPENAI: + 
from langchain_openai import ChatOpenAI
+                return ChatOpenAI(
+                    model=config.model,
+                    api_key=config.api_key,
+                    temperature=config.temperature,
+                    max_tokens=config.max_tokens,
+                    timeout=config.timeout,
+                    **(config.extra_params or {})
+                )
+
+            elif config.provider == LLMProvider.ANTHROPIC:
+                from langchain_anthropic import ChatAnthropic
+                return ChatAnthropic(
+                    model=config.model,
+                    api_key=config.api_key,
+                    temperature=config.temperature,
+                    max_tokens=config.max_tokens,
+                    timeout=config.timeout,
+                    **(config.extra_params or {})
+                )
+
+            elif config.provider == LLMProvider.OLLAMA:
+                from langchain_ollama import ChatOllama
+                return ChatOllama(
+                    model=config.model,
+                    base_url=config.base_url or "http://localhost:11434",
+                    temperature=config.temperature,
+                    **(config.extra_params or {})
+                )
+
+            elif config.provider == LLMProvider.GOOGLE:
+                from langchain_google_genai import ChatGoogleGenerativeAI
+                return ChatGoogleGenerativeAI(
+                    model=config.model,
+                    api_key=config.api_key,
+                    temperature=config.temperature,
+                    max_tokens=config.max_tokens,
+                    timeout=config.timeout,
+                    **(config.extra_params or {})
+                )
+
+            elif config.provider == LLMProvider.FIREWORKS:
+                # FIREWORKS is declared in LLMProvider and get_available_models();
+                # this branch assumes the langchain-fireworks package is installed.
+                from langchain_fireworks import ChatFireworks
+                return ChatFireworks(
+                    model=config.model,
+                    api_key=config.api_key,
+                    temperature=config.temperature,
+                    max_tokens=config.max_tokens,
+                    **(config.extra_params or {})
+                )
+
+            else:
+                raise ValueError(f"Unsupported LLM provider: {config.provider}")
+
+        except ImportError as e:
+            raise ImportError(f"Required package not installed for {config.provider}: {e}") from e
+        except Exception as e:
+            raise Exception(f"Error creating LLM instance: {e}") from e
+
+    def test_llm_config(self, config: LLMConfig) -> bool:
+        """Test LLM configuration by making a simple request"""
+        try:
+            llm = self.create_llm_instance(config)
+            response = llm.invoke("Say 'Hello' if you can receive this message.")
+            return "hello" in response.content.lower()
+        except Exception as e:
+            print(f"LLM test failed: {e}")
+            return False
+
+    def get_available_models(self, provider: LLMProvider) -> List[str]:
+        """Get list of available models for a provider"""
+        models = {
+            LLMProvider.OPENAI: [
+                "gpt-4o", "gpt-4o-mini", "gpt-4-turbo", "gpt-4",
+                "gpt-3.5-turbo", "gpt-3.5-turbo-16k"
+            ],
LLMProvider.ANTHROPIC: [ + "claude-3-5-sonnet-20241022", "claude-3-sonnet-20240229", + "claude-3-haiku-20240307", "claude-3-opus-20240229" + ], + LLMProvider.OLLAMA: [ + "llama3.2", "llama3.1", "llama3", "llama2", "codellama", + "mistral", "mixtral", "phi3", "qwen2", "gemma2" + ], + LLMProvider.GOOGLE: [ + "gemini-2.5-flash-lite" + ], + LLMProvider.FIREWORKS: [ + "accounts/fireworks/models/llama-v3p1-70b-instruct", + "accounts/fireworks/models/llama-v3p1-8b-instruct" + ] + } + + return models.get(provider, []) \ No newline at end of file diff --git a/chat_interface/desktop_app.py b/chat_interface/desktop_app.py new file mode 100644 index 0000000..75c16be --- /dev/null +++ b/chat_interface/desktop_app.py @@ -0,0 +1,747 @@ +""" +Qt desktop chat interface for Browser AI. + +Provides a GitHub Copilot-like desktop chat interface for browser automation. +""" + +import sys +import asyncio +import json +import uuid +from datetime import datetime +from typing import List, Dict, Any, Optional +import threading + +from PySide6.QtWidgets import ( + QApplication, QMainWindow, QWidget, QVBoxLayout, QHBoxLayout, + QTextEdit, QLineEdit, QPushButton, QSplitter, QTabWidget, + QComboBox, QLabel, QGroupBox, QFormLayout, QSlider, + QScrollArea, QFrame, QProgressBar, QStatusBar, QMessageBox, + QDialog, QDialogButtonBox, QSpinBox, QCheckBox +) +from PySide6.QtCore import ( + Qt, QTimer, Signal, QObject, QThread, Slot, + QPropertyAnimation, QEasingCurve +) +from PySide6.QtGui import QFont, QTextCursor, QPalette, QColor + +from browser_ai import Agent, Browser + +from .event_listener import LogEventListener, TaskStatus, LogEvent, TaskUpdate +from .config_manager import ConfigManager, LLMConfig, LLMProvider + + +class ChatMessage(QWidget): + """Custom widget for chat messages""" + + def __init__(self, message: str, is_user: bool = True, parent=None): + super().__init__(parent) + self.is_user = is_user + self.setup_ui(message) + + def setup_ui(self, message: str): + layout = 
QHBoxLayout(self) + + # Create message container + message_frame = QFrame() + message_frame.setFrameStyle(QFrame.Box) + message_layout = QVBoxLayout(message_frame) + + # Style based on sender + if self.is_user: + message_frame.setStyleSheet(""" + QFrame { + background-color: #e3f2fd; + border: 1px solid #1976d2; + border-radius: 10px; + margin: 5px; + padding: 8px; + } + """) + layout.addStretch() + layout.addWidget(message_frame) + else: + message_frame.setStyleSheet(""" + QFrame { + background-color: #f5f5f5; + border: 1px solid #616161; + border-radius: 10px; + margin: 5px; + padding: 8px; + } + """) + layout.addWidget(message_frame) + layout.addStretch() + + # Add message text + message_text = QTextEdit() + message_text.setPlainText(message) + message_text.setReadOnly(True) + message_text.setMaximumHeight(100) + message_text.setVerticalScrollBarPolicy(Qt.ScrollBarAsNeeded) + message_text.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff) + + message_layout.addWidget(message_text) + + +class LogWidget(QWidget): + """Custom widget for displaying logs""" + + def __init__(self, parent=None): + super().__init__(parent) + self.setup_ui() + self.logs = [] + + def setup_ui(self): + layout = QVBoxLayout(self) + + # Logs display + self.log_text = QTextEdit() + self.log_text.setReadOnly(True) + self.log_text.setFont(QFont("Consolas", 9)) + self.log_text.setStyleSheet(""" + QTextEdit { + background-color: #1e1e1e; + color: #ffffff; + border: 1px solid #333333; + } + """) + + layout.addWidget(QLabel("Real-time Logs:")) + layout.addWidget(self.log_text) + + def add_log(self, message: str): + """Add new log message""" + self.logs.append(message) + if len(self.logs) > 100: # Keep only last 100 logs + self.logs.pop(0) + + # Update display + self.log_text.setPlainText("\n".join(self.logs)) + + # Scroll to bottom + cursor = self.log_text.textCursor() + cursor.movePosition(QTextCursor.End) + self.log_text.setTextCursor(cursor) + + +class StatusWidget(QWidget): + """Widget for 
displaying task status""" + + def __init__(self, parent=None): + super().__init__(parent) + self.setup_ui() + + def setup_ui(self): + layout = QHBoxLayout(self) + + self.status_label = QLabel("Status: Idle") + self.status_label.setStyleSheet(""" + QLabel { + padding: 8px; + background-color: #f0f0f0; + border: 1px solid #ccc; + border-radius: 5px; + font-weight: bold; + } + """) + + self.progress_bar = QProgressBar() + self.progress_bar.setVisible(False) + + layout.addWidget(self.status_label) + layout.addWidget(self.progress_bar) + layout.addStretch() + + def update_status(self, status: str, show_progress: bool = False): + """Update status display""" + self.status_label.setText(f"Status: {status}") + self.progress_bar.setVisible(show_progress) + + # Color coding + if "Running" in status: + self.status_label.setStyleSheet(""" + QLabel { + padding: 8px; + background-color: #e3f2fd; + border: 1px solid #1976d2; + border-radius: 5px; + font-weight: bold; + color: #1976d2; + } + """) + elif "Completed" in status: + self.status_label.setStyleSheet(""" + QLabel { + padding: 8px; + background-color: #e8f5e8; + border: 1px solid #4caf50; + border-radius: 5px; + font-weight: bold; + color: #4caf50; + } + """) + elif "Failed" in status: + self.status_label.setStyleSheet(""" + QLabel { + padding: 8px; + background-color: #ffebee; + border: 1px solid #f44336; + border-radius: 5px; + font-weight: bold; + color: #f44336; + } + """) + else: + self.status_label.setStyleSheet(""" + QLabel { + padding: 8px; + background-color: #f0f0f0; + border: 1px solid #ccc; + border-radius: 5px; + font-weight: bold; + } + """) + + +class LLMConfigDialog(QDialog): + """Dialog for adding/editing LLM configurations""" + + def __init__(self, config_manager: ConfigManager, parent=None): + super().__init__(parent) + self.config_manager = config_manager + self.setup_ui() + + def setup_ui(self): + self.setWindowTitle("Add LLM Configuration") + self.setFixedSize(500, 400) + + layout = QVBoxLayout(self) + 
+ # Form layout + form_layout = QFormLayout() + + self.name_edit = QLineEdit() + self.name_edit.setPlaceholderText("e.g., 'my_openai'") + form_layout.addRow("Configuration Name:", self.name_edit) + + self.provider_combo = QComboBox() + self.provider_combo.addItems(["google", "openai", "anthropic", "ollama"]) + self.provider_combo.currentTextChanged.connect(self.on_provider_changed) + form_layout.addRow("Provider:", self.provider_combo) + + self.model_edit = QLineEdit() + self.model_edit.setPlaceholderText("e.g., 'gemini-2.5-flash-lite'") + form_layout.addRow("Model:", self.model_edit) + + # Set Google as default and trigger placeholder update + self.provider_combo.setCurrentText("google") + self.on_provider_changed("google") + + self.api_key_edit = QLineEdit() + self.api_key_edit.setEchoMode(QLineEdit.Password) + self.api_key_edit.setPlaceholderText("Enter API key (leave empty for Ollama)") + form_layout.addRow("API Key:", self.api_key_edit) + + self.base_url_edit = QLineEdit() + self.base_url_edit.setPlaceholderText("http://localhost:11434 (for Ollama)") + form_layout.addRow("Base URL:", self.base_url_edit) + + self.temperature_slider = QSlider(Qt.Horizontal) + self.temperature_slider.setRange(0, 20) + self.temperature_slider.setValue(1) + self.temperature_label = QLabel("0.1") + self.temperature_slider.valueChanged.connect( + lambda v: self.temperature_label.setText(str(v / 10.0)) + ) + + temp_layout = QHBoxLayout() + temp_layout.addWidget(self.temperature_slider) + temp_layout.addWidget(self.temperature_label) + form_layout.addRow("Temperature:", temp_layout) + + layout.addLayout(form_layout) + + # Buttons + button_box = QDialogButtonBox( + QDialogButtonBox.Ok | QDialogButtonBox.Cancel + ) + button_box.accepted.connect(self.accept) + button_box.rejected.connect(self.reject) + layout.addWidget(button_box) + + def get_config(self) -> Optional[LLMConfig]: + """Get LLM configuration from form""" + if not self.name_edit.text().strip(): + return None + + try: + 
provider = LLMProvider(self.provider_combo.currentText().lower()) + except ValueError: + return None + + return LLMConfig( + provider=provider, + model=self.model_edit.text().strip(), + api_key=self.api_key_edit.text().strip() or None, + base_url=self.base_url_edit.text().strip() or None, + temperature=self.temperature_slider.value() / 10.0 + ) + + def get_name(self) -> str: + """Get configuration name""" + return self.name_edit.text().strip() + + def on_provider_changed(self, provider_text: str): + """Update model placeholder when provider changes""" + placeholders = { + "google": "e.g., 'gemini-2.5-flash-lite'", + "openai": "e.g., 'gpt-4o-mini'", + "anthropic": "e.g., 'claude-3-sonnet-20240229'", + "ollama": "e.g., 'qwen2.5-coder:0.5b'" + } + self.model_edit.setPlaceholderText(placeholders.get(provider_text.lower(), "Enter model name")) + + +class DesktopChatInterface(QMainWindow): + """Qt desktop chat interface for Browser AI""" + + # Signal for thread-safe UI updates + chat_message_signal = Signal(str, bool) # message, is_user + log_message_signal = Signal(str) # message + + def __init__(self): + super().__init__() + self.config_manager = ConfigManager() + self.event_listener = LogEventListener() + self.current_agent: Optional[Agent] = None + self.current_browser: Optional[Browser] = None + self.current_task_id: Optional[str] = None + self.running_task = False + + # Connect signal to slot + self.chat_message_signal.connect(self.add_chat_message) + self.log_message_signal.connect(self.add_log_safe) + + # Start event listener + self.event_listener.start_listening() + self.event_listener.subscribe_to_logs(self.on_log_event) + self.event_listener.subscribe_to_tasks(self.on_task_update) + + self.setup_ui() + self.setup_timers() + + def setup_ui(self): + """Setup the user interface""" + self.setWindowTitle("Browser AI Chat Interface") + self.setGeometry(100, 100, 1200, 800) + + # Central widget + central_widget = QWidget() + self.setCentralWidget(central_widget) + + 
# Main layout + main_layout = QHBoxLayout(central_widget) + + # Create splitter + splitter = QSplitter(Qt.Horizontal) + main_layout.addWidget(splitter) + + # Left panel - Chat + chat_panel = self.create_chat_panel() + splitter.addWidget(chat_panel) + + # Right panel - Configuration and logs + right_panel = self.create_right_panel() + splitter.addWidget(right_panel) + + # Set splitter sizes + splitter.setSizes([700, 500]) + + # Status bar + self.statusBar().showMessage("Ready") + + def create_chat_panel(self) -> QWidget: + """Create chat panel""" + panel = QWidget() + layout = QVBoxLayout(panel) + + # Title + title_label = QLabel("šŸ¤– Browser AI Chat") + title_label.setFont(QFont("Arial", 16, QFont.Bold)) + title_label.setAlignment(Qt.AlignCenter) + layout.addWidget(title_label) + + # Chat display + self.chat_scroll = QScrollArea() + self.chat_scroll.setWidgetResizable(True) + self.chat_scroll.setVerticalScrollBarPolicy(Qt.ScrollBarAsNeeded) + self.chat_scroll.setHorizontalScrollBarPolicy(Qt.ScrollBarAlwaysOff) + + self.chat_container = QWidget() + self.chat_layout = QVBoxLayout(self.chat_container) + self.chat_layout.addStretch() + + self.chat_scroll.setWidget(self.chat_container) + layout.addWidget(self.chat_scroll) + + # Input area + input_layout = QHBoxLayout() + + self.message_input = QLineEdit() + self.message_input.setPlaceholderText( + "Describe what you want to do (e.g., 'Search for Python tutorials on Google')" + ) + self.message_input.returnPressed.connect(self.send_message) + + self.send_button = QPushButton("Send") + self.send_button.clicked.connect(self.send_message) + + self.stop_button = QPushButton("Stop") + self.stop_button.clicked.connect(self.stop_task) + self.stop_button.setEnabled(False) # Initially disabled + self.stop_button.setStyleSheet(""" + QPushButton { + background-color: #dc3545; + color: white; + border: none; + padding: 8px 16px; + border-radius: 4px; + font-weight: bold; + } + QPushButton:hover { + background-color: #c82333; + } + 
QPushButton:disabled { + background-color: #6c757d; + color: #adb5bd; + } + """) + + input_layout.addWidget(self.message_input) + input_layout.addWidget(self.send_button) + input_layout.addWidget(self.stop_button) + + layout.addLayout(input_layout) + + return panel + + def create_right_panel(self) -> QWidget: + """Create right panel with configuration and logs""" + panel = QWidget() + layout = QVBoxLayout(panel) + + # Configuration section + config_group = QGroupBox("āš™ļø Configuration") + config_layout = QVBoxLayout(config_group) + + # LLM selection + llm_layout = QHBoxLayout() + llm_layout.addWidget(QLabel("LLM:")) + + self.llm_combo = QComboBox() + self.update_llm_combo() + llm_layout.addWidget(self.llm_combo) + + refresh_btn = QPushButton("šŸ”„") + refresh_btn.setFixedSize(30, 30) + refresh_btn.clicked.connect(self.update_llm_combo) + llm_layout.addWidget(refresh_btn) + + config_layout.addLayout(llm_layout) + + # Add LLM config button + add_llm_btn = QPushButton("Add LLM Configuration") + add_llm_btn.clicked.connect(self.show_llm_config_dialog) + config_layout.addWidget(add_llm_btn) + + # Status widget + self.status_widget = StatusWidget() + config_layout.addWidget(self.status_widget) + + layout.addWidget(config_group) + + # Logs section + self.log_widget = LogWidget() + layout.addWidget(self.log_widget) + + return panel + + def setup_timers(self): + """Setup update timers""" + # Timer for updating logs and status + self.update_timer = QTimer() + self.update_timer.timeout.connect(self.update_ui) + self.update_timer.start(1000) # Update every second + + @Slot(LogEvent) + def on_log_event(self, event: LogEvent): + """Handle log events""" + timestamp = event.timestamp.strftime("%H:%M:%S") + status_icon = { + TaskStatus.IDLE: "⚪", + TaskStatus.RUNNING: "šŸ”µ", + TaskStatus.PAUSED: "🟔", + TaskStatus.COMPLETED: "🟢", + TaskStatus.FAILED: "šŸ”“" + }.get(event.task_status, "⚪") + + log_message = f"[{timestamp}] {status_icon} {event.message}" + 
self.log_message_signal.emit(log_message)
+
+    @Slot(TaskUpdate)
+    def on_task_update(self, update: TaskUpdate):
+        """Handle task updates (may be called from the worker thread)"""
+        if update.status == TaskStatus.COMPLETED:
+            self.running_task = False
+            if update.result:
+                # Emit through the signal so the chat update is marshaled to the GUI thread
+                self.chat_message_signal.emit(f"āœ… Task Completed\n\n{update.result}", False)
+            self.status_widget.update_status("Completed")
+            # Reset button states
+            self.stop_button.setEnabled(False)
+            self.send_button.setEnabled(True)
+            self.message_input.setEnabled(True)
+
+        elif update.status == TaskStatus.FAILED:
+            self.running_task = False
+            error_msg = update.error or "Unknown error occurred"
+            self.chat_message_signal.emit(f"āŒ Task Failed\n\n{error_msg}", False)
+            self.status_widget.update_status("Failed")
+            # Reset button states
+            self.stop_button.setEnabled(False)
+            self.send_button.setEnabled(True)
+            self.message_input.setEnabled(True)
+
+        elif update.status == TaskStatus.RUNNING:
+            self.status_widget.update_status(f"Running (Step {update.step_number})", True)
+
+    def update_ui(self):
+        """Update UI elements"""
+        if self.running_task:
+            if not self.status_widget.progress_bar.isVisible():
+                self.status_widget.update_status("Running", True)
+        else:
+            if self.status_widget.progress_bar.isVisible():
+                self.status_widget.update_status("Idle")
+
+    def update_llm_combo(self):
+        """Update LLM configuration combo box"""
+        configs = list(self.config_manager.get_llm_configs().keys())
+        self.llm_combo.clear()
+
+        if configs:
+            self.llm_combo.addItems(configs)
+        else:
+            self.llm_combo.addItem("No LLM configured")
+
+    def show_llm_config_dialog(self):
+        """Show LLM configuration dialog"""
+        dialog = LLMConfigDialog(self.config_manager, self)
+
+        if dialog.exec() == QDialog.Accepted:
+            config = dialog.get_config()
+            name = dialog.get_name()
+
+            if config and name:
+                # Test configuration
+                if self.config_manager.test_llm_config(config):
+                    self.config_manager.add_llm_config(name, config)
+                    self.update_llm_combo()
+                    QMessageBox.information(
+                        self,
"Success", + f"LLM configuration '{name}' added successfully!" + ) + else: + QMessageBox.warning( + self, "Error", + "Failed to connect to LLM. Please check your configuration." + ) + + def add_log_safe(self, message: str): + """Thread-safe method to add log message""" + self.log_widget.add_log(message) + + def add_chat_message_safe(self, message: str, is_user: bool = True): + """Thread-safe method to add chat message""" + self.chat_message_signal.emit(message, is_user) + + def add_chat_message(self, message: str, is_user: bool = True): + """Add message to chat display""" + # Remove stretch to add message + self.chat_layout.takeAt(self.chat_layout.count() - 1) + + # Add message widget + message_widget = ChatMessage(message, is_user) + self.chat_layout.addWidget(message_widget) + + # Add stretch back + self.chat_layout.addStretch() + + # Scroll to bottom + QTimer.singleShot(100, self.scroll_to_bottom) + + def scroll_to_bottom(self): + """Scroll chat to bottom""" + scrollbar = self.chat_scroll.verticalScrollBar() + scrollbar.setValue(scrollbar.maximum()) + + def send_message(self): + """Send chat message""" + message = self.message_input.text().strip() + if not message: + return + + if self.running_task: + QMessageBox.warning( + self, "Task Running", + "A task is already running. Please wait for it to complete." + ) + return + + llm_config_name = self.llm_combo.currentText() + if llm_config_name == "No LLM configured": + QMessageBox.warning( + self, "No LLM", + "Please configure an LLM first." 
+ ) + return + + # Add user message to chat + self.add_chat_message(message, is_user=True) + self.message_input.clear() + + # Start task execution + self.run_task_async(message, llm_config_name) + + # Enable stop button and disable send button and input + self.stop_button.setEnabled(True) + self.send_button.setEnabled(False) + self.message_input.setEnabled(False) + + def stop_task(self): + """Stop the currently running task""" + if not self.running_task or not self.current_agent: + return + + try: + # Stop the agent + self.current_agent.stop() + + # Update UI + self.chat_message_signal.emit("ā¹ļø Task stopped by user", False) + + # Reset state + self.running_task = False + self.current_task_id = None + + # Update button states + self.stop_button.setEnabled(False) + self.send_button.setEnabled(True) + self.message_input.setEnabled(True) + + except Exception as e: + self.chat_message_signal.emit(f"āŒ Error stopping task: {str(e)}", False) + + def run_task_async(self, task: str, llm_config_name: str): + """Run task asynchronously""" + self.running_task = True + self.current_task_id = str(uuid.uuid4()) + + def run_task(): + try: + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + result = loop.run_until_complete( + self.run_browser_task(task, llm_config_name) + ) + # Task completed, result will be handled by task update callback + except Exception as e: + # Use thread-safe method to update UI + self.chat_message_signal.emit(f"āŒ Error: {str(e)}", False) + self.running_task = False + # Reset button states on error + self.stop_button.setEnabled(False) + self.send_button.setEnabled(True) + self.message_input.setEnabled(True) + + # Start task in background thread + thread = threading.Thread(target=run_task, daemon=True) + thread.start() + + # Add placeholder message using thread-safe method + self.chat_message_signal.emit("šŸ”„ Starting task execution...", False) + + async def run_browser_task(self, task: str, llm_config_name: str) -> str: + """Run 
browser automation task""" + try: + # Set task status + self.event_listener.set_task_status( + self.current_task_id, + TaskStatus.RUNNING, + 0 + ) + + # Get LLM config + llm_config = self.config_manager.get_llm_config(llm_config_name) + if not llm_config: + raise ValueError(f"LLM configuration '{llm_config_name}' not found") + + # Create LLM instance + llm = self.config_manager.create_llm_instance(llm_config) + + # Create browser if not exists + if not self.current_browser: + self.current_browser = Browser() + + # Create agent + self.current_agent = Agent( + task=task, + llm=llm, + browser=self.current_browser, + register_new_step_callback=self.event_listener.handle_agent_step, + register_done_callback=self.event_listener.handle_agent_done, + generate_gif=False + ) + + # Run task + history = await self.current_agent.run(max_steps=20) + + # Get result + if history.history and len(history.history) > 0: + last_item = history.history[-1] + if last_item.result and len(last_item.result) > 0: + final_result = last_item.result[-1] + if final_result.is_done: + return f"āœ… Task completed successfully!\n\n{final_result.extracted_content}" + elif final_result.error: + return f"āŒ Task failed: {final_result.error}" + + return "āœ… Task execution completed" + + except Exception as e: + self.event_listener.set_task_status( + self.current_task_id or "unknown", + TaskStatus.FAILED, + 0 + ) + raise e + finally: + self.running_task = False + + +def main(): + """Main entry point for desktop app""" + app = QApplication(sys.argv) + + # Set application style + app.setStyle("Fusion") + + # Create and show main window + window = DesktopChatInterface() + window.show() + + sys.exit(app.exec()) + + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/chat_interface/event_listener.py b/chat_interface/event_listener.py new file mode 100644 index 0000000..19c3c6d --- /dev/null +++ b/chat_interface/event_listener.py @@ -0,0 +1,312 @@ +""" +Event listener adapter for 
Browser AI logging system. + +This module provides real-time log streaming capabilities by hooking into +the existing Browser AI logging infrastructure without modifying core library code. +""" + +import asyncio +import logging +import queue +import threading +from typing import Dict, List, Callable, Any, Optional +from datetime import datetime +from dataclasses import dataclass +from enum import Enum + +from browser_ai.agent.views import ActionResult, AgentHistory +from browser_ai.browser.views import BrowserState + + +class LogLevel(Enum): + DEBUG = "debug" + INFO = "info" + WARNING = "warning" + ERROR = "error" + RESULT = "result" + + +class TaskStatus(Enum): + IDLE = "idle" + RUNNING = "running" + PAUSED = "paused" + COMPLETED = "completed" + FAILED = "failed" + + +@dataclass +class LogEvent: + """Represents a log event with metadata""" + timestamp: datetime + level: LogLevel + message: str + source: str + step_number: Optional[int] = None + task_status: TaskStatus = TaskStatus.IDLE + metadata: Optional[Dict[str, Any]] = None + + +@dataclass +class TaskUpdate: + """Represents a task progress update""" + task_id: str + status: TaskStatus + step_number: int + total_steps: Optional[int] + current_action: Optional[str] + result: Optional[str] + error: Optional[str] + timestamp: datetime + + +class LogEventListener: + """ + Event listener that hooks into Browser AI logging system to capture + real-time events for streaming to chat interfaces. 
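
    The listener's internal buffer is bounded: when the queue is full, the
    oldest queued event is dropped to make room for the new one. That
    drop-oldest pattern, in isolation (stdlib-only sketch; `put_with_eviction`
    is an illustrative helper name, not part of this module):

```python
import queue

def put_with_eviction(q: queue.Queue, item) -> None:
    """Put item, evicting the oldest entry if the queue is full."""
    if not q.full():
        q.put_nowait(item)
    else:
        try:
            q.get_nowait()  # drop the oldest item
            q.put_nowait(item)
        except queue.Empty:
            pass

buf = queue.Queue(maxsize=3)
for i in range(5):
    put_with_eviction(buf, i)

drained = []
while not buf.empty():
    drained.append(buf.get_nowait())
print(drained)  # the three newest items: [2, 3, 4]
```

    `collections.deque(maxlen=...)` would give the same eviction behavior for
    free; `queue.Queue` fits here because the buffer is written by a logging
    handler on one thread and read by UI code on another.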
+ """ + + def __init__(self, max_events: int = 1000): + self.max_events = max_events + self._events: queue.Queue = queue.Queue(maxsize=max_events) + self._subscribers: List[Callable[[LogEvent], None]] = [] + self._task_subscribers: List[Callable[[TaskUpdate], None]] = [] + self._logger_handler: Optional[logging.Handler] = None + self._current_task_id: Optional[str] = None + self._current_step: int = 0 + self._task_status: TaskStatus = TaskStatus.IDLE + self._lock = threading.Lock() + + def start_listening(self): + """Start listening to Browser AI logs""" + # Create custom handler for browser_ai logger + self._logger_handler = LogCaptureHandler(self) + + # Get the browser_ai logger and add our handler + browser_ai_logger = logging.getLogger('browser_ai') + browser_ai_logger.addHandler(self._logger_handler) + browser_ai_logger.setLevel(logging.DEBUG) + + def stop_listening(self): + """Stop listening to logs""" + if self._logger_handler: + browser_ai_logger = logging.getLogger('browser_ai') + browser_ai_logger.removeHandler(self._logger_handler) + self._logger_handler = None + + def subscribe_to_logs(self, callback: Callable[[LogEvent], None]): + """Subscribe to log events""" + with self._lock: + self._subscribers.append(callback) + + def subscribe_to_tasks(self, callback: Callable[[TaskUpdate], None]): + """Subscribe to task progress updates""" + with self._lock: + self._task_subscribers.append(callback) + + def unsubscribe_from_logs(self, callback: Callable[[LogEvent], None]): + """Unsubscribe from log events""" + with self._lock: + if callback in self._subscribers: + self._subscribers.remove(callback) + + def unsubscribe_from_tasks(self, callback: Callable[[TaskUpdate], None]): + """Unsubscribe from task updates""" + with self._lock: + if callback in self._task_subscribers: + self._task_subscribers.remove(callback) + + def _emit_log_event(self, event: LogEvent): + """Emit log event to all subscribers""" + try: + # Add to queue + if not self._events.full(): + 
self._events.put_nowait(event)
+                else:
+                    # Remove oldest event to make space
+                    try:
+                        self._events.get_nowait()
+                        self._events.put_nowait(event)
+                    except queue.Empty:
+                        pass
+
+            # Notify subscribers
+            with self._lock:
+                for callback in self._subscribers:
+                    try:
+                        callback(event)
+                    except Exception as e:
+                        print(f"Error in log subscriber callback: {e}")
+        except Exception as e:
+            print(f"Error emitting log event: {e}")
+
+    def _emit_task_update(self, update: TaskUpdate):
+        """Emit task update to all subscribers"""
+        with self._lock:
+            for callback in self._task_subscribers:
+                try:
+                    callback(update)
+                except Exception as e:
+                    print(f"Error in task subscriber callback: {e}")
+
+    def get_recent_events(self, count: int = 50) -> List[LogEvent]:
+        """Get recent log events, most recent first"""
+        # Drain the queue so every buffered event can be inspected
+        events = []
+        while not self._events.empty():
+            try:
+                events.append(self._events.get_nowait())
+            except queue.Empty:
+                break
+
+        # Restore the buffer in its original order
+        for event in events:
+            try:
+                self._events.put_nowait(event)
+            except queue.Full:
+                break
+
+        return list(reversed(events[-count:]))  # Most recent first
+
+    def set_task_status(self, task_id: str, status: TaskStatus, step_number: int = 0):
+        """Update task status and emit update"""
+        self._current_task_id = task_id
+        self._task_status = status
+        self._current_step = step_number
+
+        update = TaskUpdate(
+            task_id=task_id,
+            status=status,
+            step_number=step_number,
+            total_steps=None,
+            current_action=None,
+            result=None,
+            error=None,
+            timestamp=datetime.now()
+        )
+        self._emit_task_update(update)
+
+    def handle_agent_step(self, state: 'BrowserState', output: Any, step: int):
+        """Handle agent step callback"""
+        self._current_step = step
+
+        # Create log event for step
+        event = LogEvent(
+            timestamp=datetime.now(),
+            level=LogLevel.INFO,
+            message=f"Step {step}: 
{getattr(output.current_state, 'next_goal', 'Processing...') if output and hasattr(output, 'current_state') else 'Processing...'}", + source="agent", + step_number=step, + task_status=TaskStatus.RUNNING, + metadata={ + 'url': state.url if state else None, + 'title': state.title if state else None, + 'action_count': len(getattr(output, 'action', [])) if output else 0 + } + ) + self._emit_log_event(event) + + # Emit task update + if self._current_task_id: + update = TaskUpdate( + task_id=self._current_task_id, + status=TaskStatus.RUNNING, + step_number=step, + total_steps=None, + current_action=getattr(output.current_state, 'next_goal', 'Processing...') if output and hasattr(output, 'current_state') else 'Processing...', + result=None, + error=None, + timestamp=datetime.now() + ) + self._emit_task_update(update) + + def handle_agent_done(self, history: List[AgentHistory]): + """Handle agent completion callback""" + if history and len(history) > 0: + last_item = history[-1] + if last_item.result and len(last_item.result) > 0: + last_result = last_item.result[-1] + if last_result.is_done: + # Task completed successfully + event = LogEvent( + timestamp=datetime.now(), + level=LogLevel.RESULT, + message=f"Task completed: {last_result.extracted_content}", + source="agent", + step_number=self._current_step, + task_status=TaskStatus.COMPLETED, + metadata={'result': last_result.extracted_content} + ) + self._emit_log_event(event) + + if self._current_task_id: + update = TaskUpdate( + task_id=self._current_task_id, + status=TaskStatus.COMPLETED, + step_number=self._current_step, + total_steps=None, + current_action=None, + result=last_result.extracted_content, + error=None, + timestamp=datetime.now() + ) + self._emit_task_update(update) + + +class LogCaptureHandler(logging.Handler): + """Custom logging handler to capture Browser AI logs""" + + def __init__(self, event_listener: LogEventListener): + super().__init__() + self.event_listener = event_listener + 
self.setFormatter(logging.Formatter('%(message)s')) + + def emit(self, record): + """Emit log record as LogEvent""" + try: + # Map logging levels to our LogLevel enum + level_mapping = { + logging.DEBUG: LogLevel.DEBUG, + logging.INFO: LogLevel.INFO, + logging.WARNING: LogLevel.WARNING, + logging.ERROR: LogLevel.ERROR, + 35: LogLevel.RESULT, # Custom RESULT level + } + + level = level_mapping.get(record.levelno, LogLevel.INFO) + + # Extract step number from message if present + step_number = None + message = self.format(record) + + # Look for step pattern in message + if "Step " in message: + try: + step_part = message.split("Step ")[1].split(":")[0].split(" ")[0] + step_number = int(step_part) + except (IndexError, ValueError): + pass + + # Create log event + event = LogEvent( + timestamp=datetime.fromtimestamp(record.created), + level=level, + message=message, + source=record.name.replace('browser_ai.', ''), + step_number=step_number, + task_status=self.event_listener._task_status, + metadata={ + 'module': record.module, + 'function': record.funcName, + 'line': record.lineno + } + ) + + self.event_listener._emit_log_event(event) + + except Exception as e: + # Don't let logging errors break the application + print(f"Error in log capture handler: {e}") \ No newline at end of file diff --git a/chat_interface/web_app.py b/chat_interface/web_app.py new file mode 100644 index 0000000..e2fb302 --- /dev/null +++ b/chat_interface/web_app.py @@ -0,0 +1,444 @@ +""" +Web-based chat interface for Browser AI using Gradio. + +Provides a GitHub Copilot-like chat interface for browser automation. 
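
`LogCaptureHandler.emit` above recovers step numbers from free-form log text by splitting on `"Step "`. The same parse can be sketched with a regex (illustrative; `extract_step_number` is a hypothetical helper, not part of the library):

```python
import re
from typing import Optional

_STEP_RE = re.compile(r"\bStep (\d+)\b")

def extract_step_number(message: str) -> Optional[int]:
    """Return the first 'Step N' number in a log message, or None."""
    match = _STEP_RE.search(message)
    return int(match.group(1)) if match else None

print(extract_step_number("[12:34:56] Step 3: Navigating to Google..."))  # 3
print(extract_step_number("Task completed"))  # None
```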
+""" + +import asyncio +import json +import uuid +from datetime import datetime +from typing import List, Dict, Any, Optional, Tuple +import threading +import time + +import gradio as gr +from browser_ai import Agent, Browser + +from .event_listener import LogEventListener, TaskStatus, LogEvent, TaskUpdate +from .config_manager import ConfigManager, LLMConfig, LLMProvider + + +class WebChatInterface: + """Web-based chat interface for Browser AI""" + + def __init__(self): + self.config_manager = ConfigManager() + self.event_listener = LogEventListener() + self.current_agent: Optional[Agent] = None + self.current_browser: Optional[Browser] = None + self.task_history: List[Dict[str, Any]] = [] + self.current_task_id: Optional[str] = None + self.running_task = False + + # Start event listener + self.event_listener.start_listening() + + # Subscribe to events for UI updates + self.chat_messages: List[Dict[str, str]] = [] # List of message dicts with role/content + self.log_buffer: List[str] = [] + + self.event_listener.subscribe_to_logs(self._on_log_event) + self.event_listener.subscribe_to_tasks(self._on_task_update) + + def _on_log_event(self, event: LogEvent): + """Handle log events from the event listener""" + timestamp = event.timestamp.strftime("%H:%M:%S") + status_icon = { + TaskStatus.IDLE: "⚪", + TaskStatus.RUNNING: "šŸ”µ", + TaskStatus.PAUSED: "🟔", + TaskStatus.COMPLETED: "🟢", + TaskStatus.FAILED: "šŸ”“" + }.get(event.task_status, "⚪") + + log_message = f"[{timestamp}] {status_icon} {event.message}" + self.log_buffer.append(log_message) + + # Keep only last 100 log messages + if len(self.log_buffer) > 100: + self.log_buffer.pop(0) + + def _on_task_update(self, update: TaskUpdate): + """Handle task updates from the event listener""" + if update.status in [TaskStatus.COMPLETED, TaskStatus.FAILED]: + self.running_task = False + if update.status == TaskStatus.COMPLETED: + if update.result: + self.chat_messages.append({"role": "assistant", "content": f"āœ… **Task 
Completed**\n\n{update.result}"}) + else: # FAILED + error_msg = update.error or "Unknown error occurred" + self.chat_messages.append({"role": "assistant", "content": f"āŒ **Task Failed**\n\n{error_msg}"}) + + def stop_task(self) -> str: + """Stop the currently running task""" + if not self.running_task or not self.current_agent: + return "āš ļø No task is currently running" + + try: + # Stop the agent + self.current_agent.stop() + + # Update state + self.running_task = False + self.current_task_id = None + + # Add stop message to chat + self.chat_messages.append({"role": "assistant", "content": "ā¹ļø **Task stopped by user**"}) + + return "āœ… Task stopped successfully" + + except Exception as e: + return f"āŒ Error stopping task: {str(e)}" + + def _get_available_llm_configs(self) -> List[str]: + """Get list of available LLM configurations""" + configs = self.config_manager.get_llm_configs() + return list(configs.keys()) if configs else ["No LLM configured"] + + def _create_agent(self, llm_config_name: str, task: str) -> Agent: + """Create Browser AI agent with selected configuration""" + llm_config = self.config_manager.get_llm_config(llm_config_name) + if not llm_config: + raise ValueError(f"LLM configuration '{llm_config_name}' not found") + + # Create LLM instance + llm = self.config_manager.create_llm_instance(llm_config) + + # Create browser if not exists + if not self.current_browser: + self.current_browser = Browser() + + # Create agent with callbacks + agent = Agent( + task=task, + llm=llm, + browser=self.current_browser, + register_new_step_callback=self.event_listener.handle_agent_step, + register_done_callback=self.event_listener.handle_agent_done, + generate_gif=False # Disable GIF generation for web interface + ) + + return agent + + async def _run_task(self, task: str, llm_config_name: str, max_steps: int = 20) -> str: + """Run browser automation task""" + try: + self.running_task = True + self.current_task_id = str(uuid.uuid4()) + + # Set task 
status + self.event_listener.set_task_status( + self.current_task_id, + TaskStatus.RUNNING, + 0 + ) + + # Create agent + self.current_agent = self._create_agent(llm_config_name, task) + + # Run task + history = await self.current_agent.run(max_steps=max_steps) + + # Get final result + if history.history and len(history.history) > 0: + last_item = history.history[-1] + if last_item.result and len(last_item.result) > 0: + final_result = last_item.result[-1] + if final_result.is_done: + return f"āœ… Task completed successfully!\n\n**Result:** {final_result.extracted_content}" + elif final_result.error: + return f"āŒ Task failed: {final_result.error}" + + return "āœ… Task execution completed" + + except Exception as e: + self.event_listener.set_task_status( + self.current_task_id or "unknown", + TaskStatus.FAILED, + 0 + ) + return f"āŒ Error executing task: {str(e)}" + finally: + self.running_task = False + + def _process_chat_message(self, message: str, llm_config: str, history: List[Dict[str, str]]) -> Tuple[List[Dict[str, str]], str, bool]: + """Process chat message and return updated history""" + if not message.strip(): + return history, "", False + + if self.running_task: + new_history = history + [ + {"role": "user", "content": message}, + {"role": "assistant", "content": "āš ļø A task is already running. 
Please wait for it to complete."} + ] + return new_history, "", False + + if llm_config == "No LLM configured": + new_history = history + [ + {"role": "user", "content": message}, + {"role": "assistant", "content": "āš ļø Please configure an LLM first."} + ] + return new_history, "", False + + # Add user message to history + new_history = history + [ + {"role": "user", "content": message}, + {"role": "assistant", "content": "šŸ”„ Starting task execution..."} + ] + + # Start task execution in background + self.current_task_id = str(uuid.uuid4()) + + # Run task asynchronously + def run_async_task(): + try: + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + result = loop.run_until_complete( + self._run_task(message, llm_config, max_steps=20) + ) + # Update the last message with result + if new_history: + new_history[-1]["content"] = result + except Exception as e: + if new_history: + new_history[-1]["content"] = f"āŒ Error: {str(e)}" + + # Start background task + threading.Thread(target=run_async_task, daemon=True).start() + + return new_history, "", True + + def _get_logs(self) -> str: + """Get current logs as string""" + if not self.log_buffer: + return "No logs yet..." + return "\n".join(self.log_buffer[-50:]) # Show last 50 logs + + def _add_llm_config(self, name: str, provider: str, model: str, api_key: str, + base_url: str, temperature: float) -> str: + """Add new LLM configuration""" + if not name.strip(): + return "Error: Configuration name is required" + + try: + provider_enum = LLMProvider(provider.lower()) + except ValueError: + return f"Error: Unsupported provider '{provider}'" + + # Create configuration + config = LLMConfig( + provider=provider_enum, + model=model, + api_key=api_key if api_key.strip() else None, + base_url=base_url if base_url.strip() else None, + temperature=temperature + ) + + # Test configuration + if not self.config_manager.test_llm_config(config): + return "Error: Failed to connect to LLM. 
Please check your configuration." + + # Save configuration + self.config_manager.add_llm_config(name, config) + return f"āœ… LLM configuration '{name}' added successfully!" + + def _get_current_status(self) -> str: + """Get current task status""" + if self.running_task: + return f"šŸ”„ **Status:** Running task '{self.current_task_id}'" + else: + return "⚪ **Status:** Idle" + + def _handle_stop_button(self, history: List[Dict[str, str]]) -> Tuple[List[Dict[str, str]], bool]: + """Handle stop button click""" + result = self.stop_task() + new_history = history + [{"role": "assistant", "content": result}] + return new_history, False + + def create_interface(self) -> gr.Interface: + """Create the Gradio interface""" + + with gr.Blocks( + title="Browser AI Chat", + theme=gr.themes.Soft(), + css=""" + .chat-container { height: 500px; overflow-y: auto; } + .logs-container { height: 300px; overflow-y: auto; font-family: monospace; } + .status-container { padding: 10px; background-color: #f0f0f0; border-radius: 5px; } + """ + ) as interface: + + gr.Markdown("# šŸ¤– Browser AI Chat Interface") + gr.Markdown("Interact with browser automation through a conversational interface") + + with gr.Row(): + with gr.Column(scale=2): + # Chat interface + chatbot = gr.Chatbot( + value=[], + height=500, + label="Chat with Browser AI", + elem_classes=["chat-container"], + type="messages" # Use modern message format + ) + + with gr.Row(): + msg_input = gr.Textbox( + placeholder="Describe what you want to do (e.g., 'Search for Python tutorials on Google')", + label="Your message", + scale=4 + ) + send_btn = gr.Button("Send", variant="primary") + stop_btn = gr.Button("ā¹ļø Stop", variant="stop", interactive=False) + + with gr.Column(scale=1): + # Configuration panel + gr.Markdown("## āš™ļø Configuration") + + llm_dropdown = gr.Dropdown( + choices=self._get_available_llm_configs(), + value=self._get_available_llm_configs()[0] if self._get_available_llm_configs() else None, + label="Select 
LLM", + interactive=True + ) + + refresh_btn = gr.Button("šŸ”„ Refresh", size="sm") + + # Status display + status_display = gr.Markdown( + value=self._get_current_status(), + elem_classes=["status-container"] + ) + + # Logs display + gr.Markdown("## šŸ“‹ Logs") + logs_display = gr.Textbox( + value=self._get_logs(), + label="Real-time logs", + max_lines=15, + elem_classes=["logs-container"], + interactive=False + ) + + # LLM Configuration Panel (collapsible) + with gr.Accordion("šŸ”§ Add New LLM Configuration", open=False): + with gr.Row(): + with gr.Column(): + config_name = gr.Textbox(label="Configuration Name", placeholder="e.g., 'my_openai'") + provider_dropdown = gr.Dropdown( + choices=["openai", "anthropic", "ollama"], + label="Provider", + value="openai" + ) + model_input = gr.Textbox(label="Model", placeholder="e.g., 'gpt-4o-mini'") + + with gr.Column(): + api_key_input = gr.Textbox( + label="API Key", + type="password", + placeholder="Enter API key (leave empty for Ollama)" + ) + base_url_input = gr.Textbox( + label="Base URL", + placeholder="http://localhost:11434 (for Ollama)" + ) + temperature_slider = gr.Slider( + minimum=0.0, + maximum=2.0, + value=0.1, + step=0.1, + label="Temperature" + ) + + add_config_btn = gr.Button("Add Configuration", variant="primary") + config_result = gr.Markdown() + + # Event handlers + def send_message(message, llm_config, history): + chat_history, cleared_input, enable_stop = self._process_chat_message(message, llm_config, history) + return chat_history, cleared_input, gr.Button(interactive=enable_stop) + + def stop_task_handler(history): + chat_history, disable_stop = self._handle_stop_button(history) + return chat_history, gr.Button(interactive=disable_stop) + + def refresh_llm_configs(): + return gr.Dropdown(choices=self._get_available_llm_configs()), gr.Button(interactive=self.running_task) + + def update_logs(): + return self._get_logs() + + def update_status(): + return self._get_current_status() + + def 
add_config(name, provider, model, api_key, base_url, temperature): + result = self._add_llm_config(name, provider, model, api_key, base_url, temperature) + return result, gr.Dropdown(choices=self._get_available_llm_configs()), gr.Button(interactive=self.running_task) + + # Wire up events + send_btn.click( + send_message, + inputs=[msg_input, llm_dropdown, chatbot], + outputs=[chatbot, msg_input, stop_btn] + ) + + msg_input.submit( + send_message, + inputs=[msg_input, llm_dropdown, chatbot], + outputs=[chatbot, msg_input, stop_btn] + ) + + stop_btn.click( + stop_task_handler, + inputs=[chatbot], + outputs=[chatbot, stop_btn] + ) + + refresh_btn.click( + refresh_llm_configs, + outputs=[llm_dropdown, stop_btn] + ) + + add_config_btn.click( + add_config, + inputs=[config_name, provider_dropdown, model_input, api_key_input, base_url_input, temperature_slider], + outputs=[config_result, llm_dropdown, stop_btn] + ) + + # Auto-refresh logs and status + def refresh_logs_periodically(): + while True: + time.sleep(2) + yield update_logs() + + def refresh_status_periodically(): + while True: + time.sleep(1) + yield update_status() + + # Note: For production, use gr.Interface.load() with event handlers + # instead of the deprecated 'every' parameter + + return interface + + def launch(self, **kwargs): + """Launch the web interface""" + interface = self.create_interface() + return interface.launch(**kwargs) + + +def main(): + """Main entry point for web interface""" + chat_interface = WebChatInterface() + chat_interface.launch( + share=False, + server_name="0.0.0.0", + server_port=7860, + inbrowser=True + ) + + +if __name__ == "__main__": + main() \ No newline at end of file diff --git a/demo_chat_interface.py b/demo_chat_interface.py new file mode 100755 index 0000000..c666c7e --- /dev/null +++ b/demo_chat_interface.py @@ -0,0 +1,213 @@ +#!/usr/bin/env python3 +""" +Demo script for Browser AI Chat Interface + +This script demonstrates the key features of the chat interface 
system. +""" + +import asyncio +import time +from browser_ai import Agent +from chat_interface import LogEventListener, ConfigManager +from chat_interface.config_manager import LLMConfig, LLMProvider + + +async def demo_event_system(): + """Demonstrate the event listening system""" + print("šŸŽÆ Demo: Event Listener System") + print("=" * 50) + + # Create event listener + listener = LogEventListener() + listener.start_listening() + + # Set up event handlers + def on_log(event): + print(f"šŸ“‹ [{event.timestamp.strftime('%H:%M:%S')}] {event.level.value.upper()}: {event.message}") + + def on_task_update(update): + print(f"šŸŽÆ Task {update.task_id}: {update.status.value} (Step {update.step_number})") + if update.result: + print(f"āœ… Result: {update.result[:100]}...") + + listener.subscribe_to_logs(on_log) + listener.subscribe_to_tasks(on_task_update) + + print("āœ… Event listener started and subscribed") + + # Simulate some events + from chat_interface.event_listener import LogEvent, LogLevel, TaskStatus, TaskUpdate + from datetime import datetime + + # Simulate log events + test_event = LogEvent( + timestamp=datetime.now(), + level=LogLevel.INFO, + message="Demo: Browser automation started", + source="demo", + task_status=TaskStatus.RUNNING + ) + listener._emit_log_event(test_event) + + # Simulate task update + test_update = TaskUpdate( + task_id="demo-task-123", + status=TaskStatus.RUNNING, + step_number=1, + total_steps=5, + current_action="Navigating to website", + result=None, + error=None, + timestamp=datetime.now() + ) + listener._emit_task_update(test_update) + + print("āœ… Demo events emitted") + time.sleep(1) + + # Get recent events + recent_events = listener.get_recent_events(10) + print(f"šŸ“Š Retrieved {len(recent_events)} recent events") + + listener.stop_listening() + print("šŸ›‘ Event listener stopped") + print() + + +def demo_config_system(): + """Demonstrate the configuration system""" + print("āš™ļø Demo: Configuration Management") + print("=" * 
50) + + # Create config manager + config = ConfigManager() + + # Show existing configs + existing_configs = config.get_llm_configs() + print(f"šŸ“‹ Found {len(existing_configs)} existing LLM configurations:") + + for name, llm_config in existing_configs.items(): + print(f" • {name}: {llm_config.provider.value} - {llm_config.model}") + + # Add a demo config (Ollama - no API key needed) + demo_config = LLMConfig( + provider=LLMProvider.OLLAMA, + model="llama3.2", + base_url="http://localhost:11434", + temperature=0.1 + ) + + config.add_llm_config("demo_ollama", demo_config) + print("āœ… Added demo Ollama configuration") + + # Show app configuration + app_config = config.get_app_config() + print(f"šŸŽØ App theme: {app_config.theme}") + print(f"šŸ“ Auto-save: {app_config.auto_save}") + print(f"šŸ“Š Log level: {app_config.log_level}") + + # Update app config + config.update_app_config(theme="dark", max_history_items=50) + print("āœ… Updated app configuration") + + # Show available models + openai_models = config.get_available_models(LLMProvider.OPENAI) + print(f"šŸ¤– OpenAI models available: {len(openai_models)}") + print(f" Examples: {openai_models[:3]}") + + print() + + +def demo_integration(): + """Demonstrate Browser AI integration""" + print("šŸ”— Demo: Browser AI Integration") + print("=" * 50) + + # This demo shows how the chat interface would integrate with Browser AI + # without actually running browser automation (which requires display) + + print("šŸŽÆ Chat Interface Integration Points:") + print(" 1. Event Listener hooks into Browser AI logging") + print(" 2. Agent callbacks provide real-time updates") + print(" 3. Configuration manager handles LLM setup") + print(" 4. Web/Desktop apps provide user interfaces") + + print("\nšŸ”„ Typical workflow:") + print(" 1. User configures LLM in interface") + print(" 2. User types natural language task") + print(" 3. System creates Browser AI Agent") + print(" 4. Agent runs with real-time callbacks") + print(" 5. 
Progress streamed to chat interface") + print(" 6. Results displayed with formatting") + + print("\nšŸ’¬ Example chat interaction:") + print(" User: 'Search for Python tutorials on Google'") + print(" System: šŸ”„ Starting task execution...") + print(" System: [12:34:56] šŸ”µ Step 1: Navigating to Google...") + print(" System: [12:34:58] šŸ”µ Step 2: Searching for 'Python tutorials'...") + print(" System: [12:35:02] 🟢 āœ… Task Completed") + print(" System: Found 10 Python tutorial results on Google") + + print() + + +def demo_interfaces(): + """Demonstrate interface capabilities""" + print("šŸ–„ļø Demo: Interface Features") + print("=" * 50) + + print("🌐 Web Interface (Gradio):") + print(" • Modern web-based chat UI") + print(" • Real-time log streaming") + print(" • LLM configuration panel") + print(" • Auto-refreshing status updates") + print(" • Mobile-responsive design") + print(" • Launch: python launch_web.py") + + print("\nšŸ–„ļø Desktop Interface (Qt):") + print(" • Native desktop application") + print(" • Chat panel with message bubbles") + print(" • Configuration sidebar") + print(" • Real-time log display") + print(" • System tray integration") + print(" • Launch: python launch_desktop.py") + + print("\nšŸ”§ Configuration Features:") + print(" • Multiple LLM providers (OpenAI, Claude, Ollama)") + print(" • API key management") + print(" • Temperature and parameter controls") + print(" • Configuration validation") + print(" • Persistent storage") + + print("\nšŸ“Š Monitoring Features:") + print(" • Real-time log streaming") + print(" • Animated status indicators") + print(" • Task progress tracking") + print(" • Error handling and reporting") + print(" • History management") + + print() + + +async def main(): + """Run all demonstrations""" + print("šŸš€ Browser AI Chat Interface Demonstration") + print("=" * 60) + print() + + # Run demonstrations + await demo_event_system() + demo_config_system() + demo_integration() + demo_interfaces() + + print("āœ… 
All demonstrations completed!") + print("\nšŸŽÆ Next Steps:") + print("1. Configure your LLM API keys") + print("2. Launch the web or desktop interface") + print("3. Start chatting to control browser automation") + print("\nFor more information, see: chat_interface/README.md") + + +if __name__ == "__main__": + asyncio.run(main()) \ No newline at end of file diff --git a/example_integration.py b/example_integration.py new file mode 100755 index 0000000..352ccd8 --- /dev/null +++ b/example_integration.py @@ -0,0 +1,190 @@ +#!/usr/bin/env python3 +""" +Simple example showing Browser AI chat interface integration. + +This example demonstrates how to use the chat interface with Browser AI +for basic browser automation tasks. +""" + +import asyncio +import os +from browser_ai import Agent, Browser +from chat_interface import LogEventListener, ConfigManager +from chat_interface.event_listener import TaskStatus +from chat_interface.config_manager import LLMConfig, LLMProvider + + +async def example_with_event_streaming(): + """Example of running Browser AI with real-time event streaming""" + print("🌐 Browser AI Chat Integration Example") + print("=" * 50) + + # Set up event listener for real-time updates + event_listener = LogEventListener() + event_listener.start_listening() + + # Subscribe to events + def on_log_event(event): + timestamp = event.timestamp.strftime("%H:%M:%S") + status_icons = { + "idle": "⚪", "running": "šŸ”µ", "paused": "🟔", + "completed": "🟢", "failed": "šŸ”“" + } + icon = status_icons.get(event.task_status.value, "⚪") + print(f"šŸ“‹ [{timestamp}] {icon} {event.message}") + + def on_task_update(update): + if update.current_action: + print(f"šŸŽÆ Step {update.step_number}: {update.current_action}") + if update.result: + print(f"āœ… Result: {update.result}") + if update.error: + print(f"āŒ Error: {update.error}") + + event_listener.subscribe_to_logs(on_log_event) + event_listener.subscribe_to_tasks(on_task_update) + + # Set up configuration + 
config_manager = ConfigManager() + + # Check if we have any LLM configured + llm_configs = config_manager.get_llm_configs() + if not llm_configs: + print("āŒ No LLM configurations found!") + print("šŸ“ Please configure an LLM first:") + print(" 1. Set OPENAI_API_KEY environment variable, or") + print(" 2. Set ANTHROPIC_API_KEY environment variable, or") + print(" 3. Start Ollama server with: ollama serve") + return + + # Use the first available LLM config + config_name = list(llm_configs.keys())[0] + llm_config = llm_configs[config_name] + + print(f"šŸ¤– Using LLM: {config_name} ({llm_config.provider.value} - {llm_config.model})") + + try: + # Create LLM instance + llm = config_manager.create_llm_instance(llm_config) + + # Create browser (this would normally be visible) + browser = Browser() + + # Define a simple task + task = "Go to https://httpbin.org/get and extract the returned JSON data" + + print(f"šŸ“‹ Task: {task}") + print("šŸ”„ Starting execution...") + + # Create agent with event callbacks + agent = Agent( + task=task, + llm=llm, + browser=browser, + register_new_step_callback=event_listener.handle_agent_step, + register_done_callback=event_listener.handle_agent_done, + generate_gif=False # Disable GIF for this example + ) + + # Set initial task status + event_listener.set_task_status("example-task", TaskStatus.RUNNING, 0) + + # Run the agent + history = await agent.run(max_steps=10) + + # Process results + if history.history: + last_result = history.history[-1] + if last_result.result: + for result in last_result.result: + if result.is_done: + print("\nšŸŽ‰ Task completed successfully!") + print(f"šŸ“„ Final result: {result.extracted_content}") + break + elif result.error: + print(f"\nāŒ Task failed: {result.error}") + break + + except Exception as e: + print(f"āŒ Error during execution: {e}") + print("šŸ’” This might be because:") + print(" • No display available (headless environment)") + print(" • Browser not installed (run: playwright install)") 
+ print(" • Network connectivity issues") + print(" • LLM API issues") + + finally: + event_listener.stop_listening() + if 'browser' in locals(): + await browser.close() + + +def example_config_setup(): + """Example of setting up LLM configurations""" + print("\nāš™ļø LLM Configuration Example") + print("=" * 50) + + config_manager = ConfigManager() + + # Example: Add OpenAI configuration + if os.getenv('OPENAI_API_KEY'): + openai_config = LLMConfig( + provider=LLMProvider.OPENAI, + model="gpt-4o-mini", + api_key=os.getenv('OPENAI_API_KEY'), + temperature=0.1 + ) + config_manager.add_llm_config("openai_example", openai_config) + print("āœ… Added OpenAI configuration") + + # Example: Add Anthropic configuration + if os.getenv('ANTHROPIC_API_KEY'): + claude_config = LLMConfig( + provider=LLMProvider.ANTHROPIC, + model="claude-3-5-sonnet-20241022", + api_key=os.getenv('ANTHROPIC_API_KEY'), + temperature=0.1 + ) + config_manager.add_llm_config("claude_example", claude_config) + print("āœ… Added Anthropic Claude configuration") + + # Example: Add Ollama configuration (no API key needed) + ollama_config = LLMConfig( + provider=LLMProvider.OLLAMA, + model="llama3.2", + base_url="http://localhost:11434", + temperature=0.1 + ) + config_manager.add_llm_config("ollama_example", ollama_config) + print("āœ… Added Ollama configuration") + + # Show all configurations + all_configs = config_manager.get_llm_configs() + print(f"\nšŸ“‹ Total configurations: {len(all_configs)}") + for name, config in all_configs.items(): + print(f" • {name}: {config.provider.value} - {config.model}") + + +async def main(): + """Run the example""" + print("šŸ¤– Browser AI Chat Interface Integration Example") + print("šŸ”— This demonstrates how the chat interface works with Browser AI") + print("=" * 70) + + # Set up configurations + example_config_setup() + + # Run browser automation with event streaming + print("\n" + "=" * 70) + await example_with_event_streaming() + + print("\n" + "=" * 70) + 
print("āœ… Example completed!") + print("\nšŸ’” To use the full chat interface:") + print(" 🌐 Web interface: python launch_web.py") + print(" šŸ–„ļø Desktop interface: python launch_desktop.py") + print("\nšŸ“š For more examples, see: chat_interface/README.md") + + +if __name__ == "__main__": + asyncio.run(main()) \ No newline at end of file diff --git a/launch_desktop.py b/launch_desktop.py new file mode 100755 index 0000000..5be617a --- /dev/null +++ b/launch_desktop.py @@ -0,0 +1,23 @@ +#!/usr/bin/env python3 +""" +Launch script for Browser AI Desktop Chat Interface +""" + +import sys +import os +from pathlib import Path + +# Add the project root to Python path +project_root = Path(__file__).parent +sys.path.insert(0, str(project_root)) + +from chat_interface.desktop_app import main + +if __name__ == "__main__": + print("šŸš€ Starting Browser AI Desktop Chat Interface...") + print("šŸ–„ļø The desktop application will open shortly") + print("āš™ļø Configure your LLM settings in the application") + print("šŸ’¬ Start chatting to control browser automation!") + print("-" * 50) + + main() \ No newline at end of file diff --git a/launch_web.py b/launch_web.py new file mode 100755 index 0000000..d0cbac9 --- /dev/null +++ b/launch_web.py @@ -0,0 +1,23 @@ +#!/usr/bin/env python3 +""" +Launch script for Browser AI Web Chat Interface +""" + +import sys +import os +from pathlib import Path + +# Add the project root to Python path +project_root = Path(__file__).parent +sys.path.insert(0, str(project_root)) + +from chat_interface.web_app import main + +if __name__ == "__main__": + print("šŸš€ Starting Browser AI Web Chat Interface...") + print("šŸ“ The web interface will open in your browser") + print("āš™ļø Configure your LLM settings in the web interface") + print("šŸ’¬ Start chatting to control browser automation!") + print("-" * 50) + + main() \ No newline at end of file diff --git a/pyproject.toml b/pyproject.toml index d5ed20c..98d4761 100644 --- a/pyproject.toml 
+++ b/pyproject.toml @@ -31,6 +31,7 @@ dependencies = [ "lmnr[langchain]>=0.4.59", "markdownify==0.14.1", "gradio>=5.44.1", + "PySide6>=6.5.0", ] [project.optional-dependencies] diff --git a/test_gemini_config.py b/test_gemini_config.py new file mode 100644 index 0000000..4f2b2ff --- /dev/null +++ b/test_gemini_config.py @@ -0,0 +1,56 @@ +#!/usr/bin/env python3 +""" +Test script to verify Gemini configuration is working +""" + +import os +from chat_interface.config_manager import ConfigManager, LLMProvider + +def test_gemini_config(): + """Test that Gemini configuration loads correctly""" + print("Testing Gemini configuration...") + + # Check if API key is available + google_key = os.getenv('GOOGLE_API_KEY') or os.getenv('GEMINI_API_KEY') + if not google_key: + print("āŒ No GOOGLE_API_KEY or GEMINI_API_KEY found in environment") + return False + + print(f"āœ… Found API key: {google_key[:10]}...") + + # Create config manager + config_manager = ConfigManager() + + # Check if Gemini config was created + configs = config_manager.get_llm_configs() + gemini_config = configs.get('gemini_pro') + + if not gemini_config: + print("āŒ Gemini configuration not found") + return False + + print(f"āœ… Gemini config found: {gemini_config.model} ({gemini_config.provider.value})") + + # Test creating LLM instance + try: + llm = config_manager.create_llm_instance(gemini_config) + print("āœ… Gemini LLM instance created successfully") + except Exception as e: + print(f"āŒ Failed to create Gemini LLM instance: {e}") + return False + + # Test simple inference + try: + response = llm.invoke("Say 'Hello from Gemini!'") + print(f"āœ… Gemini test response: {response.content[:50]}...") + return True + except Exception as e: + print(f"āŒ Gemini test failed: {e}") + return False + +if __name__ == "__main__": + success = test_gemini_config() + if success: + print("\nšŸŽ‰ Gemini configuration is working correctly!") + else: + print("\nāŒ Gemini configuration has issues.") diff --git a/uv.lock 
b/uv.lock index 8c8c121..476e84c 100644 --- a/uv.lock +++ b/uv.lock @@ -356,6 +356,7 @@ dependencies = [ { name = "playwright" }, { name = "posthog" }, { name = "pydantic" }, + { name = "pyside6" }, { name = "python-dotenv" }, { name = "requests" }, { name = "setuptools" }, @@ -388,6 +389,7 @@ requires-dist = [ { name = "playwright", specifier = ">=1.49.0" }, { name = "posthog", specifier = ">=3.7.0" }, { name = "pydantic", specifier = ">=2.10.4" }, + { name = "pyside6", specifier = ">=6.5.0" }, { name = "python-dotenv", specifier = ">=1.0.1" }, { name = "requests", specifier = ">=2.32.3" }, { name = "setuptools", specifier = ">=75.8.0" }, @@ -2696,6 +2698,54 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/bd/24/12818598c362d7f300f18e74db45963dbcb85150324092410c8b49405e42/pyproject_hooks-1.2.0-py3-none-any.whl", hash = "sha256:9e5c6bfa8dcc30091c74b0cf803c81fdd29d94f01992a7707bc97babb1141913", size = 10216, upload-time = "2024-09-29T09:24:11.978Z" }, ] +[[package]] +name = "pyside6" +version = "6.9.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pyside6-addons" }, + { name = "pyside6-essentials" }, + { name = "shiboken6" }, +] +wheels = [ + { url = "https://files.pythonhosted.org/packages/43/42/43577413bd5ab26f5f21e7a43c9396aac158a5d01900c87e4609c0e96278/pyside6-6.9.2-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:71245c76bfbe5c41794ffd8546730ec7cc869d4bbe68535639e026e4ef8a7714", size = 558102, upload-time = "2025-08-26T07:52:57.302Z" }, + { url = "https://files.pythonhosted.org/packages/12/df/cb84f802df3dcc1d196d2f9f37dbb8227761826f936987c9386b8ae1ffcc/pyside6-6.9.2-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:64a9e2146e207d858e00226f68d7c1b4ab332954742a00dcabb721bb9e4aa0cd", size = 558243, upload-time = "2025-08-26T07:52:59.272Z" }, + { url = 
"https://files.pythonhosted.org/packages/94/2d/715db9da437b4632d06e2c4718aee9937760b84cf36c23d5441989e581b0/pyside6-6.9.2-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:a78fad16241a1f2ed0fa0098cf3d621f591fc75b4badb7f3fa3959c9d861c806", size = 558245, upload-time = "2025-08-26T07:53:00.838Z" }, + { url = "https://files.pythonhosted.org/packages/59/90/2e75cbff0e17f16b83d2b7e8434ae9175cae8d6ff816c9b56d307cf53c86/pyside6-6.9.2-cp39-abi3-win_amd64.whl", hash = "sha256:d1afbf48f9a5612b9ee2dc7c384c1a65c08b5830ba5e7d01f66d82678e5459df", size = 564604, upload-time = "2025-08-26T07:53:02.402Z" }, + { url = "https://files.pythonhosted.org/packages/dc/34/e3dd4e046673efcbcfbe0aa2760df06b2877739b8f4da60f0229379adebd/pyside6-6.9.2-cp39-abi3-win_arm64.whl", hash = "sha256:1499b1d7629ab92119118e2636b4ace836b25e457ddf01003fdca560560b8c0a", size = 401833, upload-time = "2025-08-26T07:53:03.742Z" }, +] + +[[package]] +name = "pyside6-addons" +version = "6.9.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "pyside6-essentials" }, + { name = "shiboken6" }, +] +wheels = [ + { url = "https://files.pythonhosted.org/packages/47/39/a8f4a55001b6a0aaee042e706de2447f21c6dc2a610f3d3debb7d04db821/pyside6_addons-6.9.2-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:7019fdcc0059626eb1608b361371f4dc8cb7f2d02f066908fd460739ff5a07cd", size = 316693692, upload-time = "2025-08-26T07:33:31.529Z" }, + { url = "https://files.pythonhosted.org/packages/14/48/0b16e9dabd4cafe02d59531832bc30b6f0e14c92076e90dd02379d365cb2/pyside6_addons-6.9.2-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:24350e5415317f269e743d1f7b4933fe5f59d90894aa067676c9ce6bfe9e7988", size = 166984613, upload-time = "2025-08-26T07:33:47.569Z" }, + { url = "https://files.pythonhosted.org/packages/f4/55/dc42a73387379bae82f921b7659cd2006ec0e80f7052f83ddc07e9eb9cca/pyside6_addons-6.9.2-cp39-abi3-manylinux_2_39_aarch64.whl", hash = 
"sha256:af8dee517de8d336735a6543f7dd496eb580e852c14b4d2304b890e2a29de499", size = 162908466, upload-time = "2025-08-26T07:39:49.331Z" }, + { url = "https://files.pythonhosted.org/packages/14/fa/396a2e86230c493b565e2dc89dc64e4b1c63582ac69afe77b693c3817a53/pyside6_addons-6.9.2-cp39-abi3-win_amd64.whl", hash = "sha256:98d2413904ee4b2b754b077af7875fa6ec08468c01a6628a2c9c3d2cece4874f", size = 160216647, upload-time = "2025-08-26T07:42:18.903Z" }, + { url = "https://files.pythonhosted.org/packages/a7/fe/25f61259f1d5ec4648c9f6d2abd8e2cba2188f10735a57abafda719958e5/pyside6_addons-6.9.2-cp39-abi3-win_arm64.whl", hash = "sha256:b430cae782ff1a99fb95868043557f22c31b30c94afb9cf73278584e220a2ab6", size = 27126649, upload-time = "2025-08-26T07:42:37.696Z" }, +] + +[[package]] +name = "pyside6-essentials" +version = "6.9.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "shiboken6" }, +] +wheels = [ + { url = "https://files.pythonhosted.org/packages/08/21/41960c03721a99e7be99a96ebb8570bdfd6f76f512b5d09074365e27ce28/pyside6_essentials-6.9.2-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:713eb8dcbb016ff10e6fca129c1bf2a0fd8cfac979e689264e0be3b332f9398e", size = 133092348, upload-time = "2025-08-26T07:43:57.231Z" }, + { url = "https://files.pythonhosted.org/packages/3e/02/e38ff18f3d2d8d3071aa6823031aad6089267aa4668181db65ce9948bfc0/pyside6_essentials-6.9.2-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:84b8ca4fa56506e2848bdb4c7a0851a5e7adcb916bef9bce25ce2eeb6c7002cc", size = 96569791, upload-time = "2025-08-26T07:44:41.392Z" }, + { url = "https://files.pythonhosted.org/packages/9a/a1/1203d4db6919b42a937d9ac5ddb84b20ea42eb119f7c1ddeb77cb8fdb00c/pyside6_essentials-6.9.2-cp39-abi3-manylinux_2_39_aarch64.whl", hash = "sha256:d0f701503974bd51b408966539aa6956f3d8536e547ea8002fbfb3d77796bbc3", size = 94311809, upload-time = "2025-08-26T07:46:44.924Z" }, + { url = 
"https://files.pythonhosted.org/packages/a8/e3/3b3e869d3e332b6db93f6f64fac3b12f5c48b84f03f2aa50ee5c044ec0de/pyside6_essentials-6.9.2-cp39-abi3-win_amd64.whl", hash = "sha256:b2f746f795138ac63eb173f9850a6db293461a1b6ce22cf6dafac7d194a38951", size = 72624566, upload-time = "2025-08-26T07:48:04.64Z" }, + { url = "https://files.pythonhosted.org/packages/91/70/db78afc8b60b2e53f99145bde2f644cca43924a4dd869ffe664e0792730a/pyside6_essentials-6.9.2-cp39-abi3-win_arm64.whl", hash = "sha256:ecd7b5cd9e271f397fb89a6357f4ec301d8163e50869c6c557f9ccc6bed42789", size = 49561720, upload-time = "2025-08-26T07:49:43.708Z" }, +] + [[package]] name = "python-dateutil" version = "2.9.0.post0" @@ -2985,6 +3035,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/e0/f9/0595336914c5619e5f28a1fb793285925a8cd4b432c9da0a987836c7f822/shellingham-1.5.4-py2.py3-none-any.whl", hash = "sha256:7ecfff8f2fd72616f7481040475a65b2bf8af90a56c89140852d1120324e8686", size = 9755, upload-time = "2023-10-24T04:13:38.866Z" }, ] +[[package]] +name = "shiboken6" +version = "6.9.2" +source = { registry = "https://pypi.org/simple" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/1a/1e/62a8757aa0aa8d5dbf876f6cb6f652a60be9852e7911b59269dd983a7fb5/shiboken6-6.9.2-cp39-abi3-macosx_12_0_universal2.whl", hash = "sha256:8bb1c4326330e53adeac98bfd9dcf57f5173a50318a180938dcc4825d9ca38da", size = 406337, upload-time = "2025-08-26T07:52:39.614Z" }, + { url = "https://files.pythonhosted.org/packages/3b/bb/72a8ed0f0542d9ea935f385b396ee6a4bbd94749c817cbf2be34e80a16d3/shiboken6-6.9.2-cp39-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:3b54c0a12ea1b03b9dc5dcfb603c366e957dc75341bf7cb1cc436d0d848308ee", size = 206733, upload-time = "2025-08-26T07:52:41.768Z" }, + { url = "https://files.pythonhosted.org/packages/52/c4/09e902f5612a509cef2c8712c516e4fe44f3a1ae9fcd8921baddb5e6bae4/shiboken6-6.9.2-cp39-abi3-manylinux_2_39_aarch64.whl", hash = 
"sha256:a5f5985938f5acb604c23536a0ff2efb3cccb77d23da91fbaff8fd8ded3dceb4", size = 202784, upload-time = "2025-08-26T07:52:43.172Z" }, + { url = "https://files.pythonhosted.org/packages/a4/ea/a56b094a4bf6facf89f52f58e83684e168b1be08c14feb8b99969f3d4189/shiboken6-6.9.2-cp39-abi3-win_amd64.whl", hash = "sha256:68c33d565cd4732be762d19ff67dfc53763256bac413d392aa8598b524980bc4", size = 1152089, upload-time = "2025-08-26T07:52:45.162Z" }, + { url = "https://files.pythonhosted.org/packages/48/64/562a527fc55fbf41fa70dae735929988215505cb5ec0809fb0aef921d4a0/shiboken6-6.9.2-cp39-abi3-win_arm64.whl", hash = "sha256:c5b827797b3d89d9b9a3753371ff533fcd4afc4531ca51a7c696952132098054", size = 1708948, upload-time = "2025-08-26T07:52:48.016Z" }, +] + [[package]] name = "six" version = "1.17.0" diff --git a/visual_demo.py b/visual_demo.py new file mode 100644 index 0000000..a80a845 --- /dev/null +++ b/visual_demo.py @@ -0,0 +1,118 @@ +#!/usr/bin/env python3 +""" +Visual demonstration of the Browser AI Chat Interface +""" + +import os + +def create_visual_demo(): + """Create a visual representation of the chat interface""" + + print("šŸŽØ Browser AI Chat Interface - Visual Overview") + print("=" * 80) + + # Web Interface Layout + print("🌐 WEB INTERFACE LAYOUT (Gradio)") + print("ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”") + print("│ šŸ¤– Browser AI Chat Interface │ āš™ļø Configuration │") + print("│ │ │") + print("│ ā”Œā”€ā”€ā”€ Chat History ───────────────┐ │ Select LLM: [OpenAI GPT-4ā–¼] │") + print("│ │ User: Search Python tutorials │ │ [šŸ”„ Refresh] │") + print("│ │ Bot: šŸ”„ Starting execution... │ │ │") + print("│ │ Bot: šŸ”µ Step 1: Google... │ │ Status: šŸ”µ Running Step 2 │") + print("│ │ Bot: 🟢 āœ… Task completed! 
│ │ │") + print("│ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ │ šŸ“‹ Real-time Logs │") + print("│ │ ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā” │") + print("│ ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā” │ │[08:15:32] šŸ”µ Navigating...│ │") + print("│ │ Your message: [____________] šŸ“¤ │ │ │[08:15:35] šŸ”µ Clicking... │ │") + print("│ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ │ │[08:15:38] 🟢 Success! │ │") + print("│ │ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ │") + print("ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”“ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜") + + print("\n" + "=" * 80) + + # Desktop Interface Layout + print("šŸ–„ļø DESKTOP INTERFACE LAYOUT (Qt)") + print("ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”") + print("│ šŸ¤– Browser AI Chat Interface āš™ļø Configuration │") + print("ā”œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¤") + print("│ šŸ’¬ Chat Messages │ LLM: [Claude 3.5 ā–¼] [šŸ”„] │") + print("│ │ [Add LLM Configuration] │") + print("│ ā”Œā”€ You ──────────────────────────┐ │ │") + print("│ │ Search for Python tutorials │ │ Status: 🟢 Completed │") + print("│ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ │ ━━━━━━━━━━━━━━━━━━━━━━━ │") + print("│ │ │") + print("│ 
ā”Œā”€ Assistant ────────────────────┐ │ šŸ“‹ Real-time Logs │") + print("│ │ āœ… Task completed! │ │ ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā” │") + print("│ │ Found Python tutorial results │ │ │[08:15:32] šŸ”µ Starting... │ │") + print("│ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ │ │[08:15:34] šŸ”µ Step 1: Nav... │ │") + print("│ │ │[08:15:36] šŸ”µ Step 2: Search │ │") + print("│ ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā” │ │[08:15:38] 🟢 Completed! │ │") + print("│ │ Type your message... [šŸ“¤]│ │ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ │") + print("│ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ │ │") + print("ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”“ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜") + + print("\n" + "=" * 80) + + # Architecture Diagram + print("šŸ—ļø ARCHITECTURE OVERVIEW") + print() + print("ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā” ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā” ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”") + print("│ Web App │ │ Desktop App │ │ Browser AI │") + print("│ (Gradio) │ │ (PyQt5) │ │ Library │") + print("ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜") + print(" │ │ │") + print(" ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜ │") + print(" │ │") + print(" ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā–¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā” │") + print(" │ Event Listener 
ā”‚ā—„ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜") + print(" │ Adapter │ (Hooks into logging)") + print(" ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”¬ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜") + print(" │") + print(" ā”Œā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā–¼ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”") + print(" │ Config Manager │") + print(" │ (Multi-LLM) │") + print(" ā””ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”€ā”˜") + + print("\n" + "=" * 80) + + # Feature Summary + print("⭐ KEY FEATURES IMPLEMENTED") + print() + print("šŸŽÆ Conversational Interface") + print(" • GitHub Copilot-style chat UI") + print(" • Natural language task input") + print(" • Real-time progress updates") + print() + print("šŸ”§ Multi-LLM Support") + print(" • OpenAI (GPT-4, GPT-3.5)") + print(" • Anthropic Claude") + print(" • Ollama (Local models)") + print(" • Easy provider addition") + print() + print("šŸ“Š Real-time Monitoring") + print(" • Live log streaming") + print(" • Animated status indicators") + print(" • Step-by-step progress") + print(" • Error handling & reporting") + print() + print("āš™ļø Configuration Management") + print(" • API key secure storage") + print(" • Parameter controls") + print(" • Connection testing") + print(" • Persistent settings") + print() + + print("=" * 80) + print("šŸš€ LAUNCH COMMANDS") + print() + print("🌐 Web Interface: python launch_web.py") + print("šŸ–„ļø Desktop Interface: python launch_desktop.py") + print("šŸ“‹ Demo: python demo_chat_interface.py") + print("šŸ”— Example: python example_integration.py") + print() + print("šŸ“š Documentation: chat_interface/README.md") + print("=" * 80) + +if __name__ == "__main__": + create_visual_demo() \ No newline at end of file
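The event-streaming pattern these scripts rely on (a listener that fans each log event out to subscriber callbacks, so the chat panel and the log panel both see every update) can be sketched in isolation. The sketch below uses only the standard library; `LogEvent` and `MiniEventListener` are simplified stand-ins for illustration, not the real `chat_interface.LogEventListener` API:

```python
# Standalone sketch of the publish/subscribe pattern behind the chat interface's
# event listener. Names here (LogEvent, MiniEventListener) are simplified
# stand-ins, not the actual chat_interface API.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, List


@dataclass
class LogEvent:
    """One log line emitted during a browser-automation step."""
    message: str
    status: str = "running"  # idle | running | paused | completed | failed
    timestamp: datetime = field(default_factory=datetime.now)


class MiniEventListener:
    """Fans each emitted event out to every registered subscriber callback."""

    def __init__(self) -> None:
        self._subscribers: List[Callable[[LogEvent], None]] = []

    def subscribe(self, callback: Callable[[LogEvent], None]) -> None:
        self._subscribers.append(callback)

    def emit(self, event: LogEvent) -> None:
        # Every subscriber (chat UI, log panel, ...) receives every event.
        for callback in self._subscribers:
            callback(event)


if __name__ == "__main__":
    listener = MiniEventListener()
    received: List[str] = []
    listener.subscribe(lambda e: received.append(f"{e.status}: {e.message}"))
    listener.emit(LogEvent("Step 1: Navigating to Google...", "running"))
    listener.emit(LogEvent("Task Completed", "completed"))
    print(received)  # formatted entries, in emission order
```

In the real integration, `Agent`'s step and done callbacks would call `emit` (or its equivalent), which is why both the web and desktop front ends can display the same run without knowing about each other.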