A local real-time chatbot framework with emotional context awareness that adapts its text responses to how you feel
Powered by OpenCV, FER + MTCNN, and Ollama for emotionally intelligent conversations.
- Real-time emotion integration: Chatbot adapts to your facial expressions
- Multi-modal experience: Video + Text + Emotion visualization
- Privacy-first: No cloud processing - all computations happen locally
- Cutting-edge stack: Combines computer vision and large language models through a shared emotion context
flowchart LR
A("🖼️ OpenCV"):::opencv --> B("🔍 MTCNN Face Detection"):::detector
B --> C("😊 FER Emotion Recognition"):::fer
C --> D("💡 Emotion Context Engine"):::context
D --> E("🤖 Ollama AI Model"):::ai
E --> F("❤️ Emotion-Aware Responses"):::output
classDef opencv fill:#6366F1,stroke:#4F46E5,stroke-width:2px,color:#fff;
classDef detector fill:#2563EB,stroke:#1E40AF,stroke-width:2px,color:#fff;
classDef fer fill:#10B981,stroke:#047857,stroke-width:2px,color:#fff;
classDef context fill:#F59E0B,stroke:#B45309,stroke-width:2px,color:#fff;
classDef ai fill:#EF4444,stroke:#9B1C1C,stroke-width:2px,color:#fff;
classDef output fill:#8B5CF6,stroke:#6B21A8,stroke-width:2px,color:#fff;
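The flow above maps almost one-to-one onto a few library calls. Below is a minimal, illustrative sketch of that loop, assuming the `opencv-python`, `fer`, and `ollama` Python packages and a locally pulled `llama2` model; it is not the exact code from `backend/`, just a demonstration of the pipeline.

```python
import cv2
from fer import FER   # FER bundles an optional MTCNN face detector
import ollama

detector = FER(mtcnn=True)   # MTCNN face detection + FER emotion classifier
cap = cv2.VideoCapture(0)    # OpenCV webcam capture

ok, frame = cap.read()
cap.release()

emotion, score = (None, None)
if ok:
    # top_emotion returns e.g. ("happy", 0.93), or (None, None) when no face is found
    emotion, score = detector.top_emotion(frame)

# Emotion context engine: fold the detected emotion into the system prompt
context = f"The user currently looks {emotion}." if emotion else "The user's emotion is unknown."

reply = ollama.chat(
    model="llama2",
    messages=[
        {"role": "system", "content": f"You are an empathetic assistant. {context}"},
        {"role": "user", "content": "Hey, how's it going?"},
    ],
)
print(reply["message"]["content"])
```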
- Python 3.8+
- Webcam connected
- Ollama CLI / SDK installed and configured with your model of choice (a quick sanity check is sketched below)
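To verify these prerequisites once the Python dependencies are installed, a quick sanity check along these lines can help. The snippet assumes the `opencv-python` and `ollama` packages and a running `ollama serve`; the filename is only an example and is not part of the repo.

```python
# sanity_check.py -- illustrative only, not part of the repository
import cv2
import ollama

# Webcam: index 0 is the default camera on most systems.
cap = cv2.VideoCapture(0)
print("Webcam available:", cap.isOpened())
cap.release()

# Ollama: this call fails if the local server (default http://localhost:11434) is not running.
print("Ollama models:", ollama.list())
```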
Make sure you have downloaded a model with Ollama. If you haven't, pull one like this (default: llama2):

    ollama pull llama2

Clone the repo:

    git clone https://github.com/UserEdmund/LiveChatEmotionizer_v2.git
    cd LiveChatEmotionizer_v2

Install dependencies:

    pip install -r requirements.txt

Run the Ollama server:

    ollama serve

Start the FastAPI server:

    uvicorn backend.main:app --reload

Open your browser and go to:

    http://localhost:8000

📸 Allow webcam access if prompted
📊 See the live video and emotion chart, and start chatting!
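For orientation, here is a hedged sketch of what an emotion-aware chat endpoint behind `uvicorn backend.main:app` could look like. The endpoint path, request schema, and prompt wording are assumptions for illustration only, not the repository's actual `backend/main.py`.

```python
# Hypothetical backend sketch -- not the repository's actual backend/main.py
from fastapi import FastAPI
from pydantic import BaseModel
import ollama

app = FastAPI()

class ChatRequest(BaseModel):
    message: str              # the user's chat message
    emotion: str = "neutral"  # latest emotion reported by the vision loop

@app.post("/chat")
def chat(req: ChatRequest):
    # Inject the detected emotion into the system prompt so the model can adapt its tone.
    response = ollama.chat(
        model="llama2",
        messages=[
            {"role": "system",
             "content": f"You are an empathetic assistant. The user appears {req.emotion}."},
            {"role": "user", "content": req.message},
        ],
    )
    return {"reply": response["message"]["content"], "emotion": req.emotion}
```

In the real app the browser supplies both the chat message and the emotion detected from the webcam feed; the sketch only shows how an emotion label could be folded into the prompt sent to Ollama.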
- FER for facial emotion recognition
- Ollama for the open-source AI framework
- FastAPI for the efficient web framework
- OpenCV for computer vision capabilities
📧 Contact: [email protected]
🐛 Report Issues: GitHub Issues