
matthewaltenburg/Deepthink


🌟 DeepThink - A Southern Cross AI Team

🎯 Our Mission

DeepThink is an AI-driven research team under Southern Cross AI, focused on building a modular, containerized AI application development platform. Our system is based on Ollama (for running large language models), LangChain (for orchestration), and Gradio (for web-based interaction). We aim to deliver a flexible, cross-platform development pipeline using Docker, enabling users to rapidly prototype and deploy offline AI apps.


🔍 Key Objectives

🏗️ Containerized Development Framework

  • Develop a custom Docker image combining Ollama, LangChain, and Gradio.
  • Enable local execution of LLMs with modular backend/frontend separation.
  • Ensure the image is compatible across platforms (x86, ARM).

🎮 Pipeline & Interaction Design

  • Implement a prompt → chain → response workflow using LangChain.
  • Use Gradio to build an interactive frontend that starts automatically.
  • Support optional modules like document-based Q&A using local knowledge.
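The prompt → chain → response workflow above can be sketched without any dependencies. Here `FakeLLM` logic stands in for an Ollama-backed model, and the `|` composition mirrors the style of LangChain's LCEL (the `Runnable` class below is a minimal illustrative stand-in, not LangChain's own):

```python
class Runnable:
    """Minimal LCEL-style composable step: chain = a | b | c."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Piping two steps yields a new step that runs them in sequence.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

# Prompt step: fill a template from the input dict.
prompt = Runnable(lambda d: f"Answer briefly: {d['question']}")

# Model step: a fake LLM standing in for a local Ollama model.
fake_llm = Runnable(lambda p: f"[llm] {p}")

# Parser step: strip the tag to get the final response string.
parser = Runnable(lambda s: s.removeprefix("[llm] "))

chain = prompt | fake_llm | parser
print(chain.invoke({"question": "What is DeepThink?"}))
# → Answer briefly: What is DeepThink?
```

In the real stack, `prompt`, `fake_llm`, and `parser` would be replaced by a LangChain prompt template, an Ollama chat model, and an output parser composed the same way.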

🏗 Implementation Plan

✅ Step 1: Foundation Setup & Model Loading

  • Create a multi-stage Dockerfile with Ollama.
  • Integrate model caching, environment variables, and readiness probes.
  • Support both CPU and GPU acceleration.
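A readiness probe can simply ask the Ollama API which models are already cached. The helper below parses the JSON shape returned by Ollama's `GET /api/tags` endpoint; the `model_ready` name and the sample payload are illustrative, not part of the project's code:

```python
import json

def model_ready(tags_json: str, model: str) -> bool:
    """Return True if `model` appears in the /api/tags model list."""
    models = json.loads(tags_json).get("models", [])
    return any(m.get("name", "").startswith(model) for m in models)

# Sample of the response shape from GET http://localhost:11434/api/tags
sample = '{"models": [{"name": "llama3:latest", "size": 4661224676}]}'

print(model_ready(sample, "llama3"))   # → True
print(model_ready(sample, "mistral"))  # → False
```

A container healthcheck can then poll this endpoint and report ready only once the required model is present in the cache.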

✅ Step 2: LangChain Integration

  • Install LangChain and implement basic LLMChain logic.
  • Use LCEL and Jinja2 templates for structured chaining.
  • Add support for conversation memory, structured outputs, and asynchronous tasks.
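Conversation memory can be prototyped as a bounded message buffer rendered into the prompt on each turn. `ChatMemory` below is a hypothetical sketch of the idea, not LangChain's own memory classes:

```python
from collections import deque

class ChatMemory:
    """Keep the last `max_turns` (role, text) messages for prompt building."""
    def __init__(self, max_turns: int = 4):
        # One user + one assistant message per turn; older turns fall off.
        self.messages = deque(maxlen=2 * max_turns)

    def add(self, role: str, text: str):
        self.messages.append((role, text))

    def render(self, question: str) -> str:
        """Render history plus the new question into a single prompt string."""
        history = "\n".join(f"{r}: {t}" for r, t in self.messages)
        return f"{history}\nuser: {question}\nassistant:"

mem = ChatMemory(max_turns=2)
mem.add("user", "Hello")
mem.add("assistant", "Hi! How can I help?")
print(mem.render("What is DeepThink?"))
```

The bounded deque keeps prompts from growing without limit, which matters when the model's context window is fixed.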

✅ Step 3: Gradio Interface Development

  • Build a Gradio Blocks-based UI with real-time chat interaction.
  • Include features like KaTeX rendering, session persistence, and light/dark modes.
  • Automatically launch frontend on container startup.
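The chat callback that a `gr.Blocks` UI binds to its input box can be developed and tested without Gradio itself. `respond` below follows the (message, history) → (cleared input, updated history) convention used with a tuple-style `gr.Chatbot`; the `answer` function is a hypothetical backend call standing in for the LangChain chain:

```python
def answer(message: str) -> str:
    """Hypothetical backend call; the real app would invoke the LangChain chain."""
    return f"You said: {message}"

def respond(message: str, history: list[tuple[str, str]]):
    """Append the new (user, bot) pair and clear the textbox."""
    history = history + [(message, answer(message))]
    return "", history

# In the real UI this would be wired up roughly as:
#   with gr.Blocks() as demo:
#       chatbot = gr.Chatbot()
#       box = gr.Textbox()
#       box.submit(respond, [box, chatbot], [box, chatbot])
#   demo.launch(server_name="0.0.0.0")  # launched on container startup

cleared, hist = respond("hello", [])
print(hist)  # → [('hello', 'You said: hello')]
```

Keeping the handler free of UI code makes it straightforward to unit-test the chat logic separately from the frontend.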

✅ Step 4: Deployment and Documentation

  • Provide Docker Compose and Kubernetes deployment scripts.
  • Integrate monitoring tools like Prometheus and OpenTelemetry.
  • Document APIs, configs, and troubleshooting.
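A Docker Compose deployment might look like the sketch below. The service name, image tag, ports, and volume paths are assumptions for illustration, not the project's published configuration:

```yaml
# Illustrative docker-compose sketch; names and paths are assumptions.
services:
  deepthink:
    image: deepthink/app:latest      # hypothetical image tag
    ports:
      - "7860:7860"                  # Gradio UI
      - "11434:11434"                # Ollama API
    volumes:
      - ollama-models:/root/.ollama  # persist cached models across restarts
    environment:
      - OLLAMA_HOST=0.0.0.0
volumes:
  ollama-models:
```

Persisting the model cache in a named volume avoids re-downloading multi-gigabyte models on every container restart.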

✅ Step 5 (Optional): Knowledge Base Expansion

  • Implement hybrid retrieval using BM25 + vector similarity.
  • Add RAG pipelines with semantic chunking and query rewriting.
  • Support multi-format documents and persistent embeddings.

🌍 Why It Matters

By combining cutting-edge open-source tools in a unified platform, DeepThink enables developers to:

  • Run LLMs locally with privacy and speed.
  • Rapidly prototype AI workflows with reusable components.
  • Easily adapt the stack for future extensions like voice input/output and mobile deployment.

🚀 Join us as we shape the future of modular, local-first AI development! 🎮



🎥 Our Work

🎬 Watch on YouTube to see our team introduce the DeepThink project, demonstrate key features, and walk through our development process.



📄 Additional Project Documents

You can find our supporting files and documentation here:


👥 Team Members

| Name | Role | Email |
| --- | --- | --- |
| Xiangyu Tan | System Integration and Logic Control | u7779491@anu.edu.au |
| Diming Xu | Frontend Interaction and UI | u7705332@anu.edu.au |
| Zhuiqi Lin | Tech Lead | u7733924@anu.edu.au |
| Boyang Zhang | Documentation and Testing | u7760642@anu.edu.au |
| Qingchuan Rui | LangChain Engineer | u7776331@anu.edu.au |
| Dongze Yu | Extension Function and Future Planning | u7775416@anu.edu.au |
