Matthew Altenburg edited this page May 16, 2025 · 2 revisions

๐ŸŒŸ DeepThink - A Southern Cross AI Team

๐ŸŽฏ Our Mission

DeepThink is an AI-driven research team under Southern Cross AI, focused on building a modular, containerized AI application development platform. Our system is based on Ollama (for running large language models), LangChain (for orchestration), and Gradio (for web-based interaction). We aim to deliver a flexible, cross-platform development pipeline using Docker, enabling users to rapidly prototype and deploy offline AI apps.


๐Ÿ” Key Objectives

๐Ÿ—๏ธ Containerized Development Framework

  • Develop a custom Docker image combining Ollama, LangChain, and Gradio.
  • Enable local execution of LLMs with modular backend/frontend separation.
  • Ensure the image is compatible across platforms (x86_64, ARM64).

๐ŸŽฎ Pipeline & Interaction Design

  • Implement a prompt โ†’ chain โ†’ response workflow using LangChain.
  • Use Gradio to build an interactive frontend that starts automatically.
  • Support optional modules like document-based Q&A using local knowledge.
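The prompt → chain → response workflow above can be sketched as a plain function pipeline. This stdlib-only illustration shows the shape of the flow, not the actual LangChain API; `fake_llm` is a hypothetical stand-in for an Ollama-backed model:

```python
def make_pipeline(template, llm):
    """Compose the three stages: prompt formatting, model call, response parsing."""
    def run(user_input):
        prompt = template.format(question=user_input)  # prompt stage
        raw = llm(prompt)                              # chain stage (model call)
        return raw.strip()                             # response stage (parsing)
    return run

# Hypothetical stand-in for a local Ollama-backed model.
def fake_llm(prompt):
    return f"  [model answer to: {prompt}]  "

chain = make_pipeline("Answer concisely: {question}", fake_llm)
print(chain("What is LangChain?"))
```

In the real system each stage would be a LangChain runnable, so stages can be swapped (e.g. a retrieval step inserted before the model call) without touching the frontend.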

๐Ÿ— Implementation Plan

โœ… Step 1: Foundation Setup & Model Loading

  • Create a multi-stage Dockerfile with Ollama.
  • Integrate model caching, environment variables, and readiness probes.
  • Support both CPU and GPU acceleration.
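A readiness probe for this setup can poll Ollama's model-list endpoint (`GET /api/tags` on its default port 11434). The sketch below is a minimal stdlib implementation; the function names and retry parameters are illustrative, not part of the project's published tooling:

```python
import time
import urllib.error
import urllib.request

def ollama_ready(base_url="http://localhost:11434", timeout=2.0):
    """Return True if the Ollama server answers its model-list endpoint."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def wait_for_ollama(base_url="http://localhost:11434", retries=30, delay=1.0):
    """Poll until the server is up; usable as a container readiness probe."""
    for _ in range(retries):
        if ollama_ready(base_url):
            return True
        time.sleep(delay)
    return False
```

Run as a container healthcheck, this keeps the Gradio frontend from starting before a model can actually be served.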

โœ… Step 2: LangChain Integration

  • Install LangChain and implement basic LLMChain logic.
  • Use LCEL and Jinja2 templates for structured chaining.
  • Add support for conversation memory, structured outputs, and asynchronous tasks.
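A minimal LCEL chain for this step might look like the following sketch. It assumes the `langchain-core` and `langchain-ollama` packages are installed, and the model name `llama3` is a placeholder; imports are deferred so the module can be loaded without those packages present:

```python
def build_chain(model_name="llama3"):
    """Assemble an LCEL pipeline: prompt | model | output parser."""
    # Deferred imports: assumes langchain-core and langchain-ollama are installed.
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_ollama import ChatOllama

    prompt = ChatPromptTemplate.from_template(
        "You are a concise assistant. Question: {question}"
    )
    llm = ChatOllama(model=model_name)  # talks to the local Ollama server
    return prompt | llm | StrOutputParser()

if __name__ == "__main__":
    chain = build_chain()
    print(chain.invoke({"question": "What does LCEL stand for?"}))
```

Because LCEL chains expose `invoke`, `stream`, and `ainvoke`, the same pipeline covers the synchronous, streaming, and asynchronous cases mentioned above.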

โœ… Step 3: Gradio Interface Development

  • Build a Gradio Blocks-based UI with real-time chat interaction.
  • Include features like KaTeX rendering, session persistence, and light/dark modes.
  • Automatically launch frontend on container startup.
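A minimal Blocks-based chat UI for this step could be sketched as below. `respond` is a placeholder for the LangChain backend, the tuple-based chat history is one of Gradio's supported formats, and running the script as the container entrypoint is what "launch on startup" amounts to in practice:

```python
def build_ui(respond):
    """Minimal Gradio Blocks chat interface around a respond(message) callable."""
    import gradio as gr  # deferred so the module loads without gradio installed

    with gr.Blocks() as demo:
        chatbot = gr.Chatbot()
        box = gr.Textbox(placeholder="Type a message and press Enter")

        def on_submit(message, history):
            history = (history or []) + [(message, respond(message))]
            return "", history

        box.submit(on_submit, inputs=[box, chatbot], outputs=[box, chatbot])
    return demo

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the UI is reachable from outside the container.
    build_ui(lambda m: f"(placeholder reply to: {m})").launch(
        server_name="0.0.0.0", server_port=7860
    )
```

Features such as KaTeX rendering and theming layer onto this skeleton via `gr.Chatbot`'s markdown/LaTeX options and Gradio themes.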

โœ… Step 4: Deployment and Documentation

  • Provide Docker Compose and Kubernetes deployment scripts.
  • Integrate monitoring tools like Prometheus and OpenTelemetry.
  • Document APIs, configs, and troubleshooting.
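For the Compose path, a deployment might look like this sketch. The service and volume names and the single-image layout are illustrative assumptions, not the project's published configuration; 7860 is Gradio's default port, 11434 is Ollama's default API port, and `/root/.ollama` is where Ollama caches models:

```yaml
# Illustrative compose file, not the project's published configuration.
services:
  deepthink:
    build: .
    ports:
      - "7860:7860"     # Gradio frontend
      - "11434:11434"   # Ollama API (default port)
    volumes:
      - ollama-models:/root/.ollama   # persist the model cache across restarts
    environment:
      - OLLAMA_HOST=0.0.0.0           # listen on all interfaces inside the container
volumes:
  ollama-models:
```

The named volume keeps downloaded model weights out of the image, so rebuilds stay fast and pulls happen once.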

โœ… Step 5 (Optional): Knowledge Base Expansion

  • Implement hybrid retrieval using BM25 + vector similarity.
  • Add RAG pipelines with semantic chunking and query rewriting.
  • Support multi-format documents and persistent embeddings.
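One common way to combine BM25 and vector rankings is reciprocal rank fusion (RRF); the source does not specify which fusion method the team uses, so the stdlib sketch below is an illustration of the scoring rule rather than the project's implementation:

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document ids into a single ranking.

    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the constant suggested in the original RRF paper.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc_a", "doc_b", "doc_c"]    # lexical ranking
vector_hits = ["doc_b", "doc_d", "doc_a"]  # semantic ranking
fused = reciprocal_rank_fusion([bm25_hits, vector_hits])
# fused == ["doc_b", "doc_a", "doc_d", "doc_c"]
```

Rank-based fusion like this sidesteps the problem that BM25 scores and cosine similarities live on incomparable scales.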

๐ŸŒ Why It Matters

By combining cutting-edge open-source tools in a unified platform, DeepThink enables developers to:

  • Run LLMs locally with privacy and speed.
  • Rapidly prototype AI workflows with reusable components.
  • Easily adapt the stack for future extensions like voice input/output and mobile deployment.

๐Ÿš€ Join us as we shape the future of modular, local-first AI development! ๐ŸŽฎ


๐Ÿ“„ Additional Project Documents

You can find our supporting files and documentation here:


๐Ÿ‘ฅ Team Members

Name            Role         Email
XiangyuTan      Team member  u7779491@anu.edu.au
Diming Xu       Team member  u7705332@anu.edu.au
Zhuiqi Lin      Team member  u7733924@anu.edu.au
Boyang Zhang    Team member  u7760642@anu.edu.au
Qingchuan Rui   Team member  u7776331@anu.edu.au
Dongze Yu       Team member  u7775416@anu.edu.au