✨ Context-aware, low-latency command-line autocomplete powered by Trie, FAISS, and LLMs.
ShellSage is my attempt at an intelligent autocomplete engine for your terminal. It combines the speed of Trie-based prefix search, the semantic understanding of FAISS vector search, and the reasoning capabilities of LLMs to provide fast, contextually relevant command suggestions.
NOTE: Only the fish shell is supported at the moment.
- ⚡ Low-latency Suggestions using Trie prefix search
- 🧠 Context-aware Search using FAISS semantic embeddings
- 🤖 LLM-Powered Completions to refine and personalize suggestions
- 📜 User Command History Integration
- 🎛️ Hybrid Ranking Engine combining all sources
- 🧵 gRPC-based Streaming API for interactive clients
- 💾 FAISS Cache to reuse LLM results and reduce API latency
- 🔌 Modular, extensible engine built in Python
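As a quick illustration of the Trie feature, a prefix index over shell history can be sketched in a few lines. This is a minimal, self-contained example, not ShellSage's actual `TrieSearch` class:

```python
class TrieNode:
    def __init__(self):
        self.children = {}       # char -> TrieNode
        self.is_command = False  # marks the end of a full command

class Trie:
    """Minimal prefix index over shell-history commands."""

    def __init__(self):
        self.root = TrieNode()

    def insert(self, command: str) -> None:
        node = self.root
        for ch in command:
            node = node.children.setdefault(ch, TrieNode())
        node.is_command = True

    def suggest(self, prefix: str, limit: int = 5) -> list:
        # Walk down to the node matching the prefix, then collect completions.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        results = []

        def walk(n, path):
            if len(results) >= limit:
                return
            if n.is_command:
                results.append(prefix + path)
            for ch, child in sorted(n.children.items()):
                walk(child, path + ch)

        walk(node, "")
        return results

trie = Trie()
for cmd in ["git status", "git stash", "git push", "grep -r"]:
    trie.insert(cmd)
print(trie.suggest("git st"))  # ['git stash', 'git status']
```

Because lookup cost depends only on the prefix length and the number of matches, not on history size, this is the low-latency first stage.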
```
┌─────────────┐
│  Terminal   │
└─────────────┘
        │
 gRPC backend client
        │
┌───────────────────┐
│  ShellSage Core   │
└───────────────────┘
          │
   ┌──────────────┬───────────────┬────────────────┐
   │              │               │                │
TrieSearch    FaissSearch    LLMCompletion    FaissCache
(prefix)      (semantic)     (GPT-like)       (embedding cache)
```
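The fan-out in the diagram can be sketched end to end. The FAISS and LLM stages below are plain-Python stand-ins (no real embeddings or model calls) so the example stays self-contained; only the merge shape reflects the architecture, not ShellSage's actual implementation:

```python
def trie_search(prefix, history):
    """Stand-in for TrieSearch: literal prefix matches."""
    return [c for c in history if c.startswith(prefix)]

def faiss_search(prefix, history):
    """Stand-in for FaissSearch: crude 'semantic' match on shared words."""
    words = set(prefix.split())
    return [c for c in history if words & set(c.split())]

def llm_complete(prefix):
    """Stand-in for LLMCompletion: a canned refinement."""
    return [prefix + " --help"]

def suggest(prefix, history):
    """Fan out to all sources, then merge in order, dropping duplicates."""
    seen, merged = set(), []
    for source in (trie_search(prefix, history),
                   faiss_search(prefix, history),
                   llm_complete(prefix)):
        for cand in source:
            if cand not in seen:
                seen.add(cand)
                merged.append(cand)
    return merged

history = ["git status", "git push origin main", "docker ps"]
print(suggest("git", history))
# ['git status', 'git push origin main', 'git --help']
```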
```sh
# Clone the repo
git clone https://github.com/your-username/shellsage.git
cd shellsage

# Set up virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Install ollama
<install ollama>

# Pull mistral
ollama pull mistral
```

Then start the server:

```sh
python -m src.api.server
```

This will start the gRPC server on `localhost:50051`. It uses a thread pool executor and gracefully handles streamed prompts from clients.
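In Python, a gRPC server built on a thread pool serves each RPC on a worker thread, and server-streaming handlers are written as generators. The real servicer depends on ShellSage's generated protobuf stubs, so this sketch keeps only that shape in plain Python; `complete_stream` and its canned suggestions are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def complete_stream(prompt):
    """Server-streaming handler shape: yield suggestions one at a time,
    the way a gRPC generator method streams responses to a client."""
    for suffix in (" status", " push", " pull"):
        yield prompt + suffix

# The gRPC server dispatches each RPC onto a worker thread; here a
# ThreadPoolExecutor simulates two clients being served concurrently.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(lambda p=p: list(complete_stream(p)))
               for p in ("git", "docker")]
    for f in futures:
        print(f.result())
```

Streaming lets the client render early suggestions (e.g. Trie hits) immediately while slower sources (the LLM) are still producing results.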
| Component | Description |
|---|---|
| TrieSearch | Fast prefix matching from shell history |
| FaissSearch | Vector search using sentence embeddings |
| LLMCompletion | Calls local AI model to refine suggestions |
| FaissCache | Embedding-keyed cache to avoid redundant LLM calls |
| Ranker | Combines and scores all results |
| gRPC Server | Streams back autocomplete results to clients |
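A hybrid ranker like the one in the table can be sketched as a weighted sum over per-source scores. The weights and scores below are invented for illustration, not ShellSage's actual tuning:

```python
def rank(candidates, weights):
    """Order suggestions by a weighted sum of their per-source scores.

    `candidates` maps suggestion -> {source: score}; a suggestion surfaced
    by several sources accumulates score from each of them.
    """
    def total(item):
        _, scores = item
        return sum(weights.get(src, 0.0) * s for src, s in scores.items())
    return [cmd for cmd, _ in sorted(candidates.items(), key=total, reverse=True)]

# Hypothetical weights: prefix hits matter most, LLM refinements least.
weights = {"trie": 0.5, "faiss": 0.3, "llm": 0.2}
candidates = {
    "git status": {"trie": 1.0, "faiss": 0.4},   # 0.62
    "git stash":  {"trie": 0.8, "llm": 0.9},     # 0.58
    "git push":   {"faiss": 0.9},                # 0.27
}
print(rank(candidates, weights))
# ['git status', 'git stash', 'git push']
```

The key design point is that each source contributes independently, so a suggestion confirmed by multiple sources naturally outranks one seen by only a single source.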
```
src/
├── api/                 # gRPC server & protobufs
├── prediction_engine/   # Core engine components
│   ├── core/            # Trie, FAISS, LLM logic
│   ├── data/            # History loader
│   └── ranking/         # Result ranking strategy
├── utils/               # Logging, helpers
└── protobuf/            # Generated gRPC code
```
- Shell plugin for Bash/Zsh/Fish autocompletion
- Persistent command history across sessions
- Build a proper RAG pipeline for the LLM (might increase latency)
- Session-aware completions
MIT License © 2025 Harsh S.