A personal project of mine to get a better understanding of Rust and generative AI: a system analysis tool that combines real-time resource monitoring with AI-powered security insights.
- Real-time system resource monitoring (CPU, memory, network, disk usage)
- AI-powered security analysis of running processes
- Clean, intuitive metrics visualization
- Rust (1.75.0 or later)
- Docker and Docker Compose
- Clone the repository:

  ```bash
  git clone https://github.com/Greatlakescoder/valhalla
  cd valhalla
  ```
- Build and install the application:

  ```bash
  cargo install --path .
  ```
- Run the tests:

  ```bash
  cargo test
  ```
- Start the application:

  ```bash
  cargo run
  ```
The application runs in a containerized environment with three main services:
- Odin Service (Backend API)
  - Runs on port 3000
  - Built from the custom Rust backend Dockerfile
  - Handles system analysis and AI processing (see the sketch after this list)
- Frontend Service
  - Runs on port 5173
  - Built with Vite
  - Communicates with the Odin service
  - Configurable API URL through environment variables
- Ollama Server
  - Local AI model server
  - Runs on port 11434
  - GPU-accelerated using NVIDIA support
  - Persistent model storage in `/mnt/valhalla/ollama-server`
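For illustration only, here is a minimal sketch of what a metrics endpoint on the Odin service's port 3000 could look like. The framework (axum 0.7 with tokio), the route name, and the response shape are assumptions, not a description of Odin's real API.

```rust
// Hypothetical Odin-style endpoint serving a small JSON metrics payload on
// port 3000. axum 0.7, tokio, and serde_json are assumed dependencies.
use axum::{routing::get, Json, Router};
use serde_json::{json, Value};

async fn metrics() -> Json<Value> {
    // A real handler would pull live data from the monitoring layer.
    Json(json!({ "cpu_percent": 12.5, "used_memory_bytes": 4_294_967_296u64 }))
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/metrics", get(metrics));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```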
- Docker and Docker Compose (compose file format v3.8 or later)
- NVIDIA GPU with appropriate drivers
- NVIDIA Container Toolkit installed
- Build and start all services:

  ```bash
  docker compose up -d
  ```
- View service logs:

  ```bash
  # All services
  docker compose logs -f

  # Specific service
  docker compose logs -f odin
  docker compose logs -f frontend
  docker compose logs -f ollama-server
  ```
- Stop and remove containers:

  ```bash
  docker compose down
  ```
- All services communicate over a bridged network named `app-network`
- The frontend can access the backend via `http://localhost:3000`
- Ollama server is restricted to localhost access on port 11434
The Ollama server is configured to utilize all available NVIDIA GPUs for model inference. Ensure your system has:
- NVIDIA GPU drivers installed
- NVIDIA Container Toolkit configured
- Appropriate GPU capabilities
Update the local config file in the `configuration` folder at the project root.
Example:

```yaml
monitor:
  ollama_url: "http://localhost:11434"
  model: mistral
  context_size: 5000
  offline: true
```
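As a rough illustration of how these values might be consumed, here is a hypothetical call against Ollama's `/api/generate` endpoint (with `stream: false` to request a single JSON reply). The function name and the use of `reqwest` (blocking and json features) and `serde_json` are assumptions; the project's actual client code may differ.

```rust
// Hypothetical Ollama client: sends a prompt to the configured model and
// returns the generated text. reqwest (blocking, json) and serde_json are
// assumed dependencies, not necessarily what the project uses.
use serde_json::json;

fn ask_ollama(ollama_url: &str, model: &str, prompt: &str) -> Result<String, reqwest::Error> {
    let body = json!({
        "model": model,
        "prompt": prompt,
        "stream": false,
    });

    let resp: serde_json::Value = reqwest::blocking::Client::new()
        .post(format!("{ollama_url}/api/generate"))
        .json(&body)
        .send()?
        .json()?;

    // Ollama puts the generated text in the "response" field.
    Ok(resp["response"].as_str().unwrap_or_default().to_string())
}

fn main() -> Result<(), reqwest::Error> {
    let answer = ask_ollama(
        "http://localhost:11434",
        "mistral",
        "Is a process named `miner` on a developer laptop suspicious?",
    )?;
    println!("{answer}");
    Ok(())
}
```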
```bash
# Run all tests
cargo test

# Run with logging
RUST_LOG=debug cargo test
```
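One thing worth noting about `RUST_LOG=debug cargo test`: log output only shows up if a logger is actually initialized inside the tests. A hypothetical example, assuming `log` and `env_logger` as dev-dependencies (not confirmed for this project):

```rust
// Hypothetical test illustrating RUST_LOG: the debug! line prints only when
// a logger is initialized and RUST_LOG enables the debug level.
#[cfg(test)]
mod logging_tests {
    #[test]
    fn emits_debug_logs_when_enabled() {
        // is_test(true) routes output through the test harness;
        // try_init() avoids panicking if a logger was already installed.
        let _ = env_logger::builder().is_test(true).try_init();
        log::debug!("visible with RUST_LOG=debug (pass -- --nocapture to always print)");
        assert_eq!(2 + 2, 4);
    }
}
```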
This project follows the official Rust style guidelines. To ensure your code is properly formatted:
```bash
# Check formatting
cargo fmt -- --check

# Apply formatting
cargo fmt

# Run clippy for linting
cargo clippy
```
Build an optimized release binary:

```bash
cargo build --release
```
- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add some amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- The sysinfo crate is a key component of the resource-monitoring layer: https://crates.io/crates/sysinfo (a brief usage sketch follows)
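A minimal sketch of the kind of data sysinfo can provide, assuming a recent sysinfo release (0.30 or newer, where the old `*Ext` traits were folded into the types); the project's own usage may differ.

```rust
// Sketch: sample CPU, memory, and process data with sysinfo (0.30+ assumed).
use std::{thread, time::Duration};
use sysinfo::System;

fn main() {
    let mut sys = System::new_all();

    // CPU usage needs two samples separated by a short interval.
    sys.refresh_cpu_usage();
    thread::sleep(Duration::from_millis(200));
    sys.refresh_all();

    println!(
        "memory: {} / {} MiB used",
        sys.used_memory() / 1_048_576,
        sys.total_memory() / 1_048_576
    );

    for cpu in sys.cpus() {
        println!("{}: {:.1}%", cpu.name(), cpu.cpu_usage());
    }

    // Per-process data: the kind of input an AI security pass could inspect.
    for (pid, process) in sys.processes().iter().take(5) {
        println!("{pid}: {:?}", process.name());
    }
}
```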