Empower AI with more system permissions through a middleware container for advanced automation. The project serves as the middleware container between large AI models and applications.
- Technical Route 1: Programmatic Parsing of AI-Returned Content and Command Execution 🛠️💻
- Technical Route 2: Training an AI Model to Parse and Execute Commands Independently 🤖📚
- Do Not Expose Your Docker Interfaces 🔒: Exposing your Docker interfaces can lead to the risk of malicious attacks by unauthorized users. It's crucial to secure your Docker environment to prevent potential threats.
- Project in Early Stage ⚠️: This project is currently in its early development phase, and security measures are still being improved. Attackers could potentially bypass Docker and target the host system. We recommend running this project only in a secure, isolated environment until further security enhancements are implemented.
- Resource Consumption Warning 💻: Enabling full automation for the AI can consume significant system resources. Enable this feature with caution and use it judiciously based on your system's capacity.
- 🤖 AI Integration: Seamlessly integrate with various AI models (Qwen, OpenAI, Gemini)
- 🔄 Auto-Execution: Automatically parse and execute commands from AI responses
- 📁 File Management: Create and manage files based on AI instructions
- 🖥️ Terminal Integration: Real-time command execution feedback
- 🔍 Error Handling: Automatic error detection and retry mechanism
- 💾 Persistent Storage: Save chat history and configurations
- Frontend:
- Vue 3
- TypeScript
- Element Plus
- Pinia
- Vite
- Backend:
- Node.js
- Express
- Socket.IO
- TypeScript
- Node.js (v18.0.0 or higher)
- npm (v9.0.0 or higher)
- PowerShell (v5.1 or higher)
- Git
- Clone the repository
git clone https://github.com/supersuperbruce/free-ai-docker.git
cd free-ai-docker
- Install all dependencies using requirements.txt
# Install backend dependencies
cd backend
npm install $(grep -A 10 "Backend" requirements.txt | grep -v "Backend" | grep -v "^$" | tr '\n' ' ')
# Install frontend dependencies
cd ../frontend
npm install $(grep -A 10 "Frontend" ../requirements.txt | grep -v "Frontend" | grep -v "^$" | tr '\n' ' ')
- Install backend dependencies
cd backend
npm install [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] @types/[email protected] @types/[email protected] @types/[email protected]
- Install frontend dependencies
cd frontend
npm install [email protected] [email protected] @vitejs/[email protected] [email protected] @vue/[email protected]
- Start backend server
cd backend
npm run dev
# Server will start on http://localhost:3000
- Start frontend development server
cd frontend
npm run dev
# Frontend will be available on http://localhost:5173
- Access the application
- Open your browser and navigate to http://localhost:5173
- Configure your AI provider settings in the configuration panel
Configure the following in the settings panel:
- API Key: Your AI provider's API key
- API URL: The API endpoint URL
- Model: AI model name
- Save Path: Directory path for file operations
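The four settings above can be sketched as a small TypeScript shape. The interface and field names below are illustrative assumptions, not the project's actual types:

```typescript
// Illustrative shape of the settings panel fields described above;
// names are assumptions, not the project's actual types.
interface AiProviderConfig {
  apiKey: string;   // your AI provider's API key
  apiUrl: string;   // the API endpoint URL
  model: string;    // AI model name, e.g. "qwen-turbo"
  savePath: string; // directory path for file operations
}

const example: AiProviderConfig = {
  apiKey: "sk-...",
  apiUrl: "https://api.openai.com/v1/chat/completions",
  model: "gpt-3.5-turbo",
  savePath: "./workspace",
};
```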
- Qwen
- URL: https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation
- Models: qwen-turbo, qwen-plus, qwen-max
- OpenAI
- URL: https://api.openai.com/v1/chat/completions
- Models: gpt-3.5-turbo, gpt-4
- Gemini
- URL: https://generativelanguage.googleapis.com/v1/models/gemini-pro:generateContent
- Models: gemini-pro
- Ollama
- URL: http://localhost:11434/api/chat
- Models:
- llama2
- codellama
- mistral
- neural-chat
- starling-lm
- Features:
- Local deployment
- No API key required
- Multiple model support
- Low latency responses
- Custom model loading
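As a minimal sketch of what a provider call might look like, the following builds a non-streaming request to Ollama's local chat endpoint. The URL and body shape follow Ollama's documented `/api/chat` API; `buildOllamaRequest` itself is an illustrative helper, not part of this project's codebase:

```typescript
// Minimal sketch: build a non-streaming request for Ollama's local
// /api/chat endpoint. buildOllamaRequest is an illustrative helper,
// not part of this project's codebase.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildOllamaRequest(model: string, messages: ChatMessage[]) {
  return {
    url: "http://localhost:11434/api/chat",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, messages, stream: false }),
    },
  };
}

// Usage (Node 18+ ships a global fetch):
// const { url, options } = buildOllamaRequest("llama2", [
//   { role: "user", content: "List the files in the current directory" },
// ]);
// const reply = await fetch(url, options).then((r) => r.json());
```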
- Get your API key from the AI provider
- Configure the API settings in the configuration panel:
- API Key
- API URL
- Model Name
- Save Path
- Configure API Settings
- Enter your API credentials
- Set the save path for generated files
- Start Chatting
- Type your request in the chat input
- AI will respond and automatically execute commands
- View execution results in the terminal panel
- Error Handling
- System automatically detects execution errors
- AI provides alternative solutions
- Retry mechanism ensures task completion
The system automatically detects and executes:
- Shell commands
- Python scripts
- Batch files
- File operations
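A minimal sketch of the detection step, assuming the AI returns its commands in fenced code blocks. The function and type names are illustrative, not the project's actual parser:

```typescript
// Illustrative sketch: pull fenced code blocks out of an AI reply and
// classify them by language tag so each one can be routed to the right
// executor (shell, python, bat, ...). Not the project's actual parser.
interface ParsedCommand {
  lang: string; // e.g. "shell", "python", "bat"
  code: string;
}

function parseCommands(reply: string): ParsedCommand[] {
  // Matches triple-backtick fenced blocks; built with RegExp to avoid
  // literal fences inside this example.
  const fence = new RegExp("`{3}(\\w+)?\\r?\\n([\\s\\S]*?)`{3}", "g");
  const commands: ParsedCommand[] = [];
  let match: RegExpExecArray | null;
  while ((match = fence.exec(reply)) !== null) {
    commands.push({ lang: match[1] ?? "shell", code: match[2].trim() });
  }
  return commands;
}
```

Each `ParsedCommand` could then be handed to the matching runner (for example Node's `child_process` for shell commands), with stdout/stderr streamed back to the terminal panel.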
- View command execution in real-time
- See detailed error messages
- Track operation progress
- Automatic error detection
- AI-powered error resolution
- Smart retry mechanism
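The retry flow above can be sketched as a small loop, assuming `run` executes a command and `fix` asks the AI for a corrected command given the error message. Both callbacks are illustrative assumptions, not this project's actual API:

```typescript
// Illustrative retry loop: execute a command; on failure, feed the error
// back to a fixer (in the real flow, the AI) for an alternative command.
// run/fix are hypothetical callbacks, not this project's actual API.
interface RunResult {
  ok: boolean;
  error?: string;
}

async function runWithRetry(
  run: (cmd: string) => Promise<RunResult>,
  fix: (cmd: string, error: string) => Promise<string>,
  cmd: string,
  maxRetries = 3,
): Promise<boolean> {
  let current = cmd;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const result = await run(current);
    if (result.ok) return true; // task completed
    if (attempt === maxRetries) break; // give up after the last retry
    // Ask for an alternative solution based on the error message.
    current = await fix(current, result.error ?? "unknown error");
  }
  return false;
}
```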
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Thanks to all contributors
- Inspired by the AI development community
- Built with ❤️ using Vue.js and Node.js
Project Link: https://github.com/supersuperbruce/free-ai-docker
⭐️ If you find this project fun, please consider giving it a star!