Gemini AI Assistant is an application that integrates Google's Gemini AI models with the LINE Messaging API, allowing you to create your own AI assistant accessible through the LINE mobile app. This project is a fork of the original GPT AI Assistant, with all OpenAI components removed and replaced with Google Gemini functionality.
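At its core, the flow is: LINE delivers incoming messages to a webhook, the application forwards the text to Gemini, and the generated reply is sent back through the LINE Messaging API. The sketch below illustrates that loop with the `@line/bot-sdk` (v7-style `Client` API) and `@google/generative-ai` packages; it is a simplified illustration under those assumptions, not the project's actual source, and details such as the `/webhook` path are placeholders.

```js
// Minimal sketch of the LINE webhook -> Gemini -> reply loop (illustrative only).
const express = require('express');
const line = require('@line/bot-sdk');
const { GoogleGenerativeAI } = require('@google/generative-ai');

const lineConfig = {
  channelAccessToken: process.env.LINE_CHANNEL_ACCESS_TOKEN,
  channelSecret: process.env.LINE_CHANNEL_SECRET,
};
const lineClient = new line.Client(lineConfig);
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: 'gemini-pro' });

const app = express();

// LINE calls this endpoint for every event; the middleware verifies the request signature.
app.post('/webhook', line.middleware(lineConfig), (req, res) => {
  Promise.all(req.body.events.map(handleEvent))
    .then(() => res.sendStatus(200))
    .catch((err) => {
      console.error(err);
      res.sendStatus(500);
    });
});

async function handleEvent(event) {
  if (event.type !== 'message' || event.message.type !== 'text') return;
  // Ask Gemini for a reply to the user's text and send it back over LINE.
  const result = await model.generateContent(event.message.text);
  await lineClient.replyMessage(event.replyToken, {
    type: 'text',
    text: result.response.text(),
  });
}

app.listen(process.env.APP_PORT || 3000);
```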
- Conversational AI: Chat with Google's powerful Gemini Pro models
- Multimodal Capabilities: Send images and get AI responses using Gemini Pro Vision (see the example after this list)
- Web Search Integration: Get real-time information through SerpAPI integration
- Multiple Commands: Various utility commands for different AI interactions
- Multilingual Support: Supports English, Chinese, and Japanese
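For the multimodal feature, the Gemini SDK accepts inline image data alongside a text prompt. A minimal sketch with `@google/generative-ai`; the `base64Image` argument is an assumption standing in for image bytes that would, in practice, be downloaded from LINE's content endpoint:

```js
const { GoogleGenerativeAI } = require('@google/generative-ai');

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

// Describe an image with Gemini Pro Vision; `base64Image` is assumed to hold
// the image encoded as base64 (e.g. fetched via the LINE content API).
async function describeImage(base64Image) {
  const model = genAI.getGenerativeModel({ model: 'gemini-pro-vision' });
  const imagePart = {
    inlineData: { data: base64Image, mimeType: 'image/jpeg' },
  };
  const result = await model.generateContent(['Describe this image.', imagePart]);
  return result.response.text();
}
```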
- 2024-05-11: The `5.0` version now exclusively supports Google Gemini models. 🔥
- 2024-05-11: Completely migrated from OpenAI to Google Gemini API.
- Google Gemini API key
- LINE Messaging API channel
- Node.js environment
- (Optional) SerpAPI key for web search functionality
These must be set for the application to function properly:
```
# LINE Configuration
LINE_CHANNEL_ACCESS_TOKEN=your_line_channel_access_token
LINE_CHANNEL_SECRET=your_line_channel_secret

# Gemini Configuration
GEMINI_API_KEY=your_gemini_api_key
```
These are not strictly required but are recommended for full functionality:
```
# App Configuration
APP_PORT=3000

# Search Configuration
SERPAPI_API_KEY=your_serpapi_api_key
```
There are many optional environment variables with sensible defaults. You can generate a sample .env file with all variables and their descriptions:
```
npm run generate-env
```

The application includes built-in environment variable verification that runs at startup. It will:
- Check if all required environment variables are set
- Validate the format/values of certain variables
- Provide warnings for recommended but missing variables
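A startup check along these lines might look like the following sketch; the variable lists and messages are illustrative, not the project's exact script:

```js
// Hypothetical sketch of a startup environment check.
const REQUIRED = ['LINE_CHANNEL_ACCESS_TOKEN', 'LINE_CHANNEL_SECRET', 'GEMINI_API_KEY'];
const RECOMMENDED = ['APP_PORT', 'SERPAPI_API_KEY'];

function checkEnv() {
  // Fail fast when a required variable is missing.
  const missing = REQUIRED.filter((name) => !process.env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }

  // Basic format validation, e.g. the port must be numeric.
  if (process.env.APP_PORT && Number.isNaN(Number(process.env.APP_PORT))) {
    console.warn('APP_PORT is set but is not a valid number.');
  }

  // Warn (but do not fail) when a recommended variable is absent.
  RECOMMENDED.filter((name) => !process.env[name])
    .forEach((name) => console.warn(`Recommended variable ${name} is not set.`));
}

module.exports = checkEnv;
```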
You can also run the verification manually:
```
# Check environment variables
npm run check-env

# List all required environment variables
npm run check-env:required

# List all environment variables
npm run check-env:all

# Generate a sample .env file
npm run generate-env
```

Documentation is currently being updated for the Gemini version. For reference, you can check the original project documentation.
To run the application locally:

- Clone the repository:

  ```
  git clone https://github.com/bigcan/gpt-ai-assistant.git gemini-ai-assistant
  cd gemini-ai-assistant
  ```

- Install dependencies:

  ```
  npm install
  ```

- Generate a sample `.env` file:

  ```
  npm run generate-env
  ```

- Edit the `.env.sample` file with your configuration and rename it to `.env`.

- Verify your environment variables:

  ```
  npm run check-env
  ```

- Start the development server:

  ```
  npm run dev
  ```
To run with Docker:

- Clone the repository:

  ```
  git clone https://github.com/bigcan/gpt-ai-assistant.git gemini-ai-assistant
  cd gemini-ai-assistant
  ```

- Generate a sample `.env` file:

  ```
  npm run generate-env
  ```

- Edit the `.env.sample` file with your configuration and rename it to `.env`.

- Verify your environment variables:

  ```
  npm run check-env
  ```

- Build and run with Docker Compose:

  ```
  docker-compose up -d
  ```
This application can also be deployed to Vercel. See the Vercel documentation for more details.
The assistant supports various commands, including:
- `/talk [message]` - Chat with the AI assistant
- `/search [query]` - Search the web and get AI-enhanced results (see the example after this list)
- `/forget` - Clear the conversation history
- `/continue` - Continue the previous response
- `/retry` - Regenerate the last response
- `/activate` - Activate the AI assistant
- `/deactivate` - Deactivate the AI assistant
- `/version` - Check the current version
- `/help` - Show available commands
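The `/search` command combines a web lookup with an AI-generated answer. A hedged sketch of what the SerpAPI side of that could look like, using SerpAPI's public JSON endpoint and Node 18+ global `fetch`; the helper name and the summarization prompt are assumptions, not the project's actual code:

```js
// Hypothetical helper: fetch Google results from SerpAPI, then ask Gemini to answer from them.
const { GoogleGenerativeAI } = require('@google/generative-ai');

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: 'gemini-pro' });

async function searchAndSummarize(query) {
  const url = new URL('https://serpapi.com/search.json');
  url.searchParams.set('engine', 'google');
  url.searchParams.set('q', query);
  url.searchParams.set('api_key', process.env.SERPAPI_API_KEY);

  const response = await fetch(url);
  const data = await response.json();

  // Condense the top organic results into a plain-text context for the model.
  const snippets = (data.organic_results || [])
    .slice(0, 5)
    .map((r) => `${r.title}: ${r.snippet}`)
    .join('\n');

  const result = await model.generateContent(
    `Answer the question "${query}" using these search results:\n${snippets}`,
  );
  return result.response.text();
}
```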
- Image generation is not supported with Gemini API (unlike the original OpenAI version)
- Audio transcription is not available in this version
- Original GPT AI Assistant by memochou1993
- jayer95 - Debugging and testing
- kkdai - Idea of `sum` command
- Dayu0815 - Idea of `search` command
- All other contributors
If you have any questions or suggestions, please open an issue on GitHub.
Detailed changes for each release are documented in the release notes.