References:
https://github.com/google-gemini/cookbook/blob/main/gemini-2/README.md
https://github.com/google-gemini/cookbook/blob/main/gemini-2/live_api_starter.py
The new Live API requires:
https://pypi.org/project/google-genai/
https://github.com/googleapis/python-genai
This older PyPI package does not support the Live API:
https://pypi.org/project/google-generativeai/
https://github.com/google-gemini/generative-ai-python
Notes as of 2024-12-13:
google-generativeai
- old Python SDK, for the Gemini API in Google AI Studio only
google-cloud-aiplatform (vertexai)
- more complex, for Gemini models in Google Vertex AI only
google-genai
- new unified SDK, for both Vertex AI and the Gemini API. Supports the Live API.
The new Google Gen AI SDK provides a unified interface to Gemini 2.0 through both the Gemini Developer API and the Gemini Enterprise API (Vertex AI).
With a few exceptions, code that runs on one platform will run on both. The Gen AI SDK also supports the Gemini 1.5 models.
Python
The Google Gen AI SDK for Python is available on PyPI and GitHub:
google-genai on PyPI --> pip install google-genai
python-genai on GitHub
To learn more, see the Python SDK reference: https://googleapis.github.io/python-genai/
Quickstart
- Import libraries
from google import genai
from google.genai import types
- Create a client
client = genai.Client(api_key='YOUR_API_KEY')
- Generate content
response = client.models.generate_content(
    model='gemini-2.0-flash-exp',
    contents='What is your name?'
)
print(response.text)
The Live API Starter is a Python application that implements real-time audio and video interaction with Google's Gemini AI model. It creates a bidirectional communication channel where users can send text, audio, and video input while receiving audio and text responses from the model in real-time.
- Real-time audio input/output processing
- Video capture and streaming
- Text-based interaction
- Asynchronous operation
- Bidirectional communication with Gemini AI model
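The text path of that bidirectional channel can be sketched with the new SDK. This is a minimal, hedged sketch: the surface used here (`client.aio.live.connect`, `session.send`, `session.receive`) follows the python-genai repository as of late 2024 and may change; the function name `live_text_turn` is ours, not the starter's.

```python
# Hedged sketch of one text-in / text-out turn over a Live API session.
# Assumes the google-genai async surface as of 2024-12; names may change.
import asyncio

MODEL = "models/gemini-2.0-flash-exp"
CONFIG = {"response_modalities": ["TEXT"]}  # use ["AUDIO"] for spoken replies

async def live_text_turn(prompt: str) -> str:
    # Deferred import: requires `pip install google-genai`.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")
    parts = []
    # The Live API is a bidirectional session: send one turn, stream the reply.
    async with client.aio.live.connect(model=MODEL, config=CONFIG) as session:
        await session.send(input=prompt, end_of_turn=True)
        async for response in session.receive():
            if response.text:
                parts.append(response.text)
    return "".join(parts)

# Run with: asyncio.run(live_text_turn("What is your name?"))
```

The starter application wraps this same session object, but feeds it audio and video frames from queues instead of a single text prompt.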
Main class that manages the audio/video streaming pipeline and communication with the Gemini AI model. Its methods:
- Initializes queues for audio/video processing and sets up session management.
- Handles text input from the user and sends it to the Gemini session.
- Captures and processes a single video frame, converting it to JPEG format with size constraints.
- Continuously captures video frames from the default camera and adds them to the video queue.
- Sends captured video frames to the Gemini session.
- Sets up and manages the audio input stream from the microphone.
- Sends audio chunks from the output queue to the Gemini session.
- Processes responses from the Gemini model, handling both text and audio data.
- Manages audio playback of responses received from the model.
- Main execution method that coordinates all the async tasks and manages the session lifecycle.
- FORMAT: Set to pyaudio.paInt16 for audio format
- CHANNELS: Set to 1 for mono audio
- SEND_SAMPLE_RATE: 16000Hz for input audio
- RECEIVE_SAMPLE_RATE: 24000Hz for output audio
- CHUNK_SIZE: 512 frames per audio buffer
- MODEL: Uses "models/gemini-2.0-flash-exp" for AI interactions
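These constants fix the per-chunk audio latency. A quick check of the implied timing (pure arithmetic, no PyAudio needed; assumes CHUNK_SIZE counts frames, as PyAudio's frames_per_buffer does):

```python
# Derived timing for the starter's audio constants.
SEND_SAMPLE_RATE = 16_000     # Hz, microphone input
RECEIVE_SAMPLE_RATE = 24_000  # Hz, model audio output
CHUNK_SIZE = 512              # frames per buffer
BYTES_PER_FRAME = 2           # paInt16, mono: 16 bits = 2 bytes

chunk_seconds = CHUNK_SIZE / SEND_SAMPLE_RATE
chunk_bytes = CHUNK_SIZE * BYTES_PER_FRAME

print(f"{chunk_seconds * 1000:.0f} ms of input audio per chunk")  # → 32 ms of input audio per chunk
print(f"{chunk_bytes} bytes per chunk")                           # → 1024 bytes per chunk
```

So each send carries 32 ms of microphone audio, small enough for responsive turn-taking.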
- Uses asyncio for concurrent operations
- Implements PyAudio for audio handling
- Uses OpenCV (cv2) for video capture
- Integrates with Google's Genai client
- Supports Python 3.11+ with fallback for earlier versions
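Taken together, the dependencies above translate to an install line like the following (PyPI package names assumed: PyAudio provides pyaudio, opencv-python provides cv2):

```shell
pip install google-genai pyaudio opencv-python
```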