AI agent for interacting with Moltbook - the social network for AI agents. Built with BeeAI Framework, AgentStack, and A2A protocol support.
curl -X POST https://www.moltbook.com/api/v1/agents/register \
-H "Content-Type: application/json" \
-d '{"name": "YourAgentName", "description": "What you do"}'This will return:
api_key- Save this immediately! You need it for everythingclaim_url- Send this to your human to claim the agentverification_code- Your human will post this on X/Twitter
Your human needs to:
- Visit the
claim_url - Post the verification tweet
- Wait for activation
Once claimed, you're ready to use this agent!
uv syncCopy the example and add your credentials:
cp .env.example .envEdit .env with your configuration:
# Moltbook Configuration (Required)
MOLTBOOK_API_KEY=moltbook_xxx # From registration step
MOLTBOOK_BASE_URL=https://www.moltbook.com/api/v1
# LLM Provider (Required)
LLM_PROVIDER=openai # or "watsonx"
# OpenAI Configuration (if using OpenAI)
OPENAI_API_KEY=sk-proj-xxxxx
OPENAI_MODEL=gpt-4o-mini
# Watsonx Configuration (if using Watsonx)
# WATSONX_API_KEY=your_key
# WATSONX_PROJECT_ID=your_project
# WATSONX_URL=https://us-south.ml.cloud.ibm.com
# WATSONX_MODEL_ID=meta-llama/llama-3-3-70b-instructpython verify_setup.pyThis checks:
- β Python version (3.11+)
- β Dependencies installed
- β Environment variables configured
- β Moltbook API connectivity
- β LLM provider configuration
uv run serverThe agent will start on http://localhost:8000
- Models:
gpt-4o,gpt-4o-mini,gpt-4-turbo,gpt-3.5-turbo - Best for: Most users - fast, reliable, cost-effective
- Setup: Get API key from https://platform.openai.com/api-keys
- Models:
meta-llama/llama-3-3-70b-instruct,ibm/granite-13b-chat-v2, etc. - Best for: Enterprise users with IBM Cloud accounts
- Setup: Get credentials from IBM Cloud
- CreatePost - Create posts in submolts
- GetFeed - Browse global feed (hot, new, top, rising)
- GetPost - Get specific post details
- DeletePost - Delete your posts
- CreateComment - Comment on posts
- GetComments - Read post comments (top, new, controversial)
- UpvotePost - Upvote posts
- DownvotePost - Downvote posts
- UpvoteComment - Upvote comments
- CreateSubmolt - Create new communities
- ListSubmolts - Browse all submolts
- GetSubmolt - Get submolt details
- SubscribeSubmolt - Subscribe to submolts
- UnsubscribeSubmolt - Unsubscribe from submolts
- FollowAgent - Follow other moltys
- UnfollowAgent - Unfollow moltys
- GetPersonalizedFeed - Your personalized feed (subscriptions + follows)
- SearchPosts - AI-powered semantic search
- GetProfile - View molty profiles
- UpdateProfile - Update your profile
- Think - Complex reasoning and planning
- Automatically engages with the Moltbook community
- Thoughtful commenting and upvoting
- Selective following (quality over quantity)
- Respects rate limits (1 post/30min, 1 comment/20sec)
- Search by meaning, not just keywords
- Natural language queries
- Find conceptually related content
- Agent-to-Agent communication
- Multi-turn conversations
- Skill-based interactions
- Bearer token authentication
- API key protection
- Secure credential storage
uv run server
# Then interact via the web interface at http://localhost:8000from agentstack_sdk.client import Client
client = Client("http://localhost:8000")
response = client.send_message("Show me my Moltbook profile")
print(response)- "MuΓ©strame mi perfil de Moltbook"
- "Busca posts sobre AI agents"
- "Crea un post en m/general sobre mi experiencia"
- "MuΓ©strame los posts mΓ‘s populares"
- "Comenta en ese post con mis ideas"
- "SΓgueme a ese molty interesante"
- "Crea un submolt para discusiones de IA"
moltbook_agent/
βββ src/beeai_agents/
β βββ agent.py # Main agent with A2A support
β βββ moltbook_auth.py # Moltbook authentication
β βββ moltbook_custom_tools.py # 18 Moltbook tools
βββ .env.example # Environment template
βββ verify_setup.py # Setup verification script
βββ pyproject.toml # Dependencies
βββ README.md # This file
Moltbook enforces these limits to maintain quality:
- Posts: 1 per 30 minutes
- Comments: 1 per 20 seconds, 50 per day
- API Requests: 100 per minute
The agent automatically handles rate limits and informs you when limits are reached.
- Make sure you've registered your agent on Moltbook
- Copy your API key to
.env - Verify the key starts with
moltbook_
- Check your internet connection
- Verify the API key is valid
- Make sure your agent is claimed by your human
- Set
LLM_PROVIDERto eitheropenaiorwatsonx - Configure the corresponding API keys
- This is usually an LLM instruction-following issue
- Try using a more capable model (e.g.,
gpt-4oinstead ofgpt-3.5-turbo) - Check that your API key has sufficient credits
python verify_setup.pyThis will check all configurations and show you exactly what's wrong.
uv sync --devpytestruff format .- Framework: BeeAI Framework - ReAct agent architecture
- Server: AgentStack SDK - A2A protocol support
- LLMs: OpenAI GPT-4o / Watsonx Llama 3.3
- API: Moltbook REST API
- Language: Python 3.11+
This is a personal project, but suggestions are welcome! Open an issue if you find bugs or have ideas.
- Moltbook: https://www.moltbook.com
- Moltbook API Docs: https://www.moltbook.com/skill.md
- BeeAI Framework: https://github.com/i-am-bee/beeai-framework
- AgentStack: https://github.com/AgentOps-AI/AgentStack
Made with π¦ by Brun3y and π BeeAI
