This README provides instructions for setting up AI Town to enable interactions between AI models using locally hosted models. The framework allows for simulated AI interactions based on the "Veil of Ignorance" concept. This setup is part of research on the possibilities of social simulation using language models, developed for submission to the 2024 8th Humanities Festival, 8th Artificial Intelligence Humanities University Student Thesis Contest (Awarded 3rd Prize).
- Title: Artificial Intelligence Humanities Paper Contest
- Topic: Social Simulation Based on Large Language Model
- Award: 3rd Prize
- Prerequisites
- Setup Instructions
- Installing Required Packages
- Running the Ollama Server
- Tunneling with Ngrok (optional)
- Files and Scripts
  - `runOllama.ipynb`: Colab setup instructions
  - `jsonl` utilities: Translation and markdown conversion scripts
- AI Town Code Modifications
- Google Colab environment with GPU resources enabled
- Ngrok (for tunneling, if needed)
Recent updates to the a16z-infra/ai-town repository cause compatibility issues with this setup. To ensure stable operation, use the following commands to clone an earlier version of the repository:
```
git clone https://github.com/a16z-infra/ai-town
cd ai-town
git reset --hard 463b2aae93d11224b880194d4f60c14b3196ccca
```

This will revert the repository to a version compatible with this setup. For information on running AI Town itself, please refer to the README.md file in the repository.
From here on, run all of the code in Colab. First, install the necessary utilities and packages, including GPU tooling and Ollama:
```
# Install necessary GPU tools for Ollama
!sudo apt-get install -y pciutils
!nvidia-smi

# Install Ollama
!curl -fsSL https://ollama.com/install.sh | sh

# Run Ollama in the background
!nohup ollama serve &
```

Next, download and configure the quantized (Q4_K_M) Mistral 7B Instruct v0.2 or OpenHermes model. (Run only one of the two code blocks below.)
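Note that `&` returns immediately, before the server is actually ready to accept requests. Before pulling a model, it can help to poll the server until it answers; a minimal sketch, assuming Ollama's default port 11434 (the helper name is ours, not part of Ollama):

```python
import time
import urllib.error
import urllib.request


def wait_for_ollama(base_url: str = "http://localhost:11434", timeout: float = 30.0) -> bool:
    """Poll the Ollama HTTP endpoint until it answers or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(base_url, timeout=2) as resp:
                if resp.status == 200:  # server is up and answering
                    return True
        except (urllib.error.URLError, OSError):
            time.sleep(1)  # not ready yet; retry
    return False
```

If this returns `False`, check `nohup.out` for server errors before continuing.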
- Using Mistral 7B Model
```
# Download the quantized Mistral 7B Instruct model (Source: https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF)
!curl -L -O https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/resolve/main/mistral-7b-instruct-v0.2.Q4_K_M.gguf

# Configure model file for AI Town compatibility
!echo -e "FROM ./mistral-7b-instruct-v0.2.Q4_K_M.gguf\nSYSTEM \"\"\"You must play the role. \nMake sure to use a short sentence within 400 characters.\nAnd make sure to say only one person's line.\"\"\"\nPARAMETER stop \"<s>\"\nPARAMETER stop \"[INST]\"\nPARAMETER stop \"[/INST]\"\nPARAMETER stop \"</s>\"\nTEMPLATE \"\"\"{{ .System }}\n<s>[INST] {{ .Prompt }} [/INST] {{ .Response }} </s>\n\"\"\"" > Modelfile4AITownMistral

# Create the model in Ollama
!ollama create aiTownNPCMistral -f ./Modelfile4AITownMistral
```
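The `echo -e` one-liner above is dense and easy to mistype. The same Modelfile can instead be generated from Python inside the notebook; a sketch that reproduces the content of that command (the `build_modelfile` helper is ours, not part of Ollama):

```python
def build_modelfile(base: str, system_prompt: str, stop_tokens: list, template: str) -> str:
    """Assemble the text of an Ollama Modelfile from its parts."""
    lines = [f"FROM {base}", f'SYSTEM """{system_prompt}"""']
    lines += [f'PARAMETER stop "{token}"' for token in stop_tokens]
    lines.append(f'TEMPLATE """{template}"""')
    return "\n".join(lines) + "\n"


modelfile = build_modelfile(
    base="./mistral-7b-instruct-v0.2.Q4_K_M.gguf",
    system_prompt=(
        "You must play the role. \n"
        "Make sure to use a short sentence within 400 characters.\n"
        "And make sure to say only one person's line."
    ),
    stop_tokens=["<s>", "[INST]", "[/INST]", "</s>"],
    template="{{ .System }}\n<s>[INST] {{ .Prompt }} [/INST] {{ .Response }} </s>\n",
)

# Write the Modelfile to disk instead of using echo -e
with open("Modelfile4AITownMistral", "w") as f:
    f.write(modelfile)
```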
```
!date
```

- Using OpenHermes Model
```
# Download the quantized OpenHermes 2.5 model using ollama
!ollama pull openhermes

# Configure model file for AI Town compatibility
!echo -e "FROM openhermes\nSYSTEM \"\"\"You must play the role.\nMake sure to use a short sentence within 400 characters.\nAnd make sure to say only one person's line.\"\"\"\nPARAMETER stop \"<|im_end|>\"\nPARAMETER stop \"<|im_start|>\"\nTEMPLATE \"\"\"<|im_start|>system\n{{ .System }}<|im_end|>\n<|im_start|>user\n{{ .Prompt }}<|im_end|>\n<|im_start|>assistant\"\"\"" > Modelfile4AITownOpenHermes

# Create the model in Ollama
!ollama create aiTownNPC -f ./Modelfile4AITownOpenHermes
```
```
!date
```

To verify the model setup, test it with a simple prompt (use the name of the model you created, e.g. `aiTownNPCMistral` or `aiTownNPC`):
```
# Test model response
!curl http://localhost:11434/api/generate -d '{"model": "aiTownNPCMistral", "prompt": "I use arch btw", "stream": false}'
```

For AI Town to access the local model hosted in Colab, you may need to set up a tunnel using Ngrok. This step is only necessary if direct port access is restricted.
```
# Install Ngrok for tunneling
!curl -s https://ngrok-agent.s3.amazonaws.com/ngrok.asc | sudo tee /etc/apt/trusted.gpg.d/ngrok.asc >/dev/null
!echo "deb https://ngrok-agent.s3.amazonaws.com buster main" | sudo tee /etc/apt/sources.list.d/ngrok.list
!sudo apt update && sudo apt install -y ngrok
```

After installation, configure and start the Ngrok tunnel on the specified port:
```
# Start Ngrok tunnel (example for port 11434)
!ngrok authtoken YOUR_NGROK_AUTH_TOKEN_HERE
!ngrok http 11434 --domain YOUR_NGROK_DOMAIN_HERE.ngrok-free.app --host-header="localhost:11434"
```

Note that `ngrok http` keeps running while the tunnel is open, so run it in its own cell. Copy the forwarding URL generated by Ngrok and configure AI Town to access the model using this URL.
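To sanity-check the tunnel end to end, the same test prompt can be sent through the forwarding URL. A sketch using only the standard library (the URL below is a placeholder; the request shape matches the curl test above, and the non-streaming `/api/generate` response carries the generated text in its `response` field):

```python
import json
import urllib.request


def generate(base_url: str, model: str, prompt: str) -> str:
    """Send a non-streaming /api/generate request and return the generated text."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode("utf-8")
    request = urllib.request.Request(
        base_url.rstrip("/") + "/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=120) as resp:
        return json.loads(resp.read())["response"]


# e.g. generate("https://YOUR_NGROK_DOMAIN_HERE.ngrok-free.app", "aiTownNPCMistral", "I use arch btw")
```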
Modified files in ai-town include:
- `characters.ts`, `constants.ts`, `conversation.ts`: modify `a16z-infra/ai-town` options.
- `util-translate.py`: adds machine translations to conversation data using the `NHNDQ/nllb-finetuned-en2ko` model.
- `util-jsonl2markdown`: converts conversation data to markdown format for easier readability.
- Data files (`gpt.jsonl`, `mistral.jsonl`, `openhermes.jsonl`): preprocessed conversation data files.
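As an illustration of the `util-jsonl2markdown` step, here is a minimal sketch of such a conversion. The field names (`name`, `text`) are assumptions for the example; the actual data files may use different keys:

```python
import json


def jsonl_to_markdown(jsonl_text: str) -> str:
    """Render one-JSON-object-per-line conversation data as a markdown list."""
    rendered = []
    for raw_line in jsonl_text.splitlines():
        if not raw_line.strip():
            continue  # skip blank lines
        record = json.loads(raw_line)
        rendered.append(f"- **{record['name']}**: {record['text']}")
    return "\n".join(rendered)


sample = '{"name": "Alice", "text": "Hello."}\n{"name": "Bob", "text": "Hi."}'
markdown = jsonl_to_markdown(sample)
# markdown is now a two-item bullet list, one line per speaker turn
```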
This notebook provides all necessary code to install and run local models on Colab, including setting up Ollama and Ngrok.