17 changes: 17 additions & 0 deletions README.md
@@ -76,6 +76,23 @@ pip install -r requirements.txt

**Note:** Installing `texlive-full` can take a long time. You may need to [hold Enter](https://askubuntu.com/questions/956006/pregenerating-context-markiv-format-this-may-take-some-time-takes-forever) during the installation.

**Note for macOS users:**

The command given above for installing `pdflatex` (`sudo apt-get install texlive-full`) works only on Debian-based Linux systems and will not work on macOS. macOS users should instead install **MacTeX**, which includes `pdflatex` and the full TeX Live distribution. This can be done with Homebrew:

```sh
brew install --cask mactex
```

⚠ **Note:** MacTeX is quite large (~5GB). If you prefer a smaller installation without GUI tools, you can install the no-GUI version (~2GB):

```sh
brew install --cask mactex-no-gui
```

Alternatively, MacTeX can be downloaded directly from the [MacTeX website](https://tug.org/mactex/).

### Supported Models and API Keys

We support a wide variety of models, including open-weight and API-only models. In general, we recommend using only frontier models above the capability of the original GPT-4. To see a full list of supported models, see [here](https://github.com/SakanaAI/AI-Scientist/blob/main/ai_scientist/llm.py).
71 changes: 71 additions & 0 deletions templates/japan_declining_birth_rate/README.md
@@ -0,0 +1,71 @@
# Using The AI Scientist to Address Japan's Declining Birth Rate

This project demonstrates an AI-driven approach to evaluating and optimizing government policies aimed at reversing Japan's declining birth rate. Using The AI Scientist, it simulates multiple AI-generated policies, fits a neural network model to their outcomes, and identifies the most effective interventions.


## AI Scientist Generated Paper

[Link to Paper](https://drive.google.com/file/d/1AnR6ZgkgHhiTxMfGM731Heq5QUiDZhNM/view?usp=sharing)


## Installation

```bash
# Navigate to the project directory
cd templates/japan_declining_birth_rate

# Activate conda environment
conda activate ai_scientist
```

## Running the Baseline Template

1. Train the neural network baseline model:

```bash
python experiment.py --out_dir run_0
```

2. Generate visualization plots:

```bash
python plot.py
```


## Running AI Scientist

1. Initialize and run The AI Scientist:

```bash
python launch_scientist.py \
--model "claude-3-5-sonnet-20241022" \
--experiment japan_declining_birth_rate \
--num-ideas 1
```

*(Adjust `--num-ideas` to set how many policy variations The AI Scientist explores.)*


## How It Works

1. Policy Generation:
   - The AI Scientist creates multiple unique policy interventions.
   - Each policy consists of a budget allocation, a duration, and an expected impact on birth rates.

2. Neural Network Modeling:
   - A simple fully connected neural network learns to predict policy effectiveness.
   - Inputs: budget and duration
   - Output: expected birth rate increase

3. Optimization & Evaluation:
   - The AI Scientist iterates through policies to find cost-effective strategies.
   - Results are saved in `final_info.json`; a sketch of how to inspect them follows this list.
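
For reference, here is a minimal sketch of how the saved results might be inspected after a baseline run. It assumes the `run_0` output directory and the key format written by `preprocess_results` in `experiment.py` (a stringified `(budget, duration, effect)` tuple mapped to `{"means": predicted_impact}`); the ranking criterion used here, predicted impact per billion yen, is just one illustrative choice.

```python
import ast
import json

with open("run_0/final_info.json") as f:
    results = json.load(f)

ranked = []
for key, value in results.items():
    budget, duration, _effect = ast.literal_eval(key)  # parse the stringified policy tuple
    impact = value["means"]                            # predicted birth rate increase per 1000 people
    ranked.append((impact / budget, budget, duration, impact))

# Print the five policies with the highest predicted impact per billion yen of budget.
for score, budget, duration, impact in sorted(ranked, reverse=True)[:5]:
    print(f"budget={budget:.0f}B yen, duration={duration:.1f}y, "
          f"predicted +{impact:.2f}/1000 births ({score:.4f} per billion yen)")
```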


## Credits

This project builds upon:

- The AI Scientist System
- Repository: [https://github.com/SakanaAI/AI-Scientist](https://github.com/SakanaAI/AI-Scientist)
117 changes: 117 additions & 0 deletions templates/japan_declining_birth_rate/experiment.py
@@ -0,0 +1,117 @@
import argparse
import json
import os
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

# Hyperparameters
NUM_POLICIES = 100 # Number of AI-generated policies to test
LEARNING_RATE = 0.001
BATCH_SIZE = 32
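# Note: the epoch count is not set here; Trainer.train() defaults to 10 epochs,
# and main() below calls it with epochs=30.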

class PolicyDataset(Dataset):
    """Dataset to store and retrieve AI-generated policies and their simulated outcomes."""
    def __init__(self, num_policies=NUM_POLICIES):
        self.num_policies = num_policies
        self.policies = self.generate_policies()

    def generate_policies(self):
        """Generate random policy interventions for simulation."""
        policies = []
        for _ in range(self.num_policies):
            budget = np.random.uniform(100, 1000)  # In billions of yen
            duration = np.random.uniform(1, 10)  # Duration in years
            effect = np.random.uniform(0.5, 5.0)  # Expected birth rate increase per 1000 people
            policies.append((budget, duration, effect))
        return policies

    def __len__(self):
        return len(self.policies)

    def __getitem__(self, idx):
        budget, duration, effect = self.policies[idx]
        return torch.tensor([budget, duration]), torch.tensor([effect])

class PolicyImpactModel(nn.Module):
    """Simple neural network to model birth rate impact from policy interventions."""
    def __init__(self):
        super(PolicyImpactModel, self).__init__()
        self.fc1 = nn.Linear(2, 128)  # Input size matches the two policy features (budget, duration)
        self.fc2 = nn.Linear(128, 64)
        self.fc3 = nn.Linear(64, 1)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.fc3(x)
        return x

class Trainer:
    """Handles training and evaluating the AI Scientist-generated policies."""
    def __init__(self, model, dataloader, learning_rate=LEARNING_RATE):
        self.model = model
        self.dataloader = dataloader
        self.criterion = nn.MSELoss()
        self.optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

    def train(self, epochs=10):
        for epoch in range(epochs):
            total_loss = 0
            for inputs, targets in self.dataloader:
                self.optimizer.zero_grad()
                outputs = self.model(inputs.float())
                loss = self.criterion(outputs, targets.float())
                loss.backward()
                self.optimizer.step()
                total_loss += loss.item()
            print(f"Epoch {epoch+1}, Loss: {total_loss / len(self.dataloader):.4f}")

    def evaluate(self, test_data):
        """Evaluates the model on test policies."""
        self.model.eval()
        predictions = []
        with torch.no_grad():
            for inputs, _ in test_data:
                predictions.append(self.model(inputs.float()).item())
        return predictions

def preprocess_results(results):
    """Convert list of results to a dictionary format expected by the plotting function."""
    processed_results = {}
    for policy_result in results:
        policy_key = str(tuple(policy_result["policy"]))  # Convert tuple to string
        processed_results[policy_key] = {"means": policy_result["predicted_impact"]}
    return processed_results

def main():
    parser = argparse.ArgumentParser(description="Run birth rate policy experiment")
    parser.add_argument("--out_dir", type=str, default="run_0", help="Output directory")
    args = parser.parse_args()
    os.makedirs(args.out_dir, exist_ok=True)

    dataset = PolicyDataset()
    dataloader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
    model = PolicyImpactModel()
    trainer = Trainer(model, dataloader)

    print("Training model...")
    trainer.train(epochs=30)

    print("Evaluating policies...")
    test_data = DataLoader(dataset, batch_size=1, shuffle=False)
    predictions = trainer.evaluate(test_data)

    results = [{"policy": dataset.policies[i], "predicted_impact": predictions[i]} for i in range(len(predictions))]

    processed_results = preprocess_results(results)

    with open(os.path.join(args.out_dir, "final_info.json"), "w") as f:
        json.dump(processed_results, f, indent=4)

    print("Experiment complete. Results saved.")

if __name__ == "__main__":
    main()
38 changes: 38 additions & 0 deletions templates/japan_declining_birth_rate/ideas.json
@@ -0,0 +1,38 @@
[
    {
        "Name": "financial_incentives_ai",
        "Title": "AI-Optimized Financial Incentives for Boosting Birth Rates",
        "Experiment": "1. Generate 100 financial policies with varying budgets and durations. 2. Train a neural network to model the impact of financial incentives on birth rates. 3. Evaluate and compare AI-generated financial policies to past real-world interventions. 4. Identify cost-effective financial strategies that maximize birth rate increases. 5. Analyze patterns in AI-generated policies to derive new insights for policy-making.",
        "Interestingness": 9,
        "Feasibility": 8,
        "Novelty": 7,
        "novel": true
    },
    {
        "Name": "work_life_balance_ai",
        "Title": "AI-Driven Work-Life Balance Strategies for Family Growth",
        "Experiment": "1. Generate 100 work-life balance policies with varying budgets and durations. 2. Train a neural network to predict the impact of these policies on birth rates. 3. Evaluate AI-generated policies against historical policies in Japan. 4. Identify optimal policies that balance work productivity and birth rate increases. 5. Extract key insights from AI-generated policies to guide future policymaking.",
        "Interestingness": 8,
        "Feasibility": 9,
        "Novelty": 8,
        "novel": true
    },
    {
        "Name": "childcare_infrastructure_ai",
        "Title": "AI-Guided Expansion of Childcare Infrastructure",
        "Experiment": "1. Generate 100 childcare policies with different investment levels and durations. 2. Train a neural network to model the impact of childcare expansion on birth rates. 3. Compare AI-generated policies to past government initiatives. 4. Identify the most cost-efficient strategies for improving childcare support. 5. Use AI-driven insights to optimize childcare policy recommendations.",
        "Interestingness": 9,
        "Feasibility": 9,
        "Novelty": 8,
        "novel": true
    },
    {
        "Name": "community_engagement_ai",
        "Title": "AI-Driven Community Engagement Policies for Family Growth",
        "Experiment": "1. Generate 100 community engagement policies focused on peer support networks, parenting workshops, and family-oriented events. 2. Enhance the PolicyDataset class to include quantitative metrics for measuring community engagement effectiveness. 3. Train the neural network to model the impact of these community initiatives on birth rates. 4. Evaluate AI-generated community policies against existing community programs in Japan. 5. Analyze successful strategies to provide actionable insights for future community-centric family policies.",
        "Interestingness": 8,
        "Feasibility": 8,
        "Novelty": 9,
        "novel": false
    }
]