update cpu and gpu type generators #181

Open · wants to merge 3 commits into main
2 changes: 1 addition & 1 deletion .gitignore
@@ -22,6 +22,6 @@ yarn-error.log*
 
 .vercel
 .env*.local
 
-.venv
+helpers/__pycache__/**
500 changes: 252 additions & 248 deletions docs/references/cpu-types.md

Large diffs are not rendered by default.

76 changes: 38 additions & 38 deletions docs/references/gpu-types.md
@@ -7,42 +7,42 @@ The following list contains all GPU types available on RunPod.
 
 For more information, see [GPU pricing](https://www.runpod.io/gpu-instance/pricing).
 
 <!--
-Table last generated: 2024-12-27
+Table last generated: 2025-02-19
 -->
 
-| GPU ID | Display Name | Memory (GB) |
-| ------------------------------ | -------------- | ----------- |
-| NVIDIA A100 80GB PCIe | A100 PCIe | 80 |
-| NVIDIA A100-SXM4-80GB | A100 SXM | 80 |
-| NVIDIA A30 | A30 | 24 |
-| NVIDIA A40 | A40 | 48 |
-| NVIDIA H100 NVL | H100 NVL | 94 |
-| NVIDIA H100 PCIe | H100 PCIe | 80 |
-| NVIDIA H100 80GB HBM3 | H100 SXM | 80 |
-| NVIDIA H200 | H200 SXM | 143 |
-| NVIDIA L4 | L4 | 24 |
-| NVIDIA L40 | L40 | 48 |
-| NVIDIA L40S | L40S | 48 |
-| AMD Instinct MI300X OAM | MI300X | 192 |
-| NVIDIA RTX 2000 Ada Generation | RTX 2000 Ada | 16 |
-| NVIDIA GeForce RTX 3070 | RTX 3070 | 8 |
-| NVIDIA GeForce RTX 3080 | RTX 3080 | 10 |
-| NVIDIA GeForce RTX 3080 Ti | RTX 3080 Ti | 12 |
-| NVIDIA GeForce RTX 3090 | RTX 3090 | 24 |
-| NVIDIA GeForce RTX 3090 Ti | RTX 3090 Ti | 24 |
-| NVIDIA RTX 4000 Ada Generation | RTX 4000 Ada | 20 |
-| NVIDIA GeForce RTX 4070 Ti | RTX 4070 Ti | 12 |
-| NVIDIA GeForce RTX 4080 | RTX 4080 | 16 |
-| NVIDIA GeForce RTX 4080 SUPER | RTX 4080 SUPER | 16 |
-| NVIDIA GeForce RTX 4090 | RTX 4090 | 24 |
-| NVIDIA RTX 5000 Ada Generation | RTX 5000 Ada | 32 |
-| NVIDIA RTX 6000 Ada Generation | RTX 6000 Ada | 48 |
-| NVIDIA RTX A2000 | RTX A2000 | 6 |
-| NVIDIA RTX A4000 | RTX A4000 | 16 |
-| NVIDIA RTX A4500 | RTX A4500 | 20 |
-| NVIDIA RTX A5000 | RTX A5000 | 24 |
-| NVIDIA RTX A6000 | RTX A6000 | 48 |
-| Tesla V100-PCIE-16GB | Tesla V100 | 16 |
-| Tesla V100-FHHL-16GB | V100 FHHL | 16 |
-| Tesla V100-SXM2-16GB | V100 SXM2 | 16 |
-| Tesla V100-SXM2-32GB | V100 SXM2 32GB | 32 |
+| GPU ID                             | Display Name     | Memory (GB)   |
+|------------------------------------|------------------|---------------|
+| AMD Instinct MI300X OAM            | MI300X           | 192           |
+| NVIDIA A100 80GB PCIe              | A100 PCIe        | 80            |
+| NVIDIA A100-SXM4-80GB              | A100 SXM         | 80            |
+| NVIDIA A30                         | A30              | 24            |
+| NVIDIA A40                         | A40              | 48            |
+| NVIDIA GeForce RTX 3070            | RTX 3070         | 8             |
+| NVIDIA GeForce RTX 3080            | RTX 3080         | 10            |
+| NVIDIA GeForce RTX 3080 Ti         | RTX 3080 Ti      | 12            |
+| NVIDIA GeForce RTX 3090            | RTX 3090         | 24            |
+| NVIDIA GeForce RTX 3090 Ti         | RTX 3090 Ti      | 24            |
+| NVIDIA GeForce RTX 4070 Ti         | RTX 4070 Ti      | 12            |
+| NVIDIA GeForce RTX 4080            | RTX 4080         | 16            |
+| NVIDIA GeForce RTX 4080 SUPER      | RTX 4080 SUPER   | 16            |
+| NVIDIA GeForce RTX 4090            | RTX 4090         | 24            |
+| NVIDIA H100 80GB HBM3              | H100 SXM         | 80            |
+| NVIDIA H100 NVL                    | H100 NVL         | 94            |
+| NVIDIA H100 PCIe                   | H100 PCIe        | 80            |
+| NVIDIA H200                        | H200 SXM         | 141           |
+| NVIDIA L4                          | L4               | 24            |
+| NVIDIA L40                         | L40              | 48            |
+| NVIDIA L40S                        | L40S             | 48            |
+| NVIDIA RTX 2000 Ada Generation     | RTX 2000 Ada     | 16            |
+| NVIDIA RTX 4000 Ada Generation     | RTX 4000 Ada     | 20            |
+| NVIDIA RTX 4000 SFF Ada Generation | RTX 4000 Ada SFF | 20            |
+| NVIDIA RTX 5000 Ada Generation     | RTX 5000 Ada     | 32            |
+| NVIDIA RTX 6000 Ada Generation     | RTX 6000 Ada     | 48            |
+| NVIDIA RTX A2000                   | RTX A2000        | 6             |
+| NVIDIA RTX A4000                   | RTX A4000        | 16            |
+| NVIDIA RTX A4500                   | RTX A4500        | 20            |
+| NVIDIA RTX A5000                   | RTX A5000        | 24            |
+| NVIDIA RTX A6000                   | RTX A6000        | 48            |
+| Tesla V100-FHHL-16GB               | V100 FHHL        | 16            |
+| Tesla V100-PCIE-16GB               | Tesla V100       | 16            |
+| Tesla V100-SXM2-16GB               | V100 SXM2        | 16            |
+| Tesla V100-SXM2-32GB               | V100 SXM2 32GB   | 32            |
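Because the generated file is a GitHub-flavored pipe table, it can be loaded back into a DataFrame for spot checks. A minimal sketch, assuming pandas is available; the two rows are a hypothetical excerpt of the table above, not the full file:

```python
import io

import pandas as pd

# Hypothetical two-row excerpt of the generated gpu-types.md table.
table_md = """| GPU ID                | Display Name | Memory (GB) |
|-----------------------|--------------|-------------|
| NVIDIA A100 80GB PCIe | A100 PCIe    | 80          |
| NVIDIA H200           | H200 SXM     | 141         |
"""

# Split on "|", drop the empty edge columns, and skip the separator row.
df = pd.read_csv(io.StringIO(table_md), sep="|", skipinitialspace=True).iloc[1:, 1:-1]
df.columns = [c.strip() for c in df.columns]
df = df.apply(lambda col: col.str.strip())

print(df["Display Name"].tolist())  # ['A100 PCIe', 'H200 SXM']
```

Note the values come back as strings; a consumer that needs the memory sizes numerically would still have to cast the `Memory (GB)` column.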
75 changes: 29 additions & 46 deletions helpers/gpu_types.py
@@ -1,69 +1,52 @@
 import os
 from datetime import datetime
 
 import pandas as pd
 import requests
 from dotenv import load_dotenv
 from tabulate import tabulate
 
 load_dotenv()
-api_key = os.getenv("API_KEY")
-
-# URL and headers for the POST request
-url = "https://api.runpod.io/graphql"
-headers = {"content-type": "application/json", "api_key": api_key}
-
-# The GraphQL query
-data = {"query": "query GpuTypes { gpuTypes { id displayName memoryInGb } }"}
-
-# Send the POST request
-response = requests.post(url, headers=headers, json=data)
-
-# Check if the request was successful
-if response.status_code == 200:
-    # Parse the response JSON
-    gpu_data = response.json()
-
-    # Extract GPU data
-    gpus = gpu_data["data"]["gpuTypes"]
-
-    # Sort the GPUs by display name
-    gpus_sorted = sorted(gpus, key=lambda x: x["displayName"])
-
-    # Writing to a markdown file
-    # relative path
-    # os.path.join(os.path.dirname(__file__), "gpu-types.md")
-    file_path = os.path.join(
-        os.path.dirname(__file__), "../docs/references/gpu-types.md"
-    )
-
-    with open(file_path, "w") as file:
-        # Write the table headers
-        date = datetime.now().strftime("%Y-%m-%d")
-        file.write(
-            f"""---
+response = requests.post(
+    "https://api.runpod.io/graphql",
+    headers={"content-type": "application/json"},
+    json={"query": "query GpuTypes { gpuTypes { id displayName memoryInGb } }"},
+)
+
+# Raise immediately on HTTP errors instead of branching on status_code.
+response.raise_for_status()
+
+gpu_data = response.json()
+gpus = gpu_data["data"]["gpuTypes"]
+
+gpus_df = pd.DataFrame(gpus)
+
+# Drop the placeholder "unknown" GPU type.
+gpus_df = gpus_df[gpus_df["id"].str.lower() != "unknown"]
+
+# sort_values returns a copy, so assign the result back.
+gpus_df = gpus_df.sort_values(by="displayName").reset_index(drop=True)
+
+file_path = os.path.join(
+    os.path.dirname(__file__), "../docs/references/gpu-types.md"
+)
+
+table = tabulate(gpus_df, headers=["GPU ID", "Display Name", "Memory (GB)"], tablefmt="github", showindex=False)
+
+with open(file_path, "w") as file:
+    # Write the front matter, intro, and generated table.
+    date = datetime.now().strftime("%Y-%m-%d")
+    file.write(
+        f"""---
 title: GPU types
 ---
 
 The following list contains all GPU types available on RunPod.
 
 For more information, see [GPU pricing](https://www.runpod.io/gpu-instance/pricing).
 
 <!--
 Table last generated: {date}
 -->
-| GPU ID | Display Name | Memory (GB) |
-| ------ | ------------ | ----------- |
-"""
-        )
-
-        # Write each GPU data as a row in the table
-        for gpu in gpus_sorted:
-            if gpu["id"] == "unknown":
-                pass
-            else:
-                file.write(
-                    f"| {gpu['id']} | {gpu['displayName']} | {gpu['memoryInGb']} |\n"
-                )
-
-    print("Markdown file with GPU data created successfully.")
-else:
-    print("Failed to retrieve data: ", response.status_code)
+{table}
+""")
+
+print("Markdown file with GPU data created successfully.")
114 changes: 32 additions & 82 deletions helpers/sls_cpu_types.py
@@ -1,94 +1,47 @@
 import io
 import os
 from datetime import datetime
 
 import pandas as pd
 import requests
 from dotenv import load_dotenv
 from tabulate import tabulate
 
 load_dotenv()
-api_key = os.getenv("API_KEY")
-
-# URL and headers for the POST request
-url = "https://api.runpod.io/graphql"
-headers = {"content-type": "application/json", "api_key": api_key}
-
-# The GraphQL query
-data = {"query": "query CpuTypes { cpuTypes { displayName cores threadsPerCore } }"}
-
-# Send the POST request
-response = requests.post(url, headers=headers, json=data)
-
-# Check if the request was successful
-if response.status_code == 200:
-    # Parse the response JSON
-    cpu_data = response.json()
-
-    # Extract CPU data
-    cpus = cpu_data["data"]["cpuTypes"]
-
-    # Filter out empty CPU types and rows where all values are NaN
-    filtered_cpus = [
-        cpu
-        for cpu in cpus
-        if cpu["displayName"]
-        and cpu["displayName"].lower() != "unknown"
-        and not pd.isna(cpu["cores"])
-        and not pd.isna(cpu["threadsPerCore"])
-        and not all(pd.isna(value) for value in cpu.values())
-    ]
-
-    # Convert to DataFrame
-    new_df = pd.DataFrame(filtered_cpus)
-
-    # Writing to a markdown file
-    file_path = os.path.join(
-        os.path.dirname(__file__), "../docs/references/cpu-types.md"
-    )
-
-    # Check if the file already exists
-    if os.path.exists(file_path):
-        with open(file_path, "r") as file:
-            lines = file.readlines()
-
-        # Find where the table ends
-        table_end_index = 0
-        for i, line in enumerate(lines):
-            if line.strip() == "" and i > 0:
-                table_end_index = i
-                break
-
-        # Extract the current table
-        current_table = "".join(lines[:table_end_index])
-
-        # Convert the current table to a DataFrame
-        current_df = pd.read_csv(io.StringIO(current_table), sep="|").iloc[:, 1:-1]
-
-        # Append the new data to the current table
-        updated_df = pd.concat([current_df, new_df], ignore_index=True)
-
-    else:
-        # If the file does not exist, start a new DataFrame
-        updated_df = new_df
-
-    # Remove rows where all values are NaN
-    updated_df = updated_df.dropna(how="all")
-
-    # Sort the DataFrame alphabetically by displayName
-    updated_df = updated_df.sort_values(by="displayName").reset_index(drop=True)
-
-    # Convert the updated DataFrame to markdown table format
-    updated_table = tabulate(
-        updated_df, headers="keys", tablefmt="pipe", showindex=False
-    )
-
-    with open(file_path, "w") as file:
-        # Write the headers and table
-        date = datetime.now().strftime("%Y-%m-%d")
-        file.write(
-            f"""---
+response = requests.post(
+    "https://api.runpod.io/graphql",
+    headers={"content-type": "application/json"},
+    json={"query": "query CpuTypes { cpuTypes { displayName cores threadsPerCore } }"},
+)
+
+# Raise immediately on HTTP errors instead of branching on status_code.
+response.raise_for_status()
+
+cpu_data = response.json()
+cpus = cpu_data["data"]["cpuTypes"]
+
+cpus_df = pd.DataFrame(cpus)
+
+# Keep only named CPU types with complete core/thread data; copy so the
+# column assignment below does not trigger a SettingWithCopyWarning.
+cpus_df = cpus_df[
+    (cpus_df["displayName"].str.lower() != "unknown")
+    & (~cpus_df["cores"].isna())
+    & (~cpus_df["threadsPerCore"].isna())
+].copy()
+
+# These methods return copies, so assign each result back.
+cpus_df["displayName"] = (
+    cpus_df["displayName"].str.replace(r"\s{2,}", " ", regex=True).str.strip()
+)
+cpus_df = cpus_df.dropna(how="all")
+cpus_df = cpus_df.sort_values(by="displayName").reset_index(drop=True)
+
+file_path = os.path.join(
+    os.path.dirname(__file__), "../docs/references/cpu-types.md"
+)
+
+table = tabulate(cpus_df, headers=["Display Name", "Cores", "Threads Per Core"], tablefmt="github", showindex=False)
+
+with open(file_path, "w") as file:
+    # Write the headers and table
+    date = datetime.now().strftime("%Y-%m-%d")
+    file.write(
+        f"""---
 title: Serverless CPU types
 ---
 
@@ -97,10 +50,7 @@
 <!--
 Table last generated: {date}
 -->
-{updated_table}
-"""
-        )
+{table}
+""")
 
-    print("Markdown file with CPU data updated successfully.")
-else:
-    print("Failed to retrieve data: ", response.status_code)
+print("Markdown file with CPU data updated successfully.")
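The CPU generator's pandas cleanup chain can be exercised in isolation. A minimal sketch with made-up rows, including the malformed entries the script filters out; the key gotcha is that `Series.str.replace`, `DataFrame.dropna`, and `DataFrame.sort_values` all return copies, so each result must be assigned back:

```python
import pandas as pd

# Made-up rows, including the malformed entries the script filters out.
cpus = [
    {"displayName": "AMD  EPYC   7763", "cores": 64, "threadsPerCore": 2},
    {"displayName": "unknown", "cores": 1, "threadsPerCore": 1},
    {"displayName": "Intel Xeon", "cores": None, "threadsPerCore": 2},
]
cpus_df = pd.DataFrame(cpus)

# Keep only named CPU types with complete core/thread data.
cpus_df = cpus_df[
    (cpus_df["displayName"].str.lower() != "unknown")
    & cpus_df["cores"].notna()
    & cpus_df["threadsPerCore"].notna()
].copy()

# Collapse runs of whitespace in display names; assign the result back.
cpus_df["displayName"] = (
    cpus_df["displayName"].str.replace(r"\s{2,}", " ", regex=True).str.strip()
)
cpus_df = cpus_df.sort_values(by="displayName").reset_index(drop=True)

print(cpus_df["displayName"].tolist())  # ['AMD EPYC 7763']
```

Without the assignments, the whitespace normalization and sort are silently discarded, which is exactly the bug the fixed diff above avoids.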