Hi @richardxp888 🤗
I'm Niels and I work as part of the open-source team at Hugging Face. I discovered your work through Hugging Face's daily papers, as yours got featured: https://huggingface.co/papers/2604.04202.
The paper page lets people discuss your paper and find its artifacts (your dataset, for instance). You can also claim the paper as yours, which will show up on your public profile at HF, and add GitHub and project page URLs.
Would you like to host the ClawArena benchmark dataset you've released on https://huggingface.co/datasets?
I see you're currently hosting the data files within your GitHub repository. Hosting on Hugging Face will give your dataset more visibility and better discoverability, and will also allow people to load it with:
```python
from datasets import load_dataset

dataset = load_dataset("aiming-lab/ClawArena")
```
If you're interested, here's a guide: https://huggingface.co/docs/datasets/loading.
We also support various formats that are common for agent benchmarks.
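For instance, JSONL works out of the box with both `load_dataset` and the dataset viewer. Here's a minimal sketch of converting scenario records to JSONL; the field names are just placeholders, your actual schema will differ:

```python
import json

# Hypothetical example records; replace with your actual scenarios.
scenarios = [
    {"id": "scenario_001", "task": "web navigation", "rounds": 3},
    {"id": "scenario_002", "task": "tool use", "rounds": 5},
]

# Write one JSON object per line (JSONL); `load_dataset` and the
# dataset viewer pick this format up automatically.
with open("train.jsonl", "w") as f:
    for record in scenarios:
        f.write(json.dumps(record) + "\n")
```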
Besides that, there's the dataset viewer which allows people to quickly explore the scenarios and evaluation rounds in the browser.
Once uploaded, we can also link the dataset to the paper page (read here) so people can discover your work.
Let me know if you're interested or need any guidance.
Kind regards,
Niels