- Set up
- Run the judge
- Kaggle submission
- Compare solutions
- Solution compression
- Fetch the code golf scoreboard
- Prompts for LLMs
- Scripts
- CI/CD
- References
- Implement your solution for each task in the `solutions/` directory.
- Run the judge to test your solution.
- Submit your solution to the Kaggle competition.
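The judge defines the exact contract each solution file must satisfy; purely as an illustrative sketch, assuming each `solutions/taskXXX.py` exposes a function `p` that maps an input grid (a list of lists of ints) to the output grid:

```python
# solutions/task001.py -- hypothetical example; the real entry-point name
# and signature are whatever the judge expects.
def p(g):
    # Identity transform: return the input grid unchanged.
    return g
```

Since this is code golf, the aim is to keep each file passing the judge in as few bytes as possible.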
## Set up

```bash
uv sync
bash scripts/setup_hooks.sh  # Set up git hooks for commit/push validation
```

## Run the judge

```bash
uv run judge [-h] [-w SOLUTION_DIR] [-a] [-t TASK_NUM] [-n NUM_PARALLEL] [-s] [-x TIMEOUT] [-l LOG_LEVEL]
```

Options:

- `-h`, `--help`: Show help message and exit
- `-w`, `--solution_dir`: Directory containing solution files
- `-a`, `--run_all`: Run all tasks
- `-t`, `--task_num`: Run a specific task (1 <= TASK_NUM <= 400; ignored if `-a` is set)
- `-n`, `--num_parallel`: Number of parallel processes (default: `-1`, the same as the number of logical cores)
- `-s`, `--show_image`: Show an interactive image with Matplotlib
- `-x`, `--timeout`: Timeout in seconds for each task evaluation (default: `300.0`)
- `-l`, `--log_level`: Set log level (default: `logging.INFO`, i.e. `20`)
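For example, to evaluate every task, or a single task with the Matplotlib viewer:

```bash
uv run judge -a        # run all 400 tasks
uv run judge -t 42 -s  # run task 42 and show the grids with Matplotlib
```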
## Kaggle submission

- Go to https://www.kaggle.com/settings
- Scroll down to the "API" section and click on "Create New Token"
- This will download a file called `kaggle.json`. Move this file to `~/.kaggle/`:

  ```bash
  mkdir -p ~/.kaggle
  cp ~/Downloads/kaggle.json ~/.kaggle/
  ```

- Make sure the permissions are correct:

  ```bash
  chmod 600 ~/.kaggle/kaggle.json
  ```

- Run the commands below, or directly upload `submission/submission.zip` to the Kaggle competition page:

  ```bash
  # You can set environment variables like this.
  # If they are not set, ~/.kaggle/kaggle.json will be used.
  export KAGGLE_USERNAME="your_username"
  export KAGGLE_KEY="your_api_key"

  uv run submit

  # If you want to skip creating the submission.zip file, add the -s option
  uv run submit -s
  ```
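If the official Kaggle CLI happens to be installed in your environment, you can confirm the upload afterwards (the competition slug is a placeholder here):

```bash
kaggle competitions submissions -c <competition-slug>
```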
## Compare solutions

We provide a way to compare the performance of different solutions.
```bash
uv run compare [-h] [-w WORKING_DIR] [-n NUM_PARALLEL] [-d] [-x TIMEOUT] [-l LOG_LEVEL]
```

Options:

- `-h`, `--help`: Show help message and exit
- `-w`, `--working_dir`: Directory containing new solutions (the child file names should be `taskXXX.py`)
- `-n`, `--num_parallel`: Number of parallel processes (default: `-1`, the same as the number of logical cores)
- `-d`, `--dry_run`: Perform a trial run without making any changes
- `-x`, `--timeout`: Timeout in seconds for each task evaluation (default: `300.0`)
- `-l`, `--log_level`: Set log level (default: `logging.INFO`, i.e. `20`)
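For example, to preview how a batch of candidate solutions would compare without changing anything (the directory name is just an example):

```bash
uv run compare -w working/candidates -d
```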
## Solution compression

We provide a way to compress and decompress your solutions to minimize the size of your submission.
```bash
uv run compress [-h] [-s SOURCE] [-d DESTINATION] [-n NUM_PARALLEL]
```

Options:

- `-h`, `--help`: Show help message and exit
- `-s`, `--source`: Source directory containing solution files to compress (default: `compression/raw`)
- `-d`, `--destination`: Destination directory to save compressed files (default: `compression/compressed`)
- `-n`, `--num_parallel`: Number of parallel processes (default: `-1`, the same as the number of logical cores)

```bash
uv run decompress [-h] [-s SOURCE] [-d DESTINATION]
```

Options:

- `-h`, `--help`: Show help message and exit
- `-s`, `--source`: Source directory containing compressed solution files to decompress (default: `solutions`)
- `-d`, `--destination`: Destination directory to save decompressed files (default: `compression/raw`)
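With the default directories, a typical round trip looks like this:

```bash
uv run decompress  # solutions/ -> compression/raw
# ...edit the files in compression/raw...
uv run compress    # compression/raw -> compression/compressed
```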
## Fetch the code golf scoreboard

We provide a script that automatically fetches the code golf championship scoreboard.

```bash
uv run fetch
```

You can also compare the scores of your current solutions against the scoreboard by running the command below:

```bash
uv run scoreboard
```

## Prompts for LLMs

We provide prompt generators to easily paste prompts into ChatGPT or Claude.
```
usage: prompt [-h] -t TASK [-c] [-n NUM_EXAMPLES] [-b BEST_BYTES]
```

Options:

- `-h`, `--help`: Show help message and exit
- `-t`, `--task`: Task number (1-400)
- `-c`, `--prefer-compressed`: Prefer `compression/compressed` over `solutions`
- `-n`, `--num-examples`: Cap the number of train examples (default: all)
- `-b`, `--best-bytes`: Override best-known bytes
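For example, assuming the generator is exposed through `uv run` like the other entry points, to build a prompt for task 123 capped at three train examples and preferring the compressed solution:

```bash
uv run prompt -t 123 -n 3 -c
```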
## Scripts

- `make_submission.sh`: Create a submission ZIP file from the solutions.

  ```bash
  bash scripts/make_submission.sh
  ```
- `setup_hooks.sh`: Set up git hooks for commit/push validation.

  ```bash
  bash scripts/setup_hooks.sh
  ```
- `trim_spaces.sh`: Trim unnecessary spaces from all files in the target directory.

  ```bash
  bash scripts/trim_spaces.sh [TARGET_DIR]
  ```
- `minify_solutions.py`: Minify all solution files in the target directory using `python-minifier`.

  ```bash
  python working/minify_solutions.py [-h] [-d DIRECTORY] [-w]
  ```

  Options:

  - `-h`, `--help`: Show help message and exit
  - `-d`, `--directory`: Directory containing the solution files to minify (default: `compression/raw`)
  - `-w`, `--overwrite`: Overwrite existing files
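  For example, to minify the files in the default `compression/raw` directory and overwrite them in place:

  ```bash
  python working/minify_solutions.py -w
  ```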
- `variable_optimization.py`: Optimize variable names in all solution files in the target directory.

  ```bash
  python working/variable_optimization.py [-h] [-w WORKING_DIR] [-t TASK_NUM] [-c COUNT] [-m MAX_CHANGES] [-l LOG_LEVEL] [-n NUM_PARALLEL]
  ```

  Options:

  - `-h`, `--help`: Show help message and exit
  - `-w`, `--working_dir`: Directory containing new solutions (the child file names should be `taskXXX.py`)
  - `-t`, `--task_num`: Run a specific task (1 <= TASK_NUM <= 400)
  - `-c`, `--count`: Number of optimization attempts (default: `1000`)
  - `-m`, `--max_changes`: Maximum number of variable name changes per attempt (default: `8`)
  - `-l`, `--log_level`: Set log level (default: `logging.INFO`, i.e. `20`)
  - `-n`, `--num_parallel`: Number of parallel processes (default: `-1`, the same as the number of logical cores)
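  For example, to attempt variable-name optimization on task 42 only, with at most 4 renames per attempt:

  ```bash
  python working/variable_optimization.py -t 42 -m 4
  ```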
## CI/CD

We use GitHub Actions to automate the testing and validation of submissions. The workflow is defined in `.github/workflows/auto-judge.yml`.

The workflow pushes an additional commit to the repository with the validation results, so you must run `git pull` before making any further changes to the code.
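A safe local loop therefore looks like this (the file name and commit message are placeholders):

```bash
git pull                         # pick up the CI-generated results commit first
uv run judge -a                  # verify your changes locally
git add solutions/task042.py
git commit -m "Shorten task 42"
git push                         # triggers auto-judge.yml again
```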