RubikNet is a project exploring how AI and robotics can solve the Rubik’s Cube using Reinforcement Learning, Deep Learning, and Search-based Solvers.
This repository is organized into three main components:
This folder contains our experiments with RL agents trained on classic control environments.
We have implemented and solved the following environments:
- CartPole – balancing a pole on a cart
- Taxi – navigating a taxi to pick up and drop off passengers
- MountainCar – driving a car up a steep hill with limited power
- Blackjack – learning strategies for the card game using Monte Carlo methods
These experiments build the foundation for applying RL to more complex tasks like solving the Rubik’s Cube.
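As a flavor of the approach used in these experiments, here is a minimal, self-contained sketch of tabular Q-learning (one of the standard methods for Taxi-style environments) on a hypothetical toy "chain" environment. The environment, hyperparameters, and names below are illustrative, not the repo's actual code.

```python
import random

# Hypothetical toy chain environment: states 0..4, start at state 0.
# Actions: 0 = move left, 1 = move right; reaching state 4 gives reward 1.
N_STATES, GOAL = 5, 4

def step(state, action):
    nxt = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Tabular Q-learning: nudge Q[s][a] toward r + gamma * max_a' Q[s'][a'].
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1
random.seed(0)

for episode in range(200):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy policy should move right toward the goal.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)
```

The same update rule underlies the Taxi and CartPole-style experiments; Gym-based environments just replace the hand-rolled `step` function here.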
This folder is designed as an introduction to deep learning.
- Covers the basics of neural networks and how they learn.
- Hands-on project:
  - Fashion Classification (Fashion-MNIST)
- Helps build the foundation needed before moving on to Reinforcement Learning and Cube Solvers.
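The basics covered here (forward pass, backpropagation, gradient descent) can be illustrated with a from-scratch sketch on the classic XOR problem. The architecture and hyperparameters below are illustrative and not taken from the folder's notebooks.

```python
import numpy as np

# Tiny two-layer network trained on XOR: tanh hidden layer,
# sigmoid output, mean-squared-error loss, plain gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(2000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))

    # Backward pass (chain rule through MSE, sigmoid, then tanh)
    d_out = 2 * (out - y) / len(X) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0, keepdims=True)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0, keepdims=True)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The Fashion-MNIST project applies the same ideas at scale, with a real dataset and a deep learning framework handling the gradients.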
This folder contains the solver logic for the Rubik’s Cube.
- Implemented ADI (Autodidactic Iteration), a self-supervised learning method, for cube solving.
See this folder's README for instructions on running the full solver.
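The core of ADI is generating training targets by scrambling away from the solved state and scoring each state with a one-step lookahead over its children. The sketch below illustrates that step on a hypothetical 3-element permutation puzzle standing in for the cube, with a plain dict replacing the value network so the loop stays self-contained; all names here are illustrative, not the repo's actual code.

```python
import random

# Toy puzzle: a 3-cycle with two moves that are inverses of each other.
SOLVED = (0, 1, 2)

def apply_move(state, move):
    # move 0: rotate left; move 1: rotate right (inverse of move 0)
    if move == 0:
        return (state[1], state[2], state[0])
    return (state[2], state[0], state[1])

MOVES = (0, 1)
value = {SOLVED: 0.0}  # stand-in for the value network; v(solved) = 0

def adi_targets(state):
    """One-step lookahead: best target value and move for `state`."""
    best_v, best_move = None, None
    for m in MOVES:
        child = apply_move(state, m)
        reward = 1.0 if child == SOLVED else -1.0
        v = reward + value.get(child, 0.0)
        if best_v is None or v > best_v:
            best_v, best_move = v, m
    return best_v, best_move

# Generate training pairs by scrambling away from the solved state,
# as ADI does: states near solved get the most reliable targets.
random.seed(0)
state = SOLVED
for depth in range(1, 4):
    state = apply_move(state, random.choice(MOVES))
    target_v, target_move = adi_targets(state)
    value[state] = target_v  # in real ADI, this is a network update
    print(depth, state, target_v, target_move)
```

In the actual solver, `apply_move` is the cube's move table, `value` is a deep network trained on these targets, and the learned value function guides a search over move sequences.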
We recommend using uv for environment management.
To set up the project:
```bash
# Clone the repository
git clone https://github.com/Archaive16/RubikNet

# Install uv (if not already installed)
pip install uv

# Sync dependencies
uv sync
```

- SRA VJTI, Eklavya 2025
- Special thanks to our mentors, Ansh Semwal and Akash Kawle.