This is an attempt to create a dance generator AI, inspired by this video by @carykh
It has two main components (sketched below):
- Variational autoencoder
- LSTM + Mixture Density Layer
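A rough Keras sketch of how the two components fit together (this is an illustration, not the repository's exact architecture; the frame size, latent size, LSTM width, and number of mixture components are assumptions):

```python
from keras.layers import Input, Conv2D, Flatten, Dense, Lambda, LSTM, concatenate
from keras.models import Model
from keras import backend as K

latent_dim = 128   # assumed size of the frame embedding
seq_len = 10       # assumed number of past frames the LSTM sees
n_mix = 3          # assumed number of Gaussian mixture components

# --- Variational autoencoder (encoder half; the decoder mirrors it) ---
frame = Input(shape=(120, 208, 1))                                  # assumed frame size
h = Conv2D(32, (3, 3), strides=2, activation='relu', padding='same')(frame)
h = Conv2D(64, (3, 3), strides=2, activation='relu', padding='same')(h)
h = Flatten()(h)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

def sample_z(args):
    mean, log_var = args
    eps = K.random_normal(shape=K.shape(mean))
    return mean + K.exp(0.5 * log_var) * eps                        # reparameterization trick

z = Lambda(sample_z)([z_mean, z_log_var])
encoder = Model(frame, z)

# --- LSTM + Mixture Density Layer over sequences of latent vectors ---
latent_seq = Input(shape=(seq_len, latent_dim))
lstm_out = LSTM(512)(latent_seq)
pi = Dense(n_mix, activation='softmax')(lstm_out)                   # mixture weights
mu = Dense(n_mix * latent_dim)(lstm_out)                            # component means
sigma = Dense(n_mix * latent_dim, activation='softplus')(lstm_out)  # component std-devs
dancenet = Model(latent_seq, concatenate([pi, mu, sigma]))
dancenet.summary()
```

Predicting a full mixture instead of a single next latent vector is what lets the generated dance stay varied rather than collapsing to an average pose.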
Requirements (Python 3.5.2):
- keras==2.2.0
- sklearn==0.19.1
- numpy==1.14.3
- opencv-python==3.4.1
This video was used for training: https://www.youtube.com/watch?v=NdSqAAT28v0
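If you want to prepare the training data yourself, the video's frames can be dumped into the `imgs/` folder expected by the training steps below. A minimal OpenCV sketch; the local filename, grayscale conversion, and frame size are assumptions:

```python
import os
import cv2

os.makedirs('imgs', exist_ok=True)
cap = cv2.VideoCapture('dance_video.mp4')    # hypothetical local copy of the training video
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    count += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (208, 120))      # assumed width x height
    cv2.imwrite('imgs/{}.jpg'.format(count), gray)
cap.release()
print('wrote {} frames'.format(count))
```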
To run locally:
- Download the trained weights from here and extract them to the dancenet dir.
- Run `dancegen.ipynb`

To run in your browser on FloydHub:
- Click the button above to open this code in a FloydHub workspace (the trained weights dataset will be automatically attached to the environment)
- Run `dancegen.ipynb`
To train from scratch:
- Fill dance sequence images labeled as `1.jpg`, `2.jpg`, ... in the `imgs/` folder
- Run `model.py`
- Run `gen_lv.py` to encode images
- Run `video_from_lv.py` to test decoded video
- Run the jupyter notebook `dancegen.ipynb` to train dancenet and generate new video (see the sampling sketch below)
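For intuition on the generation step: once dancenet is trained, the LSTM's mixture-density output is sampled to get the next latent vector, which the VAE decoder turns back into a frame. The helper below is a hypothetical sketch; the parameter layout, dimensions, and the commented rollout (including the `dancenet`/`decoder` model names) are assumptions, not the notebook's exact code:

```python
import numpy as np

latent_dim = 128   # assumed latent size, matching the sketch above
n_mix = 3          # assumed number of mixture components

def sample_from_mixture(params):
    """Split a concatenated MDN output into (pi, mu, sigma) and draw one latent vector."""
    pi = params[:n_mix]
    mu = params[n_mix:n_mix + n_mix * latent_dim].reshape(n_mix, latent_dim)
    sigma = params[n_mix + n_mix * latent_dim:].reshape(n_mix, latent_dim)
    k = np.random.choice(n_mix, p=pi / pi.sum())   # pick a mixture component
    return np.random.normal(mu[k], sigma[k])       # sample within that component

# Quick self-test with random parameters:
demo = np.random.rand(n_mix + 2 * n_mix * latent_dim)
print(sample_from_mixture(demo).shape)             # -> (128,)

# Hypothetical autoregressive rollout, assuming trained `dancenet` and `decoder`
# models and a seed window of shape (seq_len, latent_dim):
# for _ in range(500):
#     params = dancenet.predict(window[None, ...])[0]
#     z_next = sample_from_mixture(params)
#     frame = decoder.predict(z_next[None, :])[0]   # decode latent back to an image
#     window = np.vstack([window[1:], z_next[None, :]])
```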