# DanceNet - Dance generator using Variational Autoencoder, LSTM and Mixture Density Network (Keras)

License: MIT

This is an attempt to create a dance-generating AI, inspired by this video by @carykh.

Main components:

- Variational autoencoder
- LSTM + Mixture Density Network layer
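
The LSTM + MDN pairing is the sequence model: the LSTM summarizes the preceding VAE latent vectors, and the mixture density layer outputs the parameters of a Gaussian mixture over the next latent vector. Below is a minimal Keras sketch of that idea; the latent dimension, mixture count, window length, and layer sizes are illustrative assumptions, not values taken from this repository.

```python
# Minimal sketch of an LSTM + MDN over VAE latents (all sizes are assumptions).
import numpy as np
from keras import backend as K
from keras.layers import Input, LSTM, Dense, Concatenate
from keras.models import Model

LATENT_DIM = 128  # size of each VAE latent vector (assumed)
N_MIX = 24        # number of Gaussian mixture components (assumed)
SEQ_LEN = 120     # window of past frames fed to the LSTM (assumed)

# The LSTM reads a window of past latent vectors...
inp = Input(shape=(SEQ_LEN, LATENT_DIM))
h = LSTM(512, return_sequences=True)(inp)
h = LSTM(512)(h)

# ...and the mixture density layer emits a Gaussian mixture over the next
# latent vector: mixing weights, component means, and (isotropic) std-devs.
pi = Dense(N_MIX, activation='softmax')(h)
mu = Dense(N_MIX * LATENT_DIM)(h)
sigma = Dense(N_MIX, activation=K.exp)(h)  # exp keeps std-devs positive

mdn_rnn = Model(inp, Concatenate()([pi, mu, sigma]))

def mdn_loss(y_true, y_pred):
    """Negative log-likelihood of the true next latent under the mixture."""
    pi = y_pred[:, :N_MIX]
    mu = K.reshape(y_pred[:, N_MIX:N_MIX + N_MIX * LATENT_DIM],
                   (-1, N_MIX, LATENT_DIM))
    sigma = y_pred[:, -N_MIX:]
    y = K.expand_dims(y_true, axis=1)  # (batch, 1, LATENT_DIM)
    log_gauss = (-0.5 * K.sum(K.square((y - mu) / K.expand_dims(sigma, -1)), axis=-1)
                 - LATENT_DIM * K.log(sigma)
                 - 0.5 * LATENT_DIM * np.log(2.0 * np.pi))
    return -K.logsumexp(K.log(pi + 1e-8) + log_gauss, axis=-1)

# x: windows of consecutive latents; y: the latent vector that follows each window.
mdn_rnn.compile(optimizer='adam', loss=mdn_loss)
```

Generation is then the usual MDN sampling loop: pick a component according to `pi`, sample a latent vector from its Gaussian, decode it with the VAE, and feed the sample back in for the next step.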

Requirements:

- Python 3.5.2

  Packages:

  - keras==2.2.0
  - scikit-learn==0.19.1 (the PyPI name for `sklearn`)
  - numpy==1.14.3
  - opencv-python==3.4.1
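
Assuming the packages come from PyPI, the pinned versions can be installed in one line:

```
pip install keras==2.2.0 scikit-learn==0.19.1 numpy==1.14.3 opencv-python==3.4.1
```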

## Dataset

The model was trained on this video: https://www.youtube.com/watch?v=NdSqAAT28v0

## How to run locally

- Download the trained weights from here and extract them to the `dancenet` dir.
- Run `dancegen.ipynb`.

## How to run in your browser

Run on FloydHub

- Click the button above to open this code in a FloydHub workspace (the trained-weights dataset will be attached to the environment automatically).
- Run `dancegen.ipynb`.

## Training from scratch

- Fill the `imgs/` folder with dance-sequence frames labeled `1.jpg`, `2.jpg`, ... (a minimal extraction sketch follows this list).
- Run `model.py`.
- Run `gen_lv.py` to encode the images into latent vectors.
- Run `video_from_lv.py` to test the decoded video.
- Run the Jupyter notebook `dancegen.ipynb` to train DanceNet and generate new videos.
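
For the first step above, here is a minimal OpenCV sketch (hypothetical, not the repository's code) that fills `imgs/` with numbered frames; the input filename `dance.mp4` is an assumption for the downloaded training video:

```python
# Extract every frame of the training video into imgs/1.jpg, imgs/2.jpg, ...
import os
import cv2

os.makedirs('imgs', exist_ok=True)
cap = cv2.VideoCapture('dance.mp4')  # downloaded training video (assumed name)
i = 1
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of video
    cv2.imwrite('imgs/%d.jpg' % i, frame)
    i += 1
cap.release()
```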

## References