The programs in this repository train and use a fully convolutional neural
network to take an image and classify each of its pixels. The network is
transfer-trained based on the VGG-16 model, using the approach described in
this paper by Jonathan Long et al.
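In rough strokes, that approach replaces VGG-16's fully connected layers with convolutions, reduces the feature maps to `num_classes` channels with 1x1 convolutions, and upsamples back to input resolution with transposed convolutions and skip connections. A sketch in the TensorFlow 1.x style; the argument names and kernel sizes are assumptions based on the paper, not this repository's actual graph:

```python
# Hedged sketch of an FCN-8s-style head on top of VGG-16 features.
# layer3/layer4 are the pool3/pool4 outputs; layer7 is the
# convolutionalized fc7 output. Names and sizes are illustrative.
import tensorflow as tf

def fcn_head(layer3, layer4, layer7, num_classes):
    # 1x1 convolutions: per-pixel class scores at three scales
    score7 = tf.layers.conv2d(layer7, num_classes, 1)
    score4 = tf.layers.conv2d(layer4, num_classes, 1)
    score3 = tf.layers.conv2d(layer3, num_classes, 1)

    # upsample 2x and fuse with the pool4 skip connection
    up7 = tf.layers.conv2d_transpose(score7, num_classes, 4, strides=2,
                                     padding='same')
    fuse4 = tf.add(up7, score4)

    # upsample 2x again and fuse with the pool3 skip connection
    up4 = tf.layers.conv2d_transpose(fuse4, num_classes, 4, strides=2,
                                     padding='same')
    fuse3 = tf.add(up4, score3)

    # final 8x upsampling back to the input resolution
    return tf.layers.conv2d_transpose(fuse3, num_classes, 16, strides=8,
                                      padding='same')
```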
The software is generic and easily extendable to any dataset, although so far
I have only tried it with the KITTI Road Dataset and the Cityscapes dataset.
All you need to do to introduce a new dataset is to create a new
`source_xxxxxx.py` file defining your dataset.
The definition is a class that contains seven attributes:
- `image_size` - self-evident; both the horizontal and vertical dimensions need to be divisible by 32
- `num_classes` - the number of classes that the model is supposed to handle
- `label_colors` - a dictionary mapping a class number to a color; used for blending the classification results with the input image
- `num_training` - the number of training samples
- `num_validation` - the number of validation samples
- `train_generator` - a generator producing training batches
- `valid_generator` - a generator producing validation batches
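A minimal sketch of what such a file might look like; the module and class names (`source_example.py`, `ExampleSource`) and the batch shapes are illustrative assumptions, not code from this repository:

```python
# source_example.py - a hypothetical data source; only the seven
# attributes listed above are required, everything else is illustrative.
import numpy as np

class ExampleSource:
    def __init__(self):
        self.image_size   = (384, 160)        # both divisible by 32
        self.num_classes  = 2                 # e.g. background and road
        self.label_colors = {0: (0,   0, 0),  # class id -> RGB color
                             1: (255, 0, 255)}
        self.num_training   = 100
        self.num_validation = 20
        self.train_generator = self._batches(batch_size=8)
        self.valid_generator = self._batches(batch_size=8)

    def _batches(self, batch_size):
        # A real source would read images and one-hot labels from disk;
        # here we just yield zero-filled arrays of the right shape.
        w, h = self.image_size
        while True:
            images = np.zeros((batch_size, h, w, 3), dtype=np.float32)
            labels = np.zeros((batch_size, h, w, self.num_classes),
                              dtype=np.float32)
            yield images, labels
```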
See `source_kitti.py` or `source_cityscapes.py` for concrete examples. The
trainer picks the source based on the value of the `--data-source` parameter.
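A plausible sketch of how such a parameter might be mapped to a `source_*.py` module; the `get_source()` factory is an assumption about the module interface, not the repository's actual API:

```python
# Sketch of data-source selection via dynamic module import.
import argparse
import importlib

parser = argparse.ArgumentParser()
parser.add_argument('--data-source', default='kitti',
                    help='suffix of the source_<name>.py module to load')
args = parser.parse_args()

# e.g. --data-source kitti  ->  import source_kitti
module = importlib.import_module('source_' + args.data_source)
source = module.get_source()  # hypothetical factory returning the data source
print(source.num_classes, source.image_size)
```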
Training the model on the KITTI Road Dataset essentially means that
`infer.py` will be able to take images from a car's dashcam and paint the
road pink. It generalizes fairly well even to pretty complicated cases:
The model that produced the above images was trained for 500 epochs on the images contained in this zip file. The training program fills TensorBoard with the loss summary and a sneak peek of the current performance on validation examples. The top row contains the ground truth and the bottom one the network's output.
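The pink overlay in the images above comes from blending the network's per-pixel classification with the input image via the `label_colors` mapping. A minimal sketch of such blending; the function and argument names are my own, not taken from `infer.py`:

```python
# Blend per-pixel class predictions into the input image using the
# label_colors mapping; names here are illustrative, not from infer.py.
import numpy as np

def blend(image, class_map, label_colors, alpha=0.5):
    # image: (H, W, 3) uint8, class_map: (H, W) integer class ids
    overlay = np.zeros_like(image)
    for class_id, color in label_colors.items():
        overlay[class_map == class_id] = color
    # weighted sum keeps the scene visible under the class colors
    return (alpha * image + (1 - alpha) * overlay).astype(np.uint8)
```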
The Cityscapes dataset is more complex than the previous one: it has fine image annotations for 29 classes of objects. The images are video frames taken in German cities, and there is around 11 GB of them.
The model that produced the images was trained for 150 epochs.