### Description

The repo is forked from https://github.com/ultralytics/yolov3 and contains inference and training code for YOLOv3 in PyTorch.

### Setup
Make sure you have followed `setup-gcloud.md` and are logged into the corresponding GCloud instance. In particular, make sure you have run the `setup.py` command as directed there.

Then run the following. Note that the CSVs passed to the dataset-generation script must reference `gs:` URIs, not 'storage.api' URLs:
```
source /home/mit-dut-driverless-internal/venvs/cv/bin/activate
# log in through the browser on your local machine
gcloud auth application-default login
# Note: the csv_uri must contain at least 100 images, and must reference gs: locations rather than local paths
python generate_dataset_csvs.py --csv_uris \
    gs://mit-dut-driverless-internal/data-labels/fullFrameTest.csv \
    --output_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/fullFrameNonSquareTest/

python train.py --model_cfg model_cfgs/yolov3_80class.cfg \
    --validate_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/hive-0-1-2-test.csv \
    --train_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/hive-0-1-2-train.csv \
    --output_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/quickstart/ \
    --study_name color_80class \
    --weights_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/92da791a96de485895ea219f7035c2aa/36.weights 2>&1 | tee results/color_416_baseline.log
```
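The requirements noted in the comment above (at least 100 images, all referenced by `gs://` URIs) can be sanity-checked before launching a job. A minimal sketch, assuming a CSV whose first column holds the image URI; the example file and column layout below are illustrative, not the repo's actual schema:

```shell
# Sketch: sanity-check a dataset CSV before passing it to generate_dataset_csvs.py.
# A tiny example CSV is written here for illustration; point $csv at your real file.
csv=example.csv
cat > "$csv" <<'EOF'
image_uri,label
gs://bucket/img1.jpg,cone
https://storage.googleapis.com/bucket/img2.jpg,cone
EOF

rows=$(($(wc -l < "$csv") - 1))                             # data rows (minus header)
bad=$(awk -F, 'NR > 1 && $1 !~ /^gs:\/\//' "$csv" | wc -l)  # rows whose URI is not gs://

echo "rows=$rows bad_uris=$bad"
if [ "$rows" -lt 100 ] || [ "$bad" -gt 0 ]; then
    echo "CSV not ready: need >= 100 rows, all URIs starting with gs://"
fi
```

Adjust the `awk` field index if the URI is not the first column of your CSV.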

A similar command on a single line (note the different study name and output location):
```
python train.py --model_cfg model_cfgs/yolov3_80class.cfg --validate_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/hive-0-1-2-test.csv --train_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/hive-0-1-2-train.csv --output_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/color_416_baseline --study_name color_416_baseline 2>&1 | tee logs/color_416_baseline.log
```

Back up the logs:
```
gsutil cp logs/color_416_baseline.log gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/color_416_baseline/color_416_baseline.log
```

To run detection on one image and visualize the results:
```
python detect.py --weights_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/sample-yolov3.weights --image_uri gs://mit-dut-driverless-external/HiveAIRound2/vid_38_frame_956.jpg --output_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/fullFrameNonSquareTest2/vid38f956.jpg --img_width 2048 --img_height 1536 --model_cfg model_cfgs/yolov3_80class_fullFrame.cfg
```
To run detection on multiple images of your choosing, write a bash file similar to:
```
run_scripts/run_detect.sh
```
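Such a file is essentially a loop over image URIs. A minimal sketch that prints one `detect.py` invocation per image and saves them to a script you can review before running; the image URIs, weights, and output paths below are placeholders modeled on the single-image example, not real artifacts:

```shell
# Sketch: generate one detect.py command per image URI (placeholder paths).
weights=gs://mit-dut-driverless-internal/vectorized-yolov3-training/sample-yolov3.weights
out_dir=gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/fullFrameNonSquareTest2

for uri in \
    gs://mit-dut-driverless-external/HiveAIRound2/vid_38_frame_956.jpg \
    gs://mit-dut-driverless-external/HiveAIRound2/vid_38_frame_957.jpg
do
    name=$(basename "$uri")  # reuse the frame filename for the output
    echo "python detect.py --weights_uri $weights --image_uri $uri" \
         "--output_uri $out_dir/$name --img_width 2048 --img_height 1536" \
         "--model_cfg model_cfgs/yolov3_80class_fullFrame.cfg"
done | tee run_detect_generated.sh
```

Once the printed commands look right, run them with `bash run_detect_generated.sh`.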
To copy the visualizations to a local machine (with the GCloud client installed): `gsutil -m cp -r gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/color_416_baseline/visualization .`

You can also add the following to your `~/.bashrc` to make things easier:
```
cd /home/mit-dut-driverless-internal/cv-cone-town/vectorized-yolov3/
source ../../venvs/cv/bin/activate
```

You can create a video from frames extracted from a video using:
```
run_scripts/make_video.sh
```
You'll need to adjust the hardcoded paths at the top of the file.
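For reference, the core of such a script is typically a single `ffmpeg` call. A sketch assuming frames named `frame_0001.jpg`, `frame_0002.jpg`, ... in a local `frames/` directory and a 10 fps output; the actual `make_video.sh` may use different paths, naming, and settings:

```shell
# Sketch only: frame naming, directory, and frame rate are assumptions,
# not what make_video.sh actually hardcodes.
ffmpeg -framerate 10 -i frames/frame_%04d.jpg \
    -c:v libx264 -pix_fmt yuv420p output.mp4
```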

To-Do:
- Splits:
  - Get splits working with online processing, re-train models
  - Understand why splits work so well
  - Add splits to the Xavier processing module
- Make experiment folders readable

### Bookmarked Models
- Color Full-Frame: `gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/92da791a96de485895ea219f7035c2aa/36.weights`
- Black and White Full-Frame: `gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/2c5485a8ee6847808459f54cac50ae8e/64.weights`
- Color Split-Frame: `gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/56070f8a1e454b2383c12d0fec37e3dc/104.weights`
- Black and White Split-Frame: `gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/701e0c805b4d4052a1798a4d9c3c5914/68.weights`