Commit e51af17 (parent: 0ba1ad3)

update installation

51 files changed: +7,682 −7,682 lines

Additions and deletions balance exactly, and every diff shown below removes and re-adds identical text, which suggests the commit re-normalized the files (for example, line endings or a Git LFS re-checkout) rather than changing their contents. Each file is therefore shown once below, with its diff stats.

.gitattributes (+2 −2)

The two removed and two re-added lines are identical; before and after, the file reads:

```
*.pt filter=lfs diff=lfs merge=lfs -text
*.weights filter=lfs diff=lfs merge=lfs -text
```
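These attributes route PyTorch checkpoints (`*.pt`) and Darknet weights (`*.weights`) through Git LFS: `filter=lfs diff=lfs merge=lfs` hands storage, diffing, and merging to the LFS filter, and `-text` turns off text normalization for these binaries. As a quick illustration of the line format (a minimal sketch; `lfs_patterns` is a hypothetical helper, not part of this repo):

```python
# Hypothetical helper, not part of this repo: list which glob patterns a
# .gitattributes file routes through Git LFS, i.e. lines of the form
#   <pattern> filter=lfs diff=lfs merge=lfs -text
def lfs_patterns(gitattributes_text):
    patterns = []
    for line in gitattributes_text.splitlines():
        parts = line.split()
        if len(parts) > 1 and "filter=lfs" in parts[1:]:
            patterns.append(parts[0])
    return patterns

text = ("*.pt filter=lfs diff=lfs merge=lfs -text\n"
        "*.weights filter=lfs diff=lfs merge=lfs -text")
print(lfs_patterns(text))  # ['*.pt', '*.weights']
```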

.gitignore (+22 −22)

The 22 removed and 22 re-added lines are identical; the file reads:

```
.pth
*.mp4
*.jpg
*.png
*.swp
**/utils/gs/
**/.DS_Store

**/.idea
**/*.egg-info/
**/__pycache__
**/build
*.py[cod]
**/bin/*
*.log
*.error
*.a
*.so
*.so.2
.ipynb_checkpoints
.python-version
*/gs/*
```
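Mostly standard Python and build artifacts. The one non-obvious pattern is `*.py[cod]`, whose character class covers `.pyc`, `.pyo`, and `.pyd` files in a single rule. Python's `fnmatch` shares that class syntax, so it can demonstrate the match (illustration only; `fnmatch` is not a full gitignore engine, and `**/` patterns behave differently there):

```python
import fnmatch

# Illustration only: fnmatch shares the [cod] character-class syntax that
# .gitignore uses in the "*.py[cod]" pattern above.
for name in ["model.pyc", "model.pyo", "model.pyd", "model.py"]:
    print(name, fnmatch.fnmatch(name, "*.py[cod]"))
# model.pyc True, model.pyo True, model.pyd True, model.py False
```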

CVC-YOLOv3/README.md (+72 −72)

The 72 removed and 72 re-added lines are identical; the file reads:
### Description

The repo is forked from https://github.com/ultralytics/yolov3 and contains inference and training code for YOLOv3 in PyTorch.

### Setup
Make sure to have followed `setup-gcloud.md` and be logged into the corresponding GCloud instance. In particular, make sure to have run the `setup.py` command as directed there.

Then run the following. Note that the CSVs given to the dataset-generation script must reference the `gs://` form of each link, not the 'storage.api' form:
```
source /home/mit-dut-driverless-internal/venvs/cv/bin/activate
# log in through the browser on your local machine
gcloud auth application-default login
# Note: each csv_uri must reference at least 100 images, by their gs:// locations rather than their local paths
python generate_dataset_csvs.py --csv_uris \
    gs://mit-dut-driverless-internal/data-labels/fullFrameTest.csv \
    --output_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/fullFrameNonSquareTest/

python train.py --model_cfg model_cfgs/yolov3_80class.cfg \
    --validate_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/hive-0-1-2-test.csv \
    --train_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/hive-0-1-2-train.csv \
    --output_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/quickstart/ \
    --study_name color_80class \
    --weights_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/92da791a96de485895ea219f7035c2aa/36.weights 2>&1 | tee results/color_416_baseline.log
```
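Before kicking off a run, it can help to sanity-check a label CSV against the two constraints noted above (gs:// URIs, at least 100 rows). A minimal sketch, assuming the image URI sits in the first CSV column with no header row; `check_label_csv` is a hypothetical helper, not part of this repo:

```python
import csv

# Hypothetical pre-flight check, not part of this repo: confirm a label CSV
# references gs:// URIs (not local paths) and contains at least 100 rows,
# matching the constraints noted in the command block above. Assumes the
# image URI is in the first column and there is no header row.
def check_label_csv(path, uri_column=0, min_rows=100):
    with open(path) as f:
        rows = [r for r in csv.reader(f) if r]
    bad = [r[uri_column] for r in rows if not r[uri_column].startswith("gs://")]
    if bad:
        raise ValueError(f"non-gs:// paths found, e.g. {bad[:3]}")
    if len(rows) < min_rows:
        raise ValueError(f"only {len(rows)} rows; need at least {min_rows}")

check_label_csv("fullFrameTest.csv")  # local copy of the CSV to validate
```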

The analogous command in one line (note that this one targets the `color_416_baseline` study and output URI):
```
python train.py --model_cfg model_cfgs/yolov3_80class.cfg --validate_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/hive-0-1-2-test.csv --train_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/hive-0-1-2-train.csv --output_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/color_416_baseline --study_name color_416_baseline 2>&1 | tee logs/color_416_baseline.log
```

Back up the logs:
```
gsutil cp logs/color_416_baseline.log gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/color_416_baseline/color_416_baseline.log
```

To run detection on one image and visualize the results:
```
python detect.py --weights_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/sample-yolov3.weights --image_uri gs://mit-dut-driverless-external/HiveAIRound2/vid_38_frame_956.jpg --output_uri gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/fullFrameNonSquareTest2/vid38f956.jpg --img_width 2048 --img_height 1536 --model_cfg model_cfgs/yolov3_80class_fullFrame.cfg
```
To run detection on multiple images of your choosing, write a bash script similar to:
```
run_scripts/run_detect.sh
```
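Such a script simply loops detect.py over a set of frames. A minimal Python sketch of the same loop (the image list and output prefix below are placeholders, reusing the flags from the single-image command above):

```python
import subprocess

# Sketch of a run_scripts/run_detect.sh-style loop, written in Python.
# The image list and OUT_DIR are placeholders; flags mirror the
# single-image detect.py example above.
WEIGHTS = "gs://mit-dut-driverless-internal/vectorized-yolov3-training/sample-yolov3.weights"
OUT_DIR = "gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/fullFrameNonSquareTest2"
images = [
    "gs://mit-dut-driverless-external/HiveAIRound2/vid_38_frame_956.jpg",
    # ...add more frame URIs here
]
for uri in images:
    name = uri.rsplit("/", 1)[-1]
    subprocess.run([
        "python", "detect.py",
        "--weights_uri", WEIGHTS,
        "--image_uri", uri,
        "--output_uri", f"{OUT_DIR}/{name}",
        "--img_width", "2048", "--img_height", "1536",
        "--model_cfg", "model_cfgs/yolov3_80class_fullFrame.cfg",
    ], check=True)
```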
And to copy the visualizations to a local machine (with the GCloud client installed): `gsutil -m cp -r gs://mit-dut-driverless-internal/vectorized-yolov3-training/january-experiments/color_416_baseline/visualization .`

You can also add the following to your ~/.bashrc to make things easier:
```
cd /home/mit-dut-driverless-internal/cv-cone-town/vectorized-yolov3/
source ../../venvs/cv/bin/activate
```

You can stitch frames extracted from a video back into a new video using:
```
run_scripts/make_video.sh
```
You'll need to adjust the hardcoded paths at the top of the file.

To-Do:

Splits:
- Get splits working with online processing, re-train models
- Understand why splits work so well
- Add splits to Xavier processing module
- Make experiment folders readable

### Bookmarked Models
- Color Full-Frame: gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/92da791a96de485895ea219f7035c2aa/36.weights
- Black and White Full-Frame: gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/2c5485a8ee6847808459f54cac50ae8e/64.weights
- Color Split-Frame: gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/56070f8a1e454b2383c12d0fec37e3dc/104.weights
- Black and White Split-Frame: gs://mit-dut-driverless-internal/vectorized-yolov3-training/december-experiments/701e0c805b4d4052a1798a4d9c3c5914/68.weights

CVC-YOLOv3/csrc/ROIAlign.h (+46 −46)

The 46 removed and 46 re-added lines are identical; the file reads:
```cpp
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
#pragma once

#include "cpu/vision.h"

#ifdef WITH_CUDA
#include "cuda/vision.h"
#endif

// Interface for Python.
// Forward pass: dispatches to the CUDA kernel when the input lives on the
// GPU, otherwise falls back to the CPU implementation.
at::Tensor ROIAlign_forward(const at::Tensor& input,
                            const at::Tensor& rois,
                            const float spatial_scale,
                            const int pooled_height,
                            const int pooled_width,
                            const int sampling_ratio) {
  if (input.type().is_cuda()) {
#ifdef WITH_CUDA
    return ROIAlign_forward_cuda(input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);
#else
    AT_ERROR("Not compiled with GPU support");
#endif
  }
  return ROIAlign_forward_cpu(input, rois, spatial_scale, pooled_height, pooled_width, sampling_ratio);
}

// Backward pass: CUDA only; no CPU gradient kernel is provided.
at::Tensor ROIAlign_backward(const at::Tensor& grad,
                             const at::Tensor& rois,
                             const float spatial_scale,
                             const int pooled_height,
                             const int pooled_width,
                             const int batch_size,
                             const int channels,
                             const int height,
                             const int width,
                             const int sampling_ratio) {
  if (grad.type().is_cuda()) {
#ifdef WITH_CUDA
    return ROIAlign_backward_cuda(grad, rois, spatial_scale, pooled_height, pooled_width, batch_size, channels, height, width, sampling_ratio);
#else
    AT_ERROR("Not compiled with GPU support");
#endif
  }
  AT_ERROR("Not implemented on the CPU");
}
```
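Both entry points take ROIs as rows of `(batch_index, x1, y1, x2, y2)` and dispatch to the CUDA kernel when the tensor lives on the GPU. This header implements essentially the operation that torchvision exposes publicly as `torchvision.ops.roi_align`; a quick interface check against that public op (a sketch, not this repo's own binding):

```python
import torch
from torchvision.ops import roi_align

# Sketch: exercise the ROIAlign interface through torchvision's public op,
# which implements essentially the same operation as ROIAlign_forward above
# (this is not this repo's own binding). ROIs are rows of
# (batch_index, x1, y1, x2, y2) in input-image coordinates.
features = torch.randn(1, 256, 32, 32)               # (N, C, H, W) feature map
rois = torch.tensor([[0.0, 4.0, 4.0, 20.0, 20.0]])   # one box in batch item 0
out = roi_align(features, rois, output_size=(7, 7),
                spatial_scale=1.0, sampling_ratio=2)
print(out.shape)  # torch.Size([1, 256, 7, 7])
```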

CVC-YOLOv3/csrc/ROIPool.h (+48 −48)

The 48 removed and 48 re-added lines are identical; the file reads:
```cpp
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
#pragma once

#include "cpu/vision.h"

#ifdef WITH_CUDA
#include "cuda/vision.h"
#endif

// Forward returns (output, argmax): argmax records which input element won
// each max, and is consumed by the backward pass. Unlike ROIAlign, no CPU
// kernel is provided in this file for either direction.
std::tuple<at::Tensor, at::Tensor> ROIPool_forward(const at::Tensor& input,
                                                   const at::Tensor& rois,
                                                   const float spatial_scale,
                                                   const int pooled_height,
                                                   const int pooled_width) {
  if (input.type().is_cuda()) {
#ifdef WITH_CUDA
    return ROIPool_forward_cuda(input, rois, spatial_scale, pooled_height, pooled_width);
#else
    AT_ERROR("Not compiled with GPU support");
#endif
  }
  AT_ERROR("Not implemented on the CPU");
}

at::Tensor ROIPool_backward(const at::Tensor& grad,
                            const at::Tensor& input,
                            const at::Tensor& rois,
                            const at::Tensor& argmax,
                            const float spatial_scale,
                            const int pooled_height,
                            const int pooled_width,
                            const int batch_size,
                            const int channels,
                            const int height,
                            const int width) {
  if (grad.type().is_cuda()) {
#ifdef WITH_CUDA
    return ROIPool_backward_cuda(grad, input, rois, argmax, spatial_scale, pooled_height, pooled_width, batch_size, channels, height, width);
#else
    AT_ERROR("Not compiled with GPU support");
#endif
  }
  AT_ERROR("Not implemented on the CPU");
}
```
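`ROIPool_forward` returns the pooled output together with an argmax tensor that the backward pass uses to route gradients, and unlike ROIAlign there is no CPU fallback here. The equivalent public op is `torchvision.ops.roi_pool`; a quick interface check (again a sketch, not this repo's own binding):

```python
import torch
from torchvision.ops import roi_pool

# Sketch: exercise the ROIPool interface via torchvision's public op, which
# implements essentially the same operation as ROIPool_forward above (the
# Python-level op returns only the pooled output, not the argmax tensor).
features = torch.randn(1, 256, 32, 32)               # (N, C, H, W) feature map
rois = torch.tensor([[0.0, 4.0, 4.0, 20.0, 20.0]])   # (batch_index, x1, y1, x2, y2)
out = roi_pool(features, rois, output_size=(7, 7), spatial_scale=1.0)
print(out.shape)  # torch.Size([1, 256, 7, 7])
```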
