README.md: 7 additions & 8 deletions
@@ -45,7 +45,7 @@ You can run iCatcher+ with the command:
`icatcher --help`

- which will list all available options. The description below will help you get more familiar with some common command line arguments.
+ Which will list all available options. Below we list some common options to help you get more familiar with iCatcher+. The pipeline is highly configurable; please see [the website](https://icatcherplus.github.io/) for more explanation about the flags.

### Annotating a Video

To produce annotations for a video file (if a folder is provided, all videos will be used for prediction):
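For instance (a sketch only, using the same placeholder path that appears with `--use_fc_model` below), annotating a single file takes the form:

`icatcher /path/to/my/video.mp4`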
@@ -58,31 +58,31 @@ To produce annotations for a video file (if a folder is provided, all videos wil
### Common Flags

- You can save a labeled video by adding:
+ - You can save a labeled video by adding:

`--output_video_path /path/to/output_folder`

- If you want to output annotations to a file, use:
+ - If you want to output annotations to a file, use:

[…]

See [Output format](#output-format) below for more information on how the files are formatted.

- To show the predictions online in a seperate window, add the option:
+ - To show the predictions online in a separate window, add the option:

`--show_output`

- To launch the iCatcher+ web app (after annotating), use:
+ - To launch the iCatcher+ [Web App](#web-app) (after annotating), use:

`icatcher --app`

The app should open automatically at [http://localhost:5001](http://localhost:5001). For more details, see [Web App](#web-app).

- Originally a face classifier was used to distinguish between adult and infant faces (however this can result in too much loss of data). It can be turned on by using:
+ - Originally, a face classifier was used to distinguish between adult and infant faces; however, this can result in too much loss of data. It can be turned on by using:

`icatcher /path/to/my/video.mp4 --use_fc_model`

- You can also add parameters to crop the video a given percent before passing to iCatcher:
+ - You can also add parameters to crop the video by a given percentage before passing it to iCatcher:

`--crop_mode m` where `m` is any of [top, left, right], specifying which side of the video to crop from (if not provided, the default is none; if `crop_percent` is provided but not `crop_mode`, the default is top)
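Putting a few of these together (a sketch only, assuming the flags can be combined in a single call and that `--crop_percent` takes an integer percentage, as the `--crop_mode` description above suggests; all paths are placeholders):

`icatcher /path/to/my/video.mp4 --output_video_path /path/to/output_folder --show_output --crop_mode top --crop_percent 20`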
@@ -97,7 +97,6 @@ Currently we support 3 output formats, though further formats can be added upon
-**ui:** needed for viewing results in the web app; produces a directory of the following structure

├── decorated_frames # dir containing annotated jpg files for each frame in the video
- ├── video.mp4 # the original video
├── labels.txt # file containing annotations in the `raw_output` format described above
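As a quick sanity check (with a placeholder path), listing a produced folder with `ls /path/to/output_folder` should show at least `decorated_frames` and `labels.txt` before viewing the results in the [Web App](#web-app).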