
Commit 07546a0

Update README.md
1 parent 4c06e8e commit 07546a0

1 file changed: +7, -8 lines

README.md

Lines changed: 7 additions & 8 deletions
@@ -45,7 +45,7 @@ You can run iCatcher+ with the command:
 
 `icatcher --help`
 
-which will list all available options. The description below will help you get more familiar with some common command line arguments.
+Which will list all available options. Below we list some common options to help you get more familiar with iCatcher+. The pipeline is highly configurable, please see [the website](https://icatcherplus.github.io/) for more explanation about the flags.
 
 ### Annotating a Video
 To produce annotations for a video file (if a folder is provided, all videos will be used for prediction):
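
Judging from the flag examples later in this diff, the basic annotation call simply passes a video (or folder) path to `icatcher`; a minimal sketch with a placeholder path:

```sh
# Annotate one video; pointing at a folder instead would annotate every video in it
icatcher /path/to/my/video.mp4
```
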
@@ -58,31 +58,31 @@ To produce annotations for a video file (if a folder is provided, all videos will be used for prediction):
 
 ### Common Flags
 
-You can save a labeled video by adding:
+- You can save a labeled video by adding:
 
 `--output_video_path /path/to/output_folder`
 
-If you want to output annotations to a file, use:
+- If you want to output annotations to a file, use:
 
 `--output_annotation /path/to/output_annotation_folder`
 
 See [Output format](#output-format) below for more information on how the files are formatted.
 
-To show the predictions online in a seperate window, add the option:
+- To show the predictions online in a seperate window, add the option:
 
 `--show_output`
 
-To launch the iCatcher+ web app (after annotating), use:
+- To launch the iCatcher+ [Web App](#web-app) (after annotating), use:
 
 `icatcher --app`
 
 The app should open automatically at [http://localhost:5001](http://localhost:5001). For more details, see [Web App](#web-app).
 
-Originally a face classifier was used to distinguish between adult and infant faces (however this can result in too much loss of data). It can be turned on by using:
+- Originally a face classifier was used to distinguish between adult and infant faces (however this can result in too much loss of data). It can be turned on by using:
 
 `icatcher /path/to/my/video.mp4 --use_fc_model`
 
-You can also add parameters to crop the video a given percent before passing to iCatcher:
+- You can also add parameters to crop the video a given percent before passing to iCatcher:
 
 `--crop_mode m` where `m` is any of [top, left, right], specifying which side of the video to crop from (if not provided, default is none; if crop_percent is provided but not crop_mode, default is top)
 
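
The flags described in this hunk can be combined in a single call; a hypothetical example, where the paths and the crop value are placeholders and the `--crop_percent` flag name is inferred from the crop description above:

```sh
# Hypothetical combined invocation: save the labeled video, write an annotation
# file, and crop 20% from the top of each frame before inference
icatcher /path/to/my/video.mp4 \
    --output_video_path /path/to/output_folder \
    --output_annotation /path/to/output_annotation_folder \
    --crop_mode top \
    --crop_percent 20
```
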

@@ -97,7 +97,6 @@ Currently we supports 3 output formats, though further formats can be added upon
 - **ui:** needed for viewing results in the web app; produces a directory of the following structure
 
 ├── decorated_frames # dir containing annotated jpg files for each frame in the video
-├── video.mp4 # the original video
 ├── labels.txt # file containing annotations in the `raw_output` format described above
 
 # Web App
