# Deploy a People Counter App at the Edge

| Details               |                   |
|-----------------------|-------------------|
| Programming Language  | Python 3.5 or 3.6 |

## What it Does

The people counter application demonstrates how to create a smart video IoT solution using Intel® hardware and software tools. The app detects people in a designated area, reporting the number of people in the current frame, the average time people spend in frame, and the total count of people.

## How it Works

The counter uses the Inference Engine included in the Intel® Distribution of OpenVINO™ Toolkit. The model used should be able to identify people in a video frame. The app counts the number of people in the current frame, the duration each person spends in the frame (time elapsed between entering and exiting), and the total count of people. It then sends this data to a local web server using the Paho MQTT Python package.
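
For illustration, here is a minimal sketch of how such stats might be published with the Paho MQTT 1.x API; the topic names, broker port, and payload fields are assumptions, not values prescribed by the project:

```
import json

import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 3002, 60)  # assumed broker port; match your Mosca config

# Publish per-frame stats to topics the UI could subscribe to (hypothetical names)
client.publish("person", json.dumps({"count": 2, "total": 15}))
client.publish("person/duration", json.dumps({"duration": 7.4}))

client.disconnect()
```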
You will choose a model to use and convert it to Intermediate Representation (IR) format with the Model Optimizer, as sketched below.
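
As a rough sketch, converting a hypothetical TensorFlow SSD model (e.g. `ssd_mobilenet_v2_coco` from the TensorFlow Object Detection API) with the 2019 R3 Model Optimizer might look like the following; the exact paths and support `.json` file depend on the model you choose:

```
python /opt/intel/openvino/deployment_tools/model_optimizer/mo_tf.py \
  --input_model frozen_inference_graph.pb \
  --tensorflow_object_detection_api_pipeline_config pipeline.config \
  --transformations_config /opt/intel/openvino/deployment_tools/model_optimizer/extensions/front/tf/ssd_v2_support.json \
  --reverse_input_channels
```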

## Requirements

### Hardware

* 6th to 10th generation Intel® Core™ processor with Iris® Pro graphics or Intel® HD Graphics
* OR use of Intel® Neural Compute Stick 2 (NCS2)
* OR Udacity classroom workspace for the related course

### Software

* Intel® Distribution of OpenVINO™ toolkit 2019 R3 release
* Node v6.17.1
* npm v3.10.10
* CMake
* MQTT Mosca server

## Setup

### Install Intel® Distribution of OpenVINO™ toolkit

Utilize the classroom workspace, or refer to the relevant instructions for your operating system for this step.

- [Linux/Ubuntu](./linux-setup.md)
- [Mac](./mac-setup.md)
- [Windows](./windows-setup.md)

### Install Node.js and its dependencies

Utilize the classroom workspace, or refer to the relevant instructions for your operating system for this step.

- [Linux/Ubuntu](./linux-setup.md)
- [Mac](./mac-setup.md)
- [Windows](./windows-setup.md)

### Install npm dependencies

There are three components that need to be running in separate terminals for this application to work:

- MQTT Mosca server
- Node.js* Web server
- FFmpeg server

From the main directory:

* For MQTT/Mosca server:
  ```
  cd webservice/server
  npm install
  ```

* For Web server:
  ```
  cd ../ui
  npm install
  ```
  **Note:** If any configuration errors occur in the Mosca server or web server while running **npm install**, use the following commands:
  ```
  sudo npm install npm -g
  rm -rf node_modules
  npm cache clean
  npm config set registry "http://registry.npmjs.org"
  npm install
  ```

## What model to use

It is up to you to decide what model to use for the application. You need to find a model that is not already converted to Intermediate Representation format (i.e., not one of the Intel® Pre-Trained Models), convert it, and utilize the converted model in your application.

Note that you may need to do additional processing of the model output to handle incorrect detections, such as adjusting the confidence threshold or tolerating the occasional 1-2 frames in which the model loses sight of a person it has already counted, which would otherwise cause double counting (see the sketch below).
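
A minimal sketch of one way to forgive brief detection dropouts; the tolerance value is an assumption to tune for your chosen model:

```
MISS_TOLERANCE = 3  # assumed number of consecutive missed frames to forgive


class PresenceTracker:
    """Tracks whether a person is present, forgiving brief detection gaps."""

    def __init__(self):
        self.missed = 0
        self.present = False
        self.total_count = 0

    def update(self, detected):
        """Call once per frame with the detector's boolean result."""
        if detected:
            self.missed = 0
            if not self.present:   # new entry: count the person once
                self.present = True
                self.total_count += 1
        else:
            self.missed += 1
            if self.present and self.missed > MISS_TOLERANCE:
                self.present = False   # the person has genuinely left
```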

**If you are otherwise unable to find a suitable model after attempting and successfully converting at least three other models**, you can document in your write-up what the models were, how you converted them, and why they failed, and then utilize any of the Intel® Pre-Trained Models that may perform better.

## Run the application

From the main directory:

### Step 1 - Start the Mosca server

```
cd webservice/server/node-server
node ./server.js
```

If successful, you should see the following message:
```
Mosca server started.
```

### Step 2 - Start the GUI

Open a new terminal and run the following commands:
```
cd webservice/ui
npm run dev
```

You should see the following message in the terminal:
```
webpack: Compiled successfully
```

### Step 3 - Start the FFmpeg Server

Open a new terminal and run the following command:
```
sudo ffserver -f ./ffmpeg/server.conf
```

### Step 4 - Run the code

Open a new terminal to run the code.

#### Set up the environment

You must configure the environment to use the Intel® Distribution of OpenVINO™ toolkit one time per session by running the following command:
```
source /opt/intel/openvino/bin/setupvars.sh -pyver 3.5
```

You should also be able to run the application with Python 3.6 (pass `-pyver 3.6` instead), although newer versions of Python will not work with the app.

#### Running on the CPU

When running Intel® Distribution of OpenVINO™ toolkit Python applications on the CPU, the CPU extension library is required. This can be found at:

```
/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/
```

*Depending on whether you are using Linux or Mac, the filename will be either `libcpu_extension_sse4.so` or `libcpu_extension.dylib`, respectively.* (On Linux the filename may differ if you are using an AVX architecture, e.g. `libcpu_extension_avx2.so`.)
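
Inside the app's Python code, loading this extension with the 2019 R3 Inference Engine API might look like the following sketch; the model paths are placeholders for your converted IR files:

```
from openvino.inference_engine import IECore, IENetwork

CPU_EXTENSION = ("/opt/intel/openvino/deployment_tools/inference_engine/"
                 "lib/intel64/libcpu_extension_sse4.so")

ie = IECore()
ie.add_extension(CPU_EXTENSION, "CPU")  # register CPU kernels for unsupported layers

# Load the converted IR (placeholder paths)
net = IENetwork(model="your-model.xml", weights="your-model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")
```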

Though the application runs on the CPU by default, the device can also be specified explicitly with the `-d CPU` command-line argument:

```
python main.py -i resources/Pedestrian_Detect_2_1_1.mp4 -m your-model.xml -l /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so -d CPU -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```
If you are in the classroom workspace, use the “Open App” button to view the output. If working locally, open [http://0.0.0.0:3004](http://0.0.0.0:3004/) in a browser to see the output on a web-based interface.

#### Running on the Intel® Neural Compute Stick

To run on the Intel® Neural Compute Stick, use the `-d MYRIAD` command-line argument:

```
python3.5 main.py -d MYRIAD -i resources/Pedestrian_Detect_2_1_1.mp4 -m your-model.xml -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```

To see the output on a web-based interface, open [http://0.0.0.0:3004](http://0.0.0.0:3004/) in a browser.

**Note:** The Intel® Neural Compute Stick can only run FP16 models at this time. The model passed to the application through the `-m <path_to_model>` command-line argument must therefore be of data type FP16 (when converting with the Model Optimizer, add the `--data_type FP16` flag to produce FP16 IR files).

#### Using a camera stream instead of a video file

To get the input video from the camera, use the `-i CAM` command-line argument, and specify the resolution of the camera with the `-video_size` argument in the ffmpeg portion of the command.

For example:
```
python main.py -i CAM -m your-model.xml -l /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so -d CPU -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```

To see the output on a web-based interface, open [http://0.0.0.0:3004](http://0.0.0.0:3004/) in a browser.

**Note:** Set the `-video_size` argument to match the resolution of your input, since it specifies the frame size that ffmpeg reads from the pipe.


## A Note on Running Locally

The servers herein are configured to utilize the Udacity classroom workspace. As such, to run on your local machine, you will need to edit the following file:

```
webservice/ui/src/constants/constants.js
```

The `CAMERA_FEED_SERVER` and `MQTT_SERVER` both use the workspace configuration.
You can change each of these as follows:

```
CAMERA_FEED_SERVER: "http://localhost:3004"
...
MQTT_SERVER: "ws://localhost:3002"
```