
Commit c770517

initial with npm install

1 parent 71af2bd

86 files changed: +24237 −1 lines

README.md

+202 −1
# Deploy a People Counter App at the Edge

| Details               |                   |
|-----------------------|-------------------|
| Programming Language: | Python 3.5 or 3.6 |

![people-counter-python](./images/people-counter-image.png)

## What it Does

The people counter application demonstrates how to create a smart video IoT solution using Intel® hardware and software tools. The app detects people in a designated area, providing the number of people in the current frame, the average duration a person spends in the frame, and the total count of people.

## How it Works

The counter uses the Inference Engine included in the Intel® Distribution of OpenVINO™ Toolkit. The model used should be able to identify people in a video frame. The app counts the number of people in the current frame, the duration each person spends in the frame (the time elapsed between entering and exiting), and the total count of people. It then sends this data to a local web server using the Paho MQTT Python package.
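
For illustration, here is a minimal sketch of how the app might publish its statistics with Paho MQTT. The topic names (`person`, `person/duration`) and the broker's TCP port (3001) are assumptions about the app's conventions, not values confirmed by this README:

```
import json
import paho.mqtt.client as mqtt

# Connect to the local Mosca broker (the TCP port is an assumed value).
client = mqtt.Client()
client.connect("localhost", 3001, keepalive=60)

# Publish current and total counts, and a person's duration, as JSON payloads.
client.publish("person", json.dumps({"count": 2, "total": 5}))
client.publish("person/duration", json.dumps({"duration": 12.4}))
client.disconnect()
```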

You will choose a model to use and convert it with the Model Optimizer.

![architectural diagram](./images/arch_diagram.png)

## Requirements

### Hardware

* 6th to 10th generation Intel® Core™ processor with Iris® Pro graphics or Intel® HD Graphics
* OR Intel® Neural Compute Stick 2 (NCS2)
* OR the Udacity classroom workspace for the related course

### Software

* Intel® Distribution of OpenVINO™ toolkit 2019 R3 release
* Node v6.17.1
* npm v3.10.10
* CMake
* MQTT Mosca server

## Setup

### Install Intel® Distribution of OpenVINO™ toolkit

Use the classroom workspace, or refer to the relevant instructions for your operating system:

- [Linux/Ubuntu](./linux-setup.md)
- [Mac](./mac-setup.md)
- [Windows](./windows-setup.md)

### Install Node.js and its dependencies

Use the classroom workspace, or refer to the relevant instructions for your operating system:

- [Linux/Ubuntu](./linux-setup.md)
- [Mac](./mac-setup.md)
- [Windows](./windows-setup.md)

### Install npm

There are three components that need to be running in separate terminals for this application to work:

- MQTT Mosca server
- Node.js* web server
- FFmpeg server

From the main directory:

* For the MQTT/Mosca server:
```
cd webservice/server
npm install
```

* For the web server:
```
cd ../ui
npm install
```

**Note:** If any configuration errors occur in the Mosca server or the web server while running `npm install`, use the commands below:
```
sudo npm install npm -g
rm -rf node_modules
npm cache clean
npm config set registry "http://registry.npmjs.org"
npm install
```

## What model to use

It is up to you to decide which model to use for the application. You need to find a model not already converted to Intermediate Representation format (i.e., not one of the Intel® Pre-Trained Models), convert it, and use the converted model in your application.
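
For illustration, a public TensorFlow detection model such as SSD MobileNet V2 might be converted along these lines. This is a sketch assuming the OpenVINO 2019 R3 layout and a TensorFlow Object Detection API model; the file names and support config are placeholders for whatever model you choose. Add `--data_type FP16` if you plan to run on the NCS2 (see below):

```
cd /opt/intel/openvino/deployment_tools/model_optimizer
python mo_tf.py \
  --input_model frozen_inference_graph.pb \
  --tensorflow_object_detection_api_pipeline_config pipeline.config \
  --transformations_config extensions/front/tf/ssd_v2_support.json \
  --reverse_input_channels
```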

Note that you may need to do additional processing of the output to handle incorrect detections: for example, adjusting the confidence threshold, or accounting for the 1-2 frames where the model momentarily fails to see a person it has already counted and would otherwise double count. A sketch of such smoothing logic follows.
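
In this sketch, the threshold and frame-gap tolerance values are illustrative assumptions, not values from this repository:

```
# THRESHOLD and LOST_FRAME_TOLERANCE are assumed values; tune them per model.
THRESHOLD = 0.6            # minimum detection confidence to count a person
LOST_FRAME_TOLERANCE = 2   # frames a person may vanish without being re-counted

def update_count(confidences, state):
    """confidences: detection scores for the current frame.
    state: dict {"last_count", "total", "lost_frames"} carried across frames."""
    current = sum(1 for c in confidences if c >= THRESHOLD)
    if current > state["last_count"]:
        # A new person entered the frame.
        state["total"] += current - state["last_count"]
        state["last_count"] = current
        state["lost_frames"] = 0
    elif current < state["last_count"]:
        # Accept the drop only after several consecutive missed frames,
        # so a momentary failed detection does not cause a double count.
        state["lost_frames"] += 1
        if state["lost_frames"] > LOST_FRAME_TOLERANCE:
            state["last_count"] = current
            state["lost_frames"] = 0
    else:
        state["lost_frames"] = 0
    return state

# Usage: state = {"last_count": 0, "total": 0, "lost_frames": 0}
# then call update_count(frame_confidences, state) once per frame.
```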

**If you are otherwise unable to find a suitable model after attempting, and successfully converting, at least three other models**, you can document in your write-up what the models were, how you converted them, and why they failed, and then use any of the Intel® Pre-Trained Models that may perform better.

## Run the application

From the main directory:

### Step 1 - Start the Mosca server

```
cd webservice/server/node-server
node ./server.js
```

You should see the following message if successful:

```
Mosca server started.
```
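
To sanity-check the broker, one option is a tiny Paho subscriber run in another terminal. This is a sketch; the topic names and TCP port 3001 are assumptions about the app's conventions:

```
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected to Mosca with result code", rc)
    client.subscribe([("person", 0), ("person/duration", 0)])

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 3001, keepalive=60)
client.loop_forever()
```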

### Step 2 - Start the GUI

Open a new terminal and run the commands below:

```
cd webservice/ui
npm run dev
```

You should see the following message in the terminal:

```
webpack: Compiled successfully
```

### Step 3 - FFmpeg Server

Open a new terminal and run the command below:

```
sudo ffserver -f ./ffmpeg/server.conf
```

### Step 4 - Run the code

Open a new terminal to run the code.

#### Set up the environment

You must configure the environment to use the Intel® Distribution of OpenVINO™ toolkit once per session by running the following command:

```
source /opt/intel/openvino/bin/setupvars.sh -pyver 3.5
```

You should also be able to run the application with Python 3.6; however, newer versions of Python will not work with the app.

#### Running on the CPU

When running Intel® Distribution of OpenVINO™ toolkit Python applications on the CPU, the CPU extension library is required. It can be found at:

```
/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/
```

*Depending on whether you are using Linux or Mac, the filename will be either `libcpu_extension_sse4.so` or `libcpu_extension.dylib`, respectively.* (The Linux filename may differ if you are using an AVX architecture.)

Though the application runs on the CPU by default, this can also be specified explicitly with the `-d CPU` command-line argument:

```
python main.py -i resources/Pedestrian_Detect_2_1_1.mp4 -m your-model.xml -l /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so -d CPU -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```

If you are in the classroom workspace, use the "Open App" button to view the output. If working locally, open [http://0.0.0.0:3004](http://0.0.0.0:3004/) in a browser to see the output on a web-based interface.
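
The pipe in the command above works because the app writes raw BGR frames to standard output, which ffmpeg re-encodes and pushes to the ffserver feed. A minimal sketch of that output step, assuming OpenCV frames (the resize target must match ffmpeg's `-video_size`):

```
import sys
import cv2
import numpy as np

# Placeholder frame; in main.py this would come from the video capture.
frame = np.zeros((432, 768, 3), dtype=np.uint8)

frame = cv2.resize(frame, (768, 432))     # must match -video_size 768x432
sys.stdout.buffer.write(frame.tobytes())  # raw bgr24 bytes, as ffmpeg expects
sys.stdout.flush()
```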

#### Running on the Intel® Neural Compute Stick

To run on the Intel® Neural Compute Stick, use the `-d MYRIAD` command-line argument:

```
python3.5 main.py -d MYRIAD -i resources/Pedestrian_Detect_2_1_1.mp4 -m your-model.xml -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```

To see the output on a web-based interface, open [http://0.0.0.0:3004](http://0.0.0.0:3004/) in a browser.

**Note:** The Intel® Neural Compute Stick can only run FP16 models at this time. The model passed to the application through the `-m <path_to_model>` command-line argument must be of data type FP16 (see the `--data_type FP16` flag mentioned in the conversion example above).

#### Using a camera stream instead of a video file

To use a camera stream as input, pass the `-i CAM` command-line argument, and specify the camera's resolution with ffmpeg's `-video_size` argument.

For example:
```
python main.py -i CAM -m your-model.xml -l /opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/libcpu_extension_sse4.so -d CPU -pt 0.6 | ffmpeg -v warning -f rawvideo -pixel_format bgr24 -video_size 768x432 -framerate 24 -i - http://0.0.0.0:3004/fac.ffm
```

To see the output on a web-based interface, open [http://0.0.0.0:3004](http://0.0.0.0:3004/) in a browser.

**Note:** `-video_size` must match the resolution of the input video, image, or camera stream.

## A Note on Running Locally

The servers here are configured for the Udacity classroom workspace. To run on your local machine, you will need to change the following file:

```
webservice/ui/src/constants/constants.js
```

The `CAMERA_FEED_SERVER` and `MQTT_SERVER` constants both use the workspace configuration. You can change each of them as follows:

```
CAMERA_FEED_SERVER: "http://localhost:3004"
...
MQTT_SERVER: "ws://localhost:3002"
```

WRITEUP.md

+59

# Project Write-Up

You can use this document as a template for providing your project write-up. However, if you have a different format you prefer, feel free to use it as long as you answer all required questions.

## Explaining Custom Layers

The process behind converting custom layers involves...

Some of the potential reasons for handling custom layers are...

## Comparing Model Performance

My method(s) to compare models before and after conversion to Intermediate Representations were...

The difference between model accuracy pre- and post-conversion was...

The size of the model pre- and post-conversion was...

The inference time of the model pre- and post-conversion was...

## Assess Model Use Cases

Some of the potential use cases of the people counter app are...

Each of these use cases would be useful because...

## Assess Effects on End User Needs

Lighting, model accuracy, and camera focal length/image size have different effects on a deployed edge model. The potential effects of each of these are as follows...

## Model Research

[This heading is only required if a suitable model was not found after trying out at least three different models. However, you may also use this heading to detail how you converted a successful model.]

In investigating potential people counter models, I tried each of the following three models:

- Model 1: [Name]
  - [Model Source]
  - I converted the model to an Intermediate Representation with the following arguments...
  - The model was insufficient for the app because...
  - I tried to improve the model for the app by...

- Model 2: [Name]
  - [Model Source]
  - I converted the model to an Intermediate Representation with the following arguments...
  - The model was insufficient for the app because...
  - I tried to improve the model for the app by...

- Model 3: [Name]
  - [Model Source]
  - I converted the model to an Intermediate Representation with the following arguments...
  - The model was insufficient for the app because...
  - I tried to improve the model for the app by...

fac.ffm

4 KB
Binary file not shown.

ffmpeg/server.conf

+46

# Server
HTTPPort 3004
HTTPBindAddress 0.0.0.0
MaxHTTPConnections 200
MaxClients 100
MaxBandwidth 54000
CustomLog -

# Feed/Raw video
<Feed fac.ffm>
File fac.ffm
FileMaxSize 16M
ACL allow 127.0.0.1
</Feed>

# Stream
<Stream facstream.mjpeg>
Feed fac.ffm
Format mpjpeg
VideoBitRate 8192
VideoBufferSize 8192
VideoFrameRate 25
VideoSize hd480
#VideoQMin 2
#VideoQMax 8
NoAudio
Strict -1

ACL allow 192.168.0.0 192.168.255.255
ACL allow localhost
ACL allow 127.0.0.1
</Stream>

# Special streams
# Server status
<Stream stat.html>
Format status
ACL allow localhost
ACL allow 127.0.0.1
ACL allow 192.168.0.0 192.168.255.255
</Stream>

# Redirect index.html to the appropriate site
<Redirect index.html>
URL http://www.github.com/intel-iot-devkit
</Redirect>

images/arch_diagram.png

300 KB

images/jupy1.png

49.9 KB

images/jupy2.png

89.1 KB

images/people-counter-image.png

224 KB

inference.py

+67

#!/usr/bin/env python3
"""
Copyright (c) 2018 Intel Corporation.

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""

import os
import sys
import logging as log
from openvino.inference_engine import IENetwork, IECore


class Network:
    """
    Load and configure inference plugins for the specified target devices,
    and perform synchronous and asynchronous inference for the specified
    infer requests.
    """

    def __init__(self):
        ### TODO: Initialize any class variables desired ###
        # A body with only comments is a syntax error, so initialize
        # placeholders (these attribute names are suggestions).
        self.plugin = None
        self.network = None
        self.input_blob = None
        self.output_blob = None
        self.exec_network = None

    def load_model(self):
        ### TODO: Load the model ###
        ### TODO: Check for supported layers ###
        ### TODO: Add any necessary extensions ###
        ### TODO: Return the loaded inference plugin ###
        ### Note: You may need to update the function parameters. ###
        return

    def get_input_shape(self):
        ### TODO: Return the shape of the input layer ###
        return

    def exec_net(self):
        ### TODO: Start an asynchronous request ###
        ### TODO: Return any necessary information ###
        ### Note: You may need to update the function parameters. ###
        return

    def wait(self):
        ### TODO: Wait for the request to be complete. ###
        ### TODO: Return any necessary information ###
        ### Note: You may need to update the function parameters. ###
        return

    def get_output(self):
        ### TODO: Extract and return the output results ###
        ### Note: You may need to update the function parameters. ###
        return
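
For reference, a sketch of how `load_model` might eventually be filled in with the 2019 R3 Inference Engine API. The parameter names and the layer check are assumptions about one reasonable implementation, not the project's required one:

```
def load_model(self, model_xml, device="CPU", cpu_extension=None):
    """Load the IR into an IECore plugin and note the I/O blob names."""
    self.plugin = IECore()
    model_bin = model_xml.replace(".xml", ".bin")
    self.network = IENetwork(model=model_xml, weights=model_bin)

    # Add the CPU extension cited in the README, if one was supplied.
    if cpu_extension and "CPU" in device:
        self.plugin.add_extension(cpu_extension, "CPU")

    # Abort if the device cannot run every layer in the network.
    supported = self.plugin.query_network(self.network, device)
    unsupported = [l for l in self.network.layers if l not in supported]
    if unsupported:
        sys.exit("Unsupported layers found: {}".format(unsupported))

    self.exec_network = self.plugin.load_network(self.network, device)
    self.input_blob = next(iter(self.network.inputs))
    self.output_blob = next(iter(self.network.outputs))
    return self.exec_network
```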
