This project processes Zoom video recordings and outputs the sentiment and identity of each individual in the video over time.
NOTE: the notebooks folder contains the notebooks created during experimentation with different model settings.
The following explains how to set up the project locally.
Clone this repo onto your system. Create a Python environment with Python 3.9.16, then install the libraries listed in the requirements.txt file using the following commands:
- Clone the repo
git clone https://github.com/Sawaiz8/Real-time-facial-sentiment-analysis.git
- Install pip packages
pip install -r requirements.txt
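The environment step above can be done with the standard `venv` module; a minimal sketch, assuming a `python3.9` interpreter is already installed and on your PATH:

```shell
# Create an isolated environment on the Python 3.9 line (assumes python3.9 is on PATH)
python3.9 -m venv .venv
# Activate it (on Windows use: .venv\Scripts\activate)
source .venv/bin/activate
# Install the project's pinned dependencies into the environment
pip install -r requirements.txt
```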
- Run the model

python3 main.py --video_path experimentation_videos/videoplayback.mp4 --output_path model_output --show_video 0

Arguments:

--video_path: path to the input video \
--output_path: path to the output folder \
--show_video: if set to 1, the processed video stream is displayed; if set to 0, the stream is saved to a video file instead. Defaults to 1.
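These flags would typically be wired up with `argparse`; the sketch below is a hypothetical reconstruction of the CLI (the actual parser in main.py may differ in names and defaults):

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of main.py's command-line interface.
    parser = argparse.ArgumentParser(
        description="Real-time facial sentiment analysis on a Zoom recording"
    )
    parser.add_argument("--video_path", required=True,
                        help="Path to the input video")
    parser.add_argument("--output_path", required=True,
                        help="Folder for the processed output")
    parser.add_argument("--show_video", type=int, choices=[0, 1], default=1,
                        help="1 = display the processed stream, 0 = save it to a file")
    return parser

# Parse the example invocation from the README
args = build_parser().parse_args([
    "--video_path", "experimentation_videos/videoplayback.mp4",
    "--output_path", "model_output",
    "--show_video", "0",
])
print(args.show_video)  # → 0
```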
Distributed under the MIT License. See LICENSE.txt for more information.