
Commit d141777

Committed Sep 27, 2024
Minor bug fixes and update to setup for python version.
1 parent 542d368 commit d141777

File tree: 4 files changed, +46 -27 lines changed

 

README_pypi.rst: +41 -17
@@ -9,15 +9,15 @@ Libreface
 |badge1| |badge2|


-.. |badge1| image:: https://img.shields.io/badge/version-0.0.17-blue
+.. |badge1| image:: https://img.shields.io/badge/version-0.0.19-blue
    :alt: Static Badge


 .. |badge2| image:: https://img.shields.io/badge/python-%3D%3D3.8-green
    :alt: Static Badge


-This is the python package for `LibreFace: An Open-Source Toolkit for Deep Facial Expression Analysis`_.
+This is the Python package for `LibreFace: An Open-Source Toolkit for Deep Facial Expression Analysis`_.
 LibreFace is an open-source and comprehensive toolkit for accurate and real-time facial expression analysis with both CPU and GPU acceleration versions.
 LibreFace eliminates the gap between cutting-edge research and an easy and free-to-use non-commercial toolbox. We propose to adaptively pre-train the vision encoders with various face datasets and then distill them to a lightweight ResNet-18 model in a feature-wise matching manner.
 We conduct extensive experiments of pre-training and distillation to demonstrate that our proposed pipeline achieves comparable results to state-of-the-art works while maintaining real-time efficiency.
@@ -31,7 +31,7 @@ Dependencies

 - Python==3.8
 - You should have `cmake` installed in your system.
-- **For Linux users** - :code:`sudo apt-get install cmake`. If you run into troubles, consider upgrading to the latest version (`instructions`_).
+- **For Linux users** - :code:`sudo apt-get install cmake`. If you run into trouble, consider upgrading to the latest version (`instructions`_).
 - **For Mac users** - :code:`brew install cmake`.

 .. _`instructions`: https://askubuntu.com/questions/355565/how-do-i-install-the-latest-version-of-cmake-from-the-command-line
@@ -43,7 +43,7 @@ You can install this package using `pip` from the testPyPI hub:

 .. code-block:: bash

-    python -m pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple libreface==0.0.17
+    python -m pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple libreface==0.0.19


 Usage
@@ -52,56 +52,64 @@ Usage
 Commandline
 ----------------

-You can use this package through commandline using the following command.
+You can use this package through the command line using the following command.

 .. code-block:: bash

-    libreface --input_path="path/to/your_image_or_video.jpg"
+    libreface --input_path="path/to/your_image_or_video"

-Note that the above script would save results in a CSV at the default location - :code:`sample_results.csv`. If you want to specify your own path, use the :code:`--output_path` commandline argument,
+Note that the above script would save results in a CSV at the default location - :code:`sample_results.csv`. If you want to specify your own path, use the :code:`--output_path` command line argument,

 .. code-block:: bash

-    libreface --input_path="path/to/your_image_or_video.jpg" --output_path="path/to/save_results.csv"
+    libreface --input_path="path/to/your_image_or_video" --output_path="path/to/save_results.csv"

-To change the device used for inference, use the :code:`--device` commandline argument,
+To change the device used for inference, use the :code:`--device` command line argument,

 .. code-block:: bash

-    libreface --input_path="path/to/your_image_or_video.jpg" --device="cuda:0"
+    libreface --input_path="path/to/your_image_or_video" --device="cuda:0"

 To save intermediate files, :code:`libreface` uses a temporary directory that defaults to ./tmp. To change the temporary directory path,

 .. code-block:: bash

-    libreface --input_path="path/to/your_image_or_video.jpg" --temp="your/temp/path"
+    libreface --input_path="path/to/your_image_or_video" --temp="your/temp/path"
+
+For video inference, our code processes the frames of your video in batches. You can specify the batch size and the number of workers for data loading as follows,
+
+.. code-block:: bash
+
+    libreface --input_path="path/to/your_video" --batch_size=256 --num_workers=2 --device="cuda:0"
+
+Note that by default, the :code:`--batch_size` argument is 256, and :code:`--num_workers` argument is 2. You can increase or decrease these values according to your machine's capacity.

 **Examples**

-Download a `sample image`_ from our github repository. To get the facial attributes for this image and save to a CSV file simply run,
+Download a `sample image`_ from our GitHub repository. To get the facial attributes for this image and save to a CSV file, simply run,

 .. _`sample image`: https://github.com/ihp-lab/LibreFace/blob/pypi_wrap/sample_disfa.png

 .. code-block:: bash

     libreface --input_path="sample_disfa.png"

-Download a `sample video`_ from our github repository. To run the inference on this video using a GPU and save the results to :code:`my_custom_file.csv` run the following command,
+Download a `sample video`_ from our GitHub repository. To run the inference on this video using a GPU and save the results to :code:`my_custom_file.csv` run the following command,

 .. _`sample video`: https://github.com/ihp-lab/LibreFace/blob/pypi_wrap/sample_disfa.avi

 .. code-block:: bash

     libreface --input_path="sample_disfa.avi" --output_path="my_custom_file.csv" --device="cuda:0"

-Note that for videos, each row in the saved csv file correspond to individual frames in the given video.
+Note that for videos, each row in the saved CSV file corresponds to individual frames in the given video.

 Python API
 --------------

-Here’s how to use this package in your python scripts.
+Here’s how to use this package in your Python scripts.

-To assign the results to a python variable,
+To assign the results to a Python variable,

 .. code-block:: python

@@ -130,7 +138,18 @@ To save intermediate files, libreface uses a temporary directory that defaults t

     import libreface
     libreface.get_facial_attributes(image_or_video_path,
-        temp_dir = "your/temp/path")
+        temp_dir = "your/temp/path")
+
+For video inference, our code processes the frames of your video in batches. You can specify the batch size and the number of workers for data loading as follows,
+
+.. code-block:: python
+
+    import libreface
+    libreface.get_facial_attributes(video_path,
+        batch_size = 256,
+        num_workers = 2)
+
+Note that by default, the :code:`batch_size` is 256, and :code:`num_workers` is 2. You can increase or decrease these values according to your machine's capacity.

 Downloading Model Weights
 ================================
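
Putting the Python API pieces of this hunk together, a call that exercises all of the documented keywords looks roughly like the sketch below. This is not part of the commit: the keyword names come from the README snippets above and the commandline.py wiring further down, and treating the return value as the per-frame results is an assumption.

    import libreface

    # Sketch only, not from the commit. Keyword names follow the README and
    # libreface/commandline.py; the returned results object is an assumption.
    results = libreface.get_facial_attributes(
        "sample_disfa.avi",   # image or video path
        temp_dir="./tmp",     # where intermediate files are written
        device="cuda:0",      # defaults to "cpu"
        batch_size=256,       # video inference: frames per batch
        num_workers=2)        # video inference: dataloader workers
    print(results)
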
@@ -162,6 +181,11 @@ For an image processed through LibreFace, we save the following information in t

 For a video, we save the same features for each frame in the video at index :code:`frame_idx` and timestamp :code:`frame_time_in_ms`.

+Inference Speed
+====================
+
+LibreFace is able to process long-form videos at :code:`~30 FPS`, on a machine that has a :code:`13th Gen Intel Core i9-13900K` CPU and a :code:`NVIDIA GeForce RTX 3080` GPU. Please note that the default code runs on CPU and you have to use the :code:`device` parameter for Python or the :code:`--device` command line option to specify your GPU device ("cuda:0", "cuda:1", ...).
+
 Contributing
 ============
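
The new Inference Speed section stresses that GPU use must be requested explicitly through the device parameter or the --device option. A small defensive pattern for that is sketched below; it is not from the repository, and the torch import is an assumption about the environment (only the device keyword itself is confirmed by the text above and by commandline.py further down).

    import libreface

    try:
        import torch  # assumption: PyTorch is installed alongside libreface
        device = "cuda:0" if torch.cuda.is_available() else "cpu"
    except ImportError:
        device = "cpu"  # fall back to the package default

    libreface.get_facial_attributes("sample_disfa.avi", device=device)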

libreface/__init__.py: +1 -8
@@ -61,23 +61,19 @@ def get_facial_attributes_video(video_path,
                                 num_workers = 2,
                                 weights_download_dir:str = "./weights_libreface"):
     print(f"Using device: {device} for inference...")
-    frame_extraction_start = time.time()
+
     frames_df = get_frames_from_video_ffmpeg(video_path, temp_dir=temp_dir)
     cur_video_name = ".".join(video_path.split("/")[-1].split(".")[:-1])
     aligned_frames_path_list, headpose_list, landmarks_3d_list = get_aligned_video_frames(frames_df, temp_dir=os.path.join(temp_dir, cur_video_name))
     # frames_df["aligned_frame_path"] = aligned_frames_path_list
     frames_df = frames_df.drop("path_to_frame", axis=1)
     frames_df["headpose"] = headpose_list
     frames_df["landmarks_3d"] = landmarks_3d_list
-    frame_extraction_end = time.time()
-    frame_extraction_fps = len(frames_df.index) / (frame_extraction_end - frame_extraction_start)
-    print(f"Frame extraction took a total of {(frame_extraction_end - frame_extraction_start):.3f} seconds - {frame_extraction_fps:.2f} FPS")


     frames_df = frames_df.join(pd.json_normalize(frames_df['headpose'])).drop('headpose', axis='columns')
     frames_df = frames_df.join(pd.json_normalize(frames_df['landmarks_3d'])).drop('landmarks_3d', axis='columns')

-    fac_attr_start = time.time()
     detected_aus, au_intensities, facial_expression = [], [], []

     if model_choice == "joint_au_detection_intensity_estimator":
@@ -106,9 +102,6 @@
         batch_size=batch_size,
         weights_download_dir=weights_download_dir)

-    fac_attr_end = time.time()
-    fac_attr__fps = len(frames_df.index) / (fac_attr_end - fac_attr_start)
-    print(f"Detecting facial attributes took a total of {(fac_attr_end - fac_attr_start):.3f} seconds - {fac_attr__fps:.2f} FPS")

     frames_df = frames_df.join(detected_aus)
     frames_df = frames_df.join(au_intensities)
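
With the per-stage timing prints removed above, callers who still want an FPS figure can measure around the public call instead. A minimal sketch, not part of the commit; whether get_facial_attributes_video returns the per-frame DataFrame is an assumption based on the frames_df handling visible in the hunks.

    import time

    import libreface

    start = time.time()
    # Assumed return value: the per-frame DataFrame assembled inside
    # get_facial_attributes_video (its return statement is outside the hunks).
    frames_df = libreface.get_facial_attributes_video(
        "sample_disfa.avi", device="cuda:0", batch_size=256, num_workers=2)
    elapsed = time.time() - start
    print(f"{len(frames_df.index)} frames in {elapsed:.3f}s "
          f"({len(frames_df.index) / elapsed:.2f} FPS)")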

libreface/commandline.py: +2
@@ -9,6 +9,7 @@ def main_func():
     parser.add_argument("--output_path", type=str, default="sample_results.csv", help="Path to the csv where results should be saved. Defaults to 'sample_results.csv'")
     parser.add_argument("--device", type=str, default="cpu", help="Device to use while inference. Can be 'cpu', 'cuda:0', 'cuda:1', ... Defaults to 'cpu'")
     parser.add_argument("--temp", type=str, default="./tmp", help="Path where the temporary results for facial attributes can be saved.")
+    parser.add_argument("--batch_size", type=int, default=256, help="Number of frames to process in a single batch when doing inference on a video.")
     parser.add_argument("--num_workers", type=int, default=2, help="Number of workers to be used in the dataloader while doing inference on a video.")

     args = parser.parse_args()
@@ -18,6 +19,7 @@ def main_func():
         model_choice="joint_au_detection_intensity_estimator",
         temp_dir=args.temp,
         device=args.device,
+        batch_size=args.batch_size,
         num_workers=args.num_workers)

 if __name__ == "__main__":
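
Because main_func only parses these flags and forwards them to the libreface API, the command line entry point can also be driven programmatically, for example in a smoke test. The sys.argv patching below is my own illustration rather than anything this commit adds, it assumes the --input_path flag shown in the README, and running it performs real inference.

    import sys
    from unittest import mock

    from libreface.commandline import main_func

    # Equivalent to: libreface --input_path="sample_disfa.avi" --batch_size=128
    argv = ["libreface",
            "--input_path", "sample_disfa.avi",
            "--device", "cpu",
            "--batch_size", "128",
            "--num_workers", "2"]
    with mock.patch.object(sys, "argv", argv):
        main_func()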

setup.py: +2 -2
@@ -14,7 +14,7 @@
 URL = 'https://boese0601.github.io/libreface'
 EMAIL = 'achaubey@usc.edu'
 AUTHOR = 'IHP-Lab'
-REQUIRES_PYTHON = '==3.8'
+REQUIRES_PYTHON = '>=3.8'


 # What packages are required for this module to be executed?
@@ -52,7 +52,7 @@ def list_reqs(fname='requirements_new.txt'):
 # with open(PACKAGE_DIR / 'VERSION') as f:
 #     _version = f.read().strip()

-about['__version__'] = "0.0.17"
+about['__version__'] = "0.0.19"


 # Where the magic happens:
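
The setup() call that consumes REQUIRES_PYTHON and about['__version__'] sits outside the hunks shown ("# Where the magic happens:" is its lead-in comment). For orientation, here is a self-contained sketch of the usual setuptools wiring these constants feed into; it is an assumption about this setup.py, not a copy of it.

    from setuptools import find_packages, setup

    REQUIRES_PYTHON = '>=3.8'  # relaxed from '==3.8' in this commit
    VERSION = "0.0.19"         # bumped from "0.0.17" in this commit

    setup(
        name='libreface',                # assumed distribution name
        version=VERSION,
        python_requires=REQUIRES_PYTHON,
        packages=find_packages(),
    )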

0 commit comments