Video Actor Synchrony and Causality Toolkit (VASC)

Created by Caspar Addyman [email protected]

Goldsmiths, University of London, 2022

Version - 0.2

Videos of interacting humans are converted into time-series data with OpenPose. The data are processed and then various statistical measures of synchrony and causality between actors are calculated using SciPy.

This repository provides Python code and annotated Jupyter notebooks to perform these actions.
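For a flavour of the synchrony measures involved, here is a minimal sketch of a lagged cross-correlation between two actors' movement signals using scipy.signal.correlate. The data, variable names and 500-frame length are illustrative assumptions, not the toolkit's exact pipeline:

```python
import numpy as np
from scipy import signal

# Hypothetical data: two actors' frame-by-frame movement signals,
# e.g. a summary of their OpenPose keypoint displacements.
rng = np.random.default_rng(0)
actor_a = rng.normal(size=500)                                    # 500 video frames
actor_b = np.roll(actor_a, 12) + rng.normal(scale=0.5, size=500)  # lagged copy + noise

# Normalise each series so the correlation is scale-free.
a = (actor_a - actor_a.mean()) / actor_a.std()
b = (actor_b - actor_b.mean()) / actor_b.std()

# Full cross-correlation; the offset of the peak is the lag (in frames)
# at which the two actors' movements line up best.
xcorr = signal.correlate(a, b, mode="full") / len(a)
lags = signal.correlation_lags(len(a), len(b), mode="full")
print(f"peak correlation {xcorr.max():.2f} at lag {lags[np.argmax(xcorr)]} frames")
```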

  • Step 0: Getting started with this project. What to install (besides these files).
  • Step 1: Process a video (or videos) with OpenPose, creating a JSON file per frame with wireframe data for all identified persons ('actors'). Extract the frame-by-frame data from the JSON files, video by video, and combine it into a single NumPy array (see the sketch after this list).
  • Step 2: Load the NumPy array from Step 1 and perform basic validations (identifying individuals over time, tagging windows of interest, handling missing data).
  • Step 3: Perform Fourier analysis to extract rhythmic movements and compare across groups.
  • Step 4: Calculate cross-correlations, Granger causality and other measures between multiple actors in the same video. (Still in development.)
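As an illustration of what Step 1 produces, here is a minimal sketch of reading OpenPose's per-frame JSON output into a single NumPy array. The file paths and the choice of two people are hypothetical, and the 25-keypoint layout assumes OpenPose's default BODY_25 model; the notebooks implement the full version:

```python
import glob
import json
import numpy as np

# OpenPose writes one JSON file per frame. Each file holds a "people" list
# whose entries contain "pose_keypoints_2d": a flat [x, y, confidence] * 25
# array for the BODY_25 model. The path pattern below is hypothetical.
frame_files = sorted(glob.glob("output/myvideo_*_keypoints.json"))

max_people = 2          # e.g. parent and infant
n_points = 25           # BODY_25 keypoints
frames = np.zeros((len(frame_files), max_people, n_points, 3))

for f, path in enumerate(frame_files):
    with open(path) as fp:
        data = json.load(fp)
    for p, person in enumerate(data["people"][:max_people]):
        frames[f, p] = np.array(person["pose_keypoints_2d"]).reshape(n_points, 3)

# frames[f, p, k] is now (x, y, confidence) for keypoint k of person p in frame f.
print(frames.shape)
```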

Installation

To get these scripts working on a new system, you need to do the following:

Prerequisites

First you need to make sure you have supporting software installed.

  1. You need a working Python environment (Python v3.7 or higher) with support for Jupyter notebooks. The easiest way to get this is to install Anaconda.
  2. Install OpenPose.
  3. Next you need to download the trained neural-network models that OpenPose uses. To do this, go to the models subdirectory of the OpenPose directory and double-click / run the models.bat script.

Installing this code

You have two options for installing this code. Either download the contents of this repository as a zip file to your local machine, or, if you are familiar with GitHub, fork this repository and keep your own copy of the files with a version history. We recommend using the GitHub Desktop app to manage this.

Running the code

  1. From your Anaconda folder, launch an Anaconda command prompt.
  2. Create a new environment for this project from our environment.yml file using the command conda env create -f environment.yml
  3. Switch to your newly created environment with the command conda activate VASC
  4. Launch Jupyter from the command line with the command jupyter notebook or jupyter lab, or launch it by clicking the Jupyter icon within Anaconda Navigator (remember to switch environment first in the 'Applications on ...' dropdown).
  5. Open the notebook Step 0.GettingStarted.ipynb and follow the instructions in there.

Python dependencies

The main requirements for this project are found in the environment.yml file in this directory. This can be used to create a new (ana)conda environment like so:

conda env create -f environment.yml

Requirements

The main requirements are:

  • numpy, pandas, glob2, opencv
  • pyarrow, xlrd, jupytext
  • ipywidgets, ipycanvas
  • nodejs

(and their dependencies).
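For orientation, an environment.yml covering these requirements might look like the sketch below. This is an assumption based on the list above, not the repository's actual file, which is authoritative:

```yaml
# Hypothetical sketch; see the real environment.yml in this repository.
name: VASC
channels:
  - conda-forge
dependencies:
  - python>=3.7
  - numpy
  - pandas
  - glob2
  - opencv
  - pyarrow
  - xlrd
  - jupytext
  - ipywidgets
  - ipycanvas
  - nodejs
```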

DrumTutorial

The folder DrumTutorial steps you through a small example of using Fourier transforms to extract drumming tempo from a set of short videos of infants and adults drumming. It is included when you download the contents of this repository.
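To give a flavour of the tutorial's approach, here is a minimal sketch of recovering a dominant drumming tempo from a movement time series with an FFT. The signal, frame rate and variable names are illustrative assumptions; the tutorial notebooks show the full procedure:

```python
import numpy as np
from scipy import fft

fps = 30.0                      # assumed video frame rate
t = np.arange(0, 10, 1 / fps)   # 10 seconds of frames

# Hypothetical signal: vertical wrist position while drumming at ~2 Hz
# (120 beats per minute), plus noise.
wrist_y = np.sin(2 * np.pi * 2.0 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

# FFT of the de-meaned signal; rfft keeps only the positive frequencies.
spectrum = np.abs(fft.rfft(wrist_y - wrist_y.mean()))
freqs = fft.rfftfreq(t.size, d=1 / fps)

peak_hz = freqs[np.argmax(spectrum)]
print(f"dominant movement frequency: {peak_hz:.2f} Hz (~{60 * peak_hz:.0f} bpm)")
```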

Video Walkthrough

You can watch this walkthrough video.

Scientific Background

These tools were developed for a scientific project that aims to see whether parents and babies move in synchrony with each other and whether this predicts caring outcomes. The details can be found here:

Automated measurement of responsive caregiving at scale using machine learning. Royal Academy of Engineering / Global Challenges Research Fund Overview document

Funding:

This project was supported by the Royal Academy of Engineering Global Challenges Research Fund Grant: Frontiers of Development - Tranche 2 - FoDSF\1920\2\100020

Feedback

If you have any comments or questions, either contact Caspar Addyman [email protected] or submit an issue report here on GitHub.
