6 changes: 6 additions & 0 deletions .codespellrc
@@ -0,0 +1,6 @@
[codespell]
# Ref: https://github.com/codespell-project/codespell#using-a-config-file
skip = .git*,.codespellrc,FSharp.Core.xml
check-hidden = true
# ignore-regex =
ignore-words-list = nd,ans
25 changes: 25 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,25 @@
# Codespell configuration is within .codespellrc
---
name: Codespell

on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

permissions:
  contents: read

jobs:
  codespell:
    name: Check for spelling errors
    runs-on: ubuntu-latest

    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Annotate locations with typos
        uses: codespell-project/codespell-problem-matcher@v1
      - name: Codespell
        uses: codespell-project/actions-codespell@v2
6 changes: 3 additions & 3 deletions README.md
@@ -52,7 +52,7 @@ cd dannce
Then you should be ready to try the quickstart demo! \
These installation steps were tested with Anaconda releases 4.7.12 and 2020.02, although we expect it to work for most conda installations. After installing Anaconda, and presuming there are no issues with GPU drivers, the installation should take less than 5 minutes.

-A note on the PyTorch requirment.
+A note on the PyTorch requirement.
PyTorch is not required, but 3D volume generation is significantly faster when using PyTorch than with TensorFlow or NumPy. To use TensorFlow only, without having to install the PyTorch package, simply toggle the `predict_mode` field in the DANNCE configuration files to `tf`. To use NumPy volume generation (slowest), change `predict_mode` to `None`.


@@ -149,7 +149,7 @@ DANNCE requires a parent video directory with *n* sub-directories, one for each
`DANNCE` uses two configuration files and one data file.

- *main config*, e.g. `configs/*.yaml`. This file defines data and model hyperparameters. It can be reused across experiments.
-- *io config*, e.g. `demo/markerless_mouse_1/io.yaml`. This file defines input data and ouput directories. It is used for a single experiment.
+- *io config*, e.g. `demo/markerless_mouse_1/io.yaml`. This file defines input data and output directories. It is used for a single experiment.
- *dannce.mat*, e.g. `demo/markerless_mouse_1/label3d_dannce.mat`. This file contains three cell arrays of matlab structures. `params` stores the camera parameters for each camera. `sync` stores a vector that synchronizes all cameras. `labelData` stores the frame identities and 3d labels for hand-labeled frames. This file can be produced automatically with `Label3D.exportDannce()`.

**camera calibration parameters**.
@@ -160,7 +160,7 @@ A properly formatted calibration struct has the following fields, `['R','t','K',
**synchronization files**.
DANNCE requires a set of sync structs, one for each camera, which define frame synchrony across the different cameras over time. If you know your cameras are reliably synchronized at all times (e.g. via hardware triggering), these files can be generated with the aid of `dannce/utils/makeSyncFiles.py`. Once your video directories are set up correctly, sync files can get generated by running `python dannce/utils/makeSyncFiles.py {path_to_videos} {acquisition_frame_rate} {number_tracked_landmarks}`, where {.} denotes variables you must replace with relevant values. See the `makeSyncFiles.py` docstring for more information.

-If your cameras are not natively synchronized, but you can collect timestaps for each frame, sync files should be generated by `dannce/utils/preprocess_data.m`, which will generate sync files from a properly formatted `.mat` file listing the frameID for each camera at each timepoint. See `/dannce/utils/example_matchedframs.mat` file for how these timestamp data should be formatted before running `preprocess_data.m`.
+If your cameras are not natively synchronized, but you can collect timestamps for each frame, sync files should be generated by `dannce/utils/preprocess_data.m`, which will generate sync files from a properly formatted `.mat` file listing the frameID for each camera at each timepoint. See `/dannce/utils/example_matchedframs.mat` file for how these timestamp data should be formatted before running `preprocess_data.m`.

## Hand-Labeling
For fine-tuning DANNCE to work with your animal and system, we developed a labeling GUI, which can be found in a separate repo: https://github.com/diegoaldarondo/Label3D. The `Label3D` repository should be cloned with DANNCE automatically as a submodule when using `git clone --recursive https://github.com/spoonsso/dannce` When labeling is completed, the labels can be used to train DANNCE and the COMfinder network (see below) after converting the Label3D files to DANNCE format using `Label3D.exportDannce()`.
6 changes: 3 additions & 3 deletions calibration/acquire_calibration_3cam_mouse_clean.m
@@ -56,7 +56,7 @@
%preview(vid{1});


-%% get matched frames from the cameras wth
+%% get matched frames from the cameras with
%% STEP 1- CHECKERBOARD
fprintf('starting in 10 s')
pause(10)
Expand All @@ -73,7 +73,7 @@
pause(0.5)
end

-%% acquire and save postion of markers
+%% acquire and save position of markers
%% STEP 2 (Put L-frame in arena first)

input('Hit enter when the grid is in the arena \n')
@@ -207,7 +207,7 @@
worldLocation = cell(1,numcams);
rotationMatrix = cell(1,numcams);
translationVector = cell(1,numcams);
-%% get the orienation and location of cameras
+%% get the orientation and location of cameras
for kk = [1:numcams]%:numcams

[worldOrientation{kk},worldLocation{kk}] = estimateWorldCameraPose(double(point_coordinates{kk}),...
@@ -65,7 +65,7 @@
%% Use selected points to calculate camera extrinsics
for kk = 1:numel(lframe)
%kk
-% Do grid search over parms
+% Do grid search over params
curr_err = 1e10;
c_save = 0;
mr_save = 0;
4 changes: 2 additions & 2 deletions configs/dannce_rig_dannce_config.yaml
@@ -86,10 +86,10 @@ depth: False

immode: 'vid'

-# DANNCE training option. Whetehr to turn on rotation augmentation during training
+# DANNCE training option. Whether to turn on rotation augmentation during training
rotate: True

-# If true, intializes an "AVG" version of the network (i.e. final spatial expected value output layer). If false, "MAX" version
+# If true, initializes an "AVG" version of the network (i.e. final spatial expected value output layer). If false, "MAX" version
expval: True


2 changes: 1 addition & 1 deletion dannce/cli.py
@@ -261,7 +261,7 @@ def add_shared_train_args(
"--data-split-seed",
dest="data_split_seed",
type=int,
-help="Integer seed for the random numebr generator controlling train/test data splits",
+help="Integer seed for the random number generator controlling train/test data splits",
)
parser.add_argument(
"--valid-exp",
2 changes: 1 addition & 1 deletion dannce/engine/generator.py
@@ -1066,7 +1066,7 @@ def pj_grid_mirror(self, X_grid, camname, ID, experimentID, thisim):

def pj_grid_post(self, X_grid, camname, ID, experimentID,
com, com_precrop, thisim):
-# separate the porjection and sampling into its own function so that
+# separate the projection and sampling into its own function so that
# when mirror == True, this can be called directly
if self.crop_im:
if self.torch.all(self.torch.isnan(com)):
6 changes: 3 additions & 3 deletions dannce/engine/generator_aux.py
@@ -142,7 +142,7 @@ def __data_generation(self, list_IDs_temp):
)

# We'll need to transpose this later such that channels are last,
-# but initializaing the array this ways gives us
+# but initializing the array this ways gives us
# more flexibility in terms of user-defined array sizes\
if self.labelmode == "prob":
y = np.empty(
@@ -244,7 +244,7 @@ def __data_generation(self, list_IDs_temp):
y = np.transpose(y, [0, 2, 3, 1])

if self.mirror:
-# separate the batches from the cameras, and use the cameras as the numebr of channels
+# separate the batches from the cameras, and use the cameras as the number of channels
# to make a single-shot multi-target prediction from a single image
y = np.reshape(y, (self.batch_size, len(self.camnames[0]), y.shape[1], y.shape[2]))
y = np.transpose(y, [0, 2, 3, 1])
@@ -393,7 +393,7 @@ def __data_generation(self, list_IDs_temp):
)

# We'll need to transpose this later such that channels are last,
-# but initializaing the array this ways gives us
+# but initializing the array this ways gives us
# more flexibility in terms of user-defined array sizes\
if self.labelmode == "prob":
y = np.empty(
12 changes: 6 additions & 6 deletions dannce/engine/inference.py
@@ -373,10 +373,10 @@ def triangulate_single_instance(
"""Triangulate for a single instance.

Args:
-n_cams (int): Numver of cameras
+n_cams (int): Number of cameras
sample_id (Text): Sample identifier.
params (Dict): Parameters dictionary.
-camera_mats (Dict): Camera matrices dictioanry.
+camera_mats (Dict): Camera matrices dictionary.
save_data (Dict): Saved data dictionary.

No Longer Returned:
@@ -409,10 +409,10 @@ def triangulate_multi_instance_multi_channel(
"""Triangulate for multi-instance multi-channel.

Args:
-n_cams (int): Numver of cameras
+n_cams (int): Number of cameras
sample_id (Text): Sample identifier.
params (Dict): Parameters dictionary.
-camera_mats (Dict): Camera matrices dictioanry.
+camera_mats (Dict): Camera matrices dictionary.
save_data (Dict): Saved data dictionary.

No Longer Returned:
@@ -467,10 +467,10 @@ def triangulate_multi_instance_single_channel(
"""Triangulate for multi-instance single-channel.

Args:
-n_cams (int): Numver of cameras
+n_cams (int): Number of cameras
sample_id (Text): Sample identifier.
params (Dict): Parameters dictionary.
-camera_mats (Dict): Camera matrices dictioanry.
+camera_mats (Dict): Camera matrices dictionary.
cameras (Dict): Camera dictionary.
save_data (Dict): Saved data dictionary.

4 changes: 2 additions & 2 deletions dannce/engine/losses.py
@@ -19,7 +19,7 @@ def mask_nan(y_true, y_pred):


def mask_nan_keep_loss(y_true, y_pred):
-"""Mask out nan values in the calulation of MSE."""
+"""Mask out nan values in the calculation of MSE."""
y_pred, y_true, num_notnan = mask_nan(y_true, y_pred)
loss = K.sum((K.flatten(y_pred) - K.flatten(y_true)) ** 2) / num_notnan
return tf.where(~tf.math.is_nan(loss), loss, 0)
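As an aside for reviewers, the NaN-masked MSE in `mask_nan_keep_loss` can be sketched without Keras; this is a standalone NumPy version for illustration only (the `_np` name is hypothetical, not part of this PR):

```python
import numpy as np

def mask_nan_keep_loss_np(y_true, y_pred):
    # Keep only entries where the label is not NaN, average the squared
    # error over the survivors, and fall back to 0 when every entry is
    # NaN (mirroring the tf.where guard in the hunk above).
    mask = ~np.isnan(y_true)
    num_notnan = mask.sum()
    if num_notnan == 0:
        return 0.0
    diff = y_pred[mask] - y_true[mask]
    return float(np.sum(diff**2) / num_notnan)
```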
@@ -53,7 +53,7 @@ def metric_dist_max(y_true, y_pred):
"""Get distance between the (row, col) indices of each maximum.

y_true and y_pred are image-sized confidence maps.
-Let's get the (row, col) indicies of each maximum and calculate the
+Let's get the (row, col) indices of each maximum and calculate the
distance between the two
"""
x = K.reshape(y_true, [K.int_shape(y_true)[0], -1])
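Similarly, the `metric_dist_max` docstring describes comparing the argmax locations of two confidence maps; a NumPy sketch of that idea (function name and map shapes are assumptions, not repo code):

```python
import numpy as np

def metric_dist_max_np(y_true, y_pred):
    # Find the (row, col) index of each map's maximum, then return
    # the Euclidean distance between the two locations.
    r1, c1 = np.unravel_index(np.argmax(y_true), y_true.shape)
    r2, c2 = np.unravel_index(np.argmax(y_pred), y_pred.shape)
    return float(np.hypot(float(r1) - float(r2), float(c1) - float(c2)))
```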
14 changes: 7 additions & 7 deletions dannce/engine/ops.py
@@ -18,7 +18,7 @@ def camera_matrix(K: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
"""Derive the camera matrix.

Derive the camera matrix from the camera intrinsic matrix (K),
-and the extrinsic rotation matric (R), and extrinsic
+and the extrinsic rotation matrix (R), and extrinsic
translation vector (t).

Note that this uses the matlab convention, such that
@@ -33,7 +33,7 @@ def project_to2d(
"""Project 3d points to 2d.

Projects a set of 3-D points, pts, into 2-D using the camera intrinsic
-matrix (K), and the extrinsic rotation matric (R), and extrinsic
+matrix (K), and the extrinsic rotation matrix (R), and extrinsic
translation vector (t). Note that this uses the matlab
convention, such that
M = [R;t] * K, and pts2d = pts3d * M
@@ -50,7 +50,7 @@ def project_to2d_torch(pts, M: np.ndarray, device: Text) -> torch.Tensor:
"""Project 3d points to 2d.

Projects a set of 3-D points, pts, into 2-D using the camera intrinsic
-matrix (K), and the extrinsic rotation matric (R), and extrinsic
+matrix (K), and the extrinsic rotation matrix (R), and extrinsic
translation vector (t). Note that this uses the matlab
convention, such that
M = [R;t] * K, and pts2d = pts3d * M
@@ -71,7 +71,7 @@ def project_to2d_tf(projPts, M):
"""Project 3d points to 2d.

Projects a set of 3-D points, pts, into 2-D using the camera intrinsic
-matrix (K), and the extrinsic rotation matric (R), and extrinsic
+matrix (K), and the extrinsic rotation matrix (R), and extrinsic
translation vector (t). Note that this uses the matlab
convention, such that
M = [R;t] * K, and pts2d = pts3d * M
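The MATLAB-style convention repeated in these docstrings (`M = [R;t] * K`, with row-vector points) can be made concrete; a minimal NumPy sketch under that stated convention, not the repo's implementation:

```python
import numpy as np

def camera_matrix(K, R, t):
    # MATLAB convention: stack R (3x3) over t (1x3), then multiply by K,
    # giving a 4x3 matrix M such that pts2d_h = pts3d_h @ M.
    return np.concatenate((R, t), axis=0) @ K

def project_to2d(pts3d, M):
    # Append homogeneous 1s, project with row vectors, divide by w.
    pts_h = np.concatenate((pts3d, np.ones((pts3d.shape[0], 1))), axis=1)
    proj = pts_h @ M
    return proj[:, :2] / proj[:, 2:3]
```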
@@ -561,7 +561,7 @@ def unDistortPoints(
def triangulate(pts1, pts2, cam1, cam2):
"""Return triangulated 3- coordinates.

-Following Matlab convetion, given lists of matching points, and their
+Following Matlab convention, given lists of matching points, and their
respective camera matrices, returns the triangulated 3- coordinates.
pts1 and pts2 must be Mx2, where M is the number of points with
(x,y) positions. M 3-D points will be returned after triangulation
@@ -599,7 +599,7 @@ def triangulate_multi_instance(pts, cams):
def triangulate_multi_instance(pts, cams):
"""Return triangulated 3- coordinates.

-Following Matlab convetion, given lists of matching points, and their
+Following Matlab convention, given lists of matching points, and their
respective camera matrices, returns the triangulated 3- coordinates.
pts1 and pts2 must be Mx2, where M is the number of points with
(x,y) positions. M 3-D points will be returned after triangulation
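The two-view triangulation these docstrings describe can be sketched with a standard DLT solve; a hypothetical single-point version (transposing the 4x3 row-vector-convention matrices into the usual 3x4 form), not the repo's code:

```python
import numpy as np

def triangulate_point(pt1, pt2, M1, M2):
    # M1, M2: 4x3 camera matrices in the row-vector convention above.
    # Transpose to the usual 3x4 column-vector form, then solve the
    # homogeneous DLT system A X = 0 for the 3-D point via SVD.
    P1, P2 = M1.T, M2.T
    A = np.stack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]  # null vector = last right singular vector
    return X[:3] / X[3]
```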
@@ -754,7 +754,7 @@ def proj_slice(
# x,y,z position. In my case, there should only be as many grids as there
# are samples in the mini-batch,
# but for some reason this code allows multiple 3D grids per sample.
-# the order in rows (for the last 3 cols) should be rougly like this:
+# the order in rows (for the last 3 cols) should be roughly like this:
# [batch1_grid1_allcam1samples_locs, batch1_grid1_allcam2sample_locs,
# batch1_grid1_allcam3sample_locs, batch1_grid2_allcam1samples_locs, ...]
g_val = nearest3(sample_grid, sample_idx, clip=True)
4 changes: 2 additions & 2 deletions dannce/engine/processing.py
@@ -559,7 +559,7 @@ def prepare_save_metadata(params):
the 'experiment' field
"""

-# Need to convert None to string but still want to conserve the metadat structure
+# Need to convert None to string but still want to conserve the metadata structure
# format, so we don't want to convert the whole dict to a string
meta = params.copy()

@@ -1215,7 +1215,7 @@ def savedata_tomat(

if addCOM is not None:
# We use the passed comdict to add back in the com, this is useful
-# if one wnats to bootstrap on these values for COMnet or otherwise
+# if one wants to bootstrap on these values for COMnet or otherwise
for i in range(len(sID)):
pred_out_world[i] = pred_out_world[i] + addCOM[int(sID)][:, np.newaxis]

4 changes: 2 additions & 2 deletions dannce/engine/serve_data_DANNCE.py
@@ -354,7 +354,7 @@ def prepare_COM(
elif method == "median":
com3d = np.nanmedian(com3d, axis=1)
else:
-raise Exception("Uknown 3D COM method")
+raise Exception("Unknown 3D COM method")

com3d_dict[key] = com3d
else:
@@ -397,7 +397,7 @@ def remove_samples(s, d3d, mode="clean", auxmode=None):
mode == 'SpineM' means only remove data where SpineM is missing
mode == 'liberal' means include any data that isn't *all* nan
aucmode == 'JDM52d2' removes a really bad marker period -- samples 20k to 32k
-I need to cull the samples array (as this is used to index eveyrthing else),
+I need to cull the samples array (as this is used to index everything else),
but also the
data_3d_ array that is used to for finding clusters
"""
8 changes: 4 additions & 4 deletions dannce/interface.py
@@ -590,7 +590,7 @@ def com_train(params: Dict):
)

def write_debug(trainData=True):
-"""Factoring re-used debug output code.
+"""Factoring reused debug output code.

Writes training or validation images to an output directory, together
with the ground truth COM labels and predicted COM labels, respectively.
@@ -687,7 +687,7 @@ def dannce_train(params: Dict):
params["net"] = getattr(nets, params["net"])

# Default to 6 views but a smaller number of views can be specified in the
-# DANNCE config. If the legnth of the camera files list is smaller than
+# DANNCE config. If the length of the camera files list is smaller than
# n_views, relevant lists will be duplicated in order to match n_views, if
# possible.
n_views = int(params["n_views"])
@@ -1276,7 +1276,7 @@ def dannce_predict(params: Dict):
"""Predict with dannce network

Args:
-params (Dict): Paremeters dictionary.
+params (Dict): Parameters dictionary.
"""
# Depth disabled until next release.
params["depth"] = False
Expand All @@ -1291,7 +1291,7 @@ def dannce_predict(params: Dict):
netname = params["net"]
params["net"] = getattr(nets, params["net"])
# Default to 6 views but a smaller number of views can be specified in the DANNCE config.
-# If the legnth of the camera files list is smaller than n_views, relevant lists will be
+# If the length of the camera files list is smaller than n_views, relevant lists will be
# duplicated in order to match n_views, if possible.
n_views = int(params["n_views"])

2 changes: 1 addition & 1 deletion dannce/utils/makeStructuredData.py
@@ -169,7 +169,7 @@
# faster intersections are tricky given that matched_frames are repeated multiple times
# another shortcut we are taking here is that we take a single mocap value rather than an average
# over all samples for a given frame
-# Another problem is that the mframes might jump from oen file to the next, leaving a big gap
+# Another problem is that the mframes might jump from one file to the next, leaving a big gap
# relative to predictions
dfsi = np.logical_and(dframe >= np.min(mframes), dframe <= np.max(mframes))
dframe_sub = dframe[dfsi]
4 changes: 2 additions & 2 deletions dannce/utils/makeStructuredData_DLC.py
@@ -116,7 +116,7 @@
# # The pred_dlc frame indices go 0:pred_dlc.shape[0]
# #
# # So we walk thru the dannce pred sampleIDs (we need to track these because
-# # some could be discarded due to COM error thresholding), find its matchign frame in the matched frames,
+# # some could be discarded due to COM error thresholding), find its matching frame in the matched frames,
# # then associate the pred_dlc prediction with that frame
# frames_dlc = np.arange(pred_dlc.shape[0])

Expand All @@ -129,7 +129,7 @@
# # raise Exception("Could not find sampleID in matched frames")
# raise Exception("Could not find sampleID in pred sampleIDs")

-# Make sure we onyl take sampleIDs that are also in the DANNCE predictions
+# Make sure we only take sampleIDs that are also in the DANNCE predictions
dlc["sampleID"], indies, _ = np.intersect1d(
dlc["sampleID"], pred["sampleID"], return_indices=True
)
2 changes: 1 addition & 1 deletion dannce/utils/makeSyncFiles.py
@@ -4,7 +4,7 @@

Use this script if you know your camera frames are triggered and reliably synchronized.

-If your cameras are not natively synchronized, but you can collect timestaps for each
+If your cameras are not natively synchronized, but you can collect timestamps for each
frame, MatchedFrames files should be generated by `preprocess_data.m`, together
with a formatted `.mat` file listing the frameID for each camera and each timepoint.
See `/dannce/utils/example_matchedframes.mat` file for how these timestamp data
2 changes: 1 addition & 1 deletion multi-camera-calibration/calibration_extrinsic_Lframe.m
@@ -65,7 +65,7 @@
%% Use selected points to calculate camera extrinsics
for kk = 1:numel(lframe)
%kk
-% Do grid search over parms
+% Do grid search over params
curr_err = 1e10;
c_save = 0;
mr_save = 0;
2 changes: 1 addition & 1 deletion tests/configs/base_config_temp.yaml
@@ -102,7 +102,7 @@ rotate: True
# Whether to apply lens distortion during sampling. Default True
distort: True

-# If true, intializes an "AVG" version of the network (i.e. final spatial expected value output layer). If false, "MAX" version
+# If true, initializes an "AVG" version of the network (i.e. final spatial expected value output layer). If false, "MAX" version
expval: False

# COM finder output confidence scores less than this threshold will be discarded