This repository contains a comprehensive pipeline for 3D pose estimation and tracking using a multi-camera mirror setup. The system processes video data from multiple camera views to reconstruct 3D coordinates of animal poses, with particular focus on mouse behavior analysis in hunting scenarios.
```
3d-setup/
├── complete_pipeline/   # Main processing pipeline scripts
├── threed_utils/        # Core utility modules
├── scripts/             # Additional processing scripts
├── notebooks/           # Analysis and debugging notebooks
├── tests/               # Test files and sample data
├── build/               # Build artifacts
└── requirements.txt     # Python dependencies
```
The processing pipeline consists of four main stages:
1. **Video Cropping**
   - Scripts: `0a_define_cropping.py`, `0b_process_videos.py`
   - Purpose: Interactive cropping of multi-view videos using the Napari GUI
   - Output: Cropped video files with defined regions of interest
2. **Camera Calibration**
   - Scripts: `1_extract_checkerboards.py`, `2_calibration_multicam_script.py`
   - Purpose: Extract checkerboard patterns and perform multi-camera calibration
   - Output: Camera calibration parameters and intrinsic/extrinsic matrices
3. **Triangulation**
   - Scripts: `3_triangulation_multicam_anipose_script.py`, `3_triangulation.py`
   - Purpose: Triangulate 2D keypoints to 3D coordinates using Anipose
   - Output: 3D pose data in movement format
4. **Analysis and Visualization**
   - Tools: Various notebooks and visualization scripts
   - Purpose: Analyze 3D trajectories, generate plots, and validate results
- Purpose: Data input/output operations
- Key Functions:
  - `movement_ds_from_anipose_triangulation_df()`: Convert Anipose triangulation output to a movement dataset
  - `read_calibration_toml()`: Load camera calibration parameters
  - `write_calibration_toml()`: Save calibration parameters
- Purpose: Arena triangulation and visualization
- Key Functions:
  - `load_arena_coordinates()`: Load arena coordinate definitions
  - `load_arena_multiview_ds()`: Create a movement dataset from arena coordinates
  - `triangulate_arena_points()`: Triangulate arena reference points
- `triangulate.py`: Core triangulation functions
- `calibrate.py`: Camera calibration utilities
- `anipose_filtering_2d.py`: 2D filtering and preprocessing
- `movement_anipose.py`: Integration with the movement library
- `detection.py`: Checkerboard detection algorithms
- `calibration.py`: Camera calibration procedures
- `bundle_adjustment.py`: Bundle adjustment optimization
- `geometry.py`: Geometric transformations
- `viz.py`: Calibration visualization
- Purpose: Interactive visualization and analysis in Napari
- Features: Layer styles, loader widgets, metadata widgets
- `run_2d_filter.py`: 2D filtering pipeline
- `check_triangulation.py`: Triangulation validation
- `test_arena_triangulation.py`: Arena triangulation testing
- `run_dlc_inference.py`: DLC inference execution
- `run_dlc_training.py`: DLC model training
- `convert_labels_sleap2dlc.py`: Format conversion utilities
- `model_inference.py`: SLEAP model inference
- `sleap_training.py`: SLEAP model training
- `crop_and_inference.py`: Cropped video inference
```python
@dataclass
class CroppingOptions:
    crop_folder_pattern: str = "cropped-v2"
    expected_views: tuple[str, ...] = (
        "central", "mirror-bottom", "mirror-left", "mirror-right", "mirror-top"
    )

@dataclass
class DetectionOptions:
    board_shape: tuple[int, int] = (5, 7)
    match_score_min_diff: float = 0.15
    match_score_min: float = 0.4

@dataclass
class CalibrationOptions:
    square_size: float = 12.5
    scale_factor: float = 0.5
    n_samples_for_intrinsics: int = 100
    ftol: float = 1e-4
```

Core Requirements (`requirements.txt`):
- `numpy`: Numerical computations
- `pandas`: Data manipulation
- `napari`: Interactive visualization
- `movement`: Pose data handling
- `PyYAML`: Configuration files
- `vidio`: Video I/O
- `flammkuchen`: Data serialization
- `toml`: TOML file handling
- `aniposelib`: 3D pose estimation
- `hickle`: HDF5 serialization
0a_define_cropping.py
- Purpose: Interactive definition of cropping windows using Napari GUI
- Usage: `python 0a_define_cropping.py /path/to/movie.avi`
- Output: JSON file with FFmpeg cropping parameters
- Features:
- Interactive window selection for 5 camera views (central, mirror-top, mirror-bottom, mirror-left, mirror-right)
- Real-time preview of cropping results
- Automatic transformation filters for mirror views
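The schema of the emitted JSON file is not documented here; a purely hypothetical example with one FFmpeg crop rectangle per view (field names and values are illustrative, not the script's actual output) might look like:

```json
{
  "central":       {"width": 600, "height": 600, "x": 660, "y": 240},
  "mirror-top":    {"width": 600, "height": 300, "x": 660, "y": 0},
  "mirror-bottom": {"width": 600, "height": 300, "x": 660, "y": 840},
  "mirror-left":   {"width": 300, "height": 600, "x": 0,   "y": 240},
  "mirror-right":  {"width": 300, "height": 600, "x": 1260, "y": 240}
}
```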
0b_process_videos.py
- Purpose: Batch processing of videos using defined cropping parameters
- Usage: `python 0b_process_videos.py /path/to/videos /path/to/cropping_parameters.json`
- Features:
- Processes all AVI files in specified directory
- Skips already processed files (with a `_cropped` suffix)
- Applies view-specific transformations (mirror flips, rotations)
- Generates timestamped output folders
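To make the crop-plus-transform step concrete, here is an illustrative sketch (not the script's actual implementation) of how a per-view FFmpeg command could be assembled; the choice of flip filter per mirror view is an assumption:

```python
# Hypothetical mapping from view name to an extra FFmpeg filter; which
# mirror views need hflip vs. vflip is an assumption for illustration.
VIEW_FILTERS = {
    "central": "",
    "mirror-left": ",hflip",
    "mirror-right": ",hflip",
    "mirror-top": ",vflip",
    "mirror-bottom": ",vflip",
}

def build_crop_cmd(view: str, crop: dict, src: str, dst: str) -> list[str]:
    """Build an FFmpeg argument list that crops `src` to one view."""
    vf = "crop={width}:{height}:{x}:{y}".format(**crop) + VIEW_FILTERS[view]
    return ["ffmpeg", "-y", "-i", src, "-vf", vf, dst]

cmd = build_crop_cmd(
    "mirror-left",
    {"width": 300, "height": 600, "x": 0, "y": 240},
    "session.avi",
    "session_mirror-left_cropped.mp4",
)
```

The resulting list can be passed to `subprocess.run` once per view.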
0c_opt_check_video_integrity.py
- Purpose: Validate cropped videos against original frame counts
- Usage: `python 0c_opt_check_video_integrity.py /path/to/folder [options]`
- Features:
- Compares frame counts between original and cropped videos
- Reports mismatches, missing files, and errors
- Generates integrity reports
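The core comparison can be sketched as follows; the function and report layout are illustrative, not the script's actual API:

```python
def compare_frame_counts(original: dict[str, int],
                         cropped: dict[str, int]) -> dict:
    """Compare per-file frame counts of originals vs. cropped outputs.

    Returns a report with matching files, (name, expected, actual)
    mismatches, and files missing from the cropped set.
    """
    report = {"ok": [], "mismatch": [], "missing": []}
    for name, n_frames in original.items():
        if name not in cropped:
            report["missing"].append(name)
        elif cropped[name] != n_frames:
            report["mismatch"].append((name, n_frames, cropped[name]))
        else:
            report["ok"].append(name)
    return report

report = compare_frame_counts(
    {"a.avi": 1000, "b.avi": 1000, "c.avi": 500},
    {"a.avi": 1000, "b.avi": 998},
)
```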
1_extract_checkerboards.py
- Purpose: Extract checkerboard patterns from calibration videos
- Usage: `python 1_extract_checkerboards.py /path/to/calibration/videos [options]`
- Options:
  - `--board-shape`: Checkerboard dimensions (default: 5,7)
  - `--match-score-min`: Minimum detection confidence (default: 0.4)
  - `--video-extension`: Video file extension (default: mp4)
- Output: Detection results saved as HDF5 files
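For context on the `board_shape` convention: assuming (5, 7) counts inner corners (the usual checkerboard-detection convention) and `square_size` is in millimetres, the planar object points that calibration pairs with detected corners can be generated like this (a generic sketch, not this repository's code):

```python
def board_object_points(board_shape=(5, 7), square_size=12.5):
    """(x, y, 0) coordinates of a planar checkerboard's inner corners.

    board_shape: inner corners per row and column; square_size: edge
    length of one checkerboard square (here assumed millimetres).
    """
    cols, rows = board_shape
    return [
        (c * square_size, r * square_size, 0.0)
        for r in range(rows)
        for c in range(cols)
    ]

pts = board_object_points()
```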
2_calibration_multicam_script.py
- Purpose: Perform multi-camera calibration using checkerboard detections
- Usage: `python 2_calibration_multicam_script.py /path/to/detections`
- Features:
- Intrinsic and extrinsic camera parameter estimation
- Bundle adjustment optimization
- Calibration quality assessment
- TOML format calibration output
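The exact keys of the TOML output are not shown here; a hypothetical per-camera layout in the spirit of Anipose-style calibration files (all key names and values illustrative) might look like:

```toml
[cam_central]
name = "central"
size = [600, 600]            # image width, height in pixels
matrix = [[700.0, 0.0, 300.0], [0.0, 700.0, 300.0], [0.0, 0.0, 1.0]]
distortions = [0.0, 0.0, 0.0, 0.0, 0.0]
rotation = [0.0, 0.0, 0.0]   # Rodrigues rotation vector
translation = [0.0, 0.0, 0.0]
```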
3_triangulation_multicam_anipose_script.py
- Purpose: Triangulate 2D poses to 3D coordinates using Anipose
- Usage: `python 3_triangulation_multicam_anipose_script.py /path/to/2d/poses /path/to/calibration`
- Features:
- Multi-camera triangulation with confidence scoring
- 2D filtering and outlier detection
- 3D pose reconstruction
- Movement dataset output
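Conceptually, triangulation solves for the 3D point whose projections best match the 2D detections in every view. A minimal direct-linear-transform (DLT) sketch, independent of Anipose's actual implementation:

```python
import numpy as np

def triangulate_dlt(proj_mats, points_2d):
    """Triangulate one 3D point from two or more views via the DLT.

    proj_mats: list of 3x4 camera projection matrices.
    points_2d: matching list of (u, v) pixel coordinates.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: u*(P3.X) = P1.X and v*(P3.X) = P2.X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]                 # null-space vector = homogeneous solution
    return X[:3] / X[3]        # dehomogenize

# Two toy cameras: one at the origin, one shifted 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate_dlt([P1, P2], [(0.0, 0.0), (-0.2, 0.0)])
# point recovers the true 3D location (0, 0, 5)
```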
3_triangulation.py
- Purpose: Simplified triangulation workflow
- Usage: `python 3_triangulation.py`
- Features: Streamlined triangulation for testing and development
run_dlc_inference.py
- Purpose: Run DeepLabCut inference on video folders
- Usage: `python run_dlc_inference.py config.yaml /path/to/videos [options]`
- Options:
  - `--make-labeled-video`: Generate labeled output videos
  - `--shuffle-n`: Shuffle index (default: 2)
  - `--batch-size`: Inference batch size (default: 2)
run_dlc_training.py
- Purpose: Train DeepLabCut models
- Usage: `python run_dlc_training.py config.yaml`
- Features: Automated model training pipeline
convert_labels_sleap2dlc.py
- Purpose: Convert SLEAP labels to DeepLabCut format
- Usage: `python convert_labels_sleap2dlc.py /path/to/sleap/labels /path/to/output`
model_inference.py
- Purpose: Run SLEAP model inference on videos
- Usage: `python model_inference.py /path/to/videos`
- Features:
- Automatic video discovery
- Batch processing with progress tracking
- Side view and bottom view model support
sleap_training.py
- Purpose: Train SLEAP models
- Usage: `python sleap_training.py config.json`
run_2d_filter.py
- Purpose: Apply 2D filtering to pose data
- Usage: `python run_2d_filter.py /path/to/poses`
- Features: Confidence-based filtering and outlier removal
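A minimal illustration of confidence-based masking. The actual filter in `anipose_filtering_2d.py` presumably also handles temporal smoothing and outliers; this sketch shows only the thresholding idea:

```python
import math

def mask_low_confidence(points, confidences, threshold=0.5):
    """Replace 2D keypoints below a confidence threshold with NaN.

    points: list of (x, y) tuples; confidences: matching list of
    detection scores in [0, 1].
    """
    return [
        (x, y) if c >= threshold else (math.nan, math.nan)
        for (x, y), c in zip(points, confidences)
    ]

filtered = mask_low_confidence([(10.0, 5.0), (11.0, 5.5)], [0.9, 0.2])
# the second, low-confidence point becomes (nan, nan)
```

Masked points are then ignored (or interpolated) before triangulation.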
check_triangulation.py
- Purpose: Validate triangulation results
- Usage: `python check_triangulation.py /path/to/triangulated/data`
- Features: Quality assessment and visualization
backprojection_gen.py
- Purpose: Generate 2D backprojection visualizations
- Usage: `python backprojection_gen.py /path/to/3d/data /path/to/calibration`
- Features: 3D to 2D projection for validation
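The idea behind backprojection validation is to project each reconstructed 3D point back through every camera's projection matrix and compare against the original 2D detections. A generic sketch (not this script's code):

```python
import numpy as np

def backproject(P, point_3d):
    """Project a 3D point through a 3x4 projection matrix to pixels."""
    X = np.append(np.asarray(point_3d, dtype=float), 1.0)  # homogeneous
    u, v, w = P @ X
    return u / w, v / w        # perspective divide

# Toy pinhole camera at the origin (identity intrinsics and pose).
P = np.hstack([np.eye(3), np.zeros((3, 1))])
uv = backproject(P, [1.0, 2.0, 4.0])
# uv == (0.25, 0.5); the reprojection error is the distance between
# uv and the corresponding 2D detection.
```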
backproject_2_napari_plugin.py
- Purpose: Napari plugin for backprojection visualization
- Usage: Load as Napari plugin
- Features: Interactive 3D pose visualization
merging_videos.py (in scripts/lighting/)
- Purpose: Merge multiple video files
- Usage: `python merging_videos.py /path/to/videos /path/to/output`
- Features: Video concatenation and synchronization
reencode_h264.py
- Purpose: Re-encode videos to H.264 format
- Usage: `python reencode_h264.py /path/to/input /path/to/output`
- Features: Format conversion and compression
- `debug_coordinates_transformation.ipynb`: Coordinate system debugging
- `debug_cropping.ipynb`: Cropping validation and visualization
- `debug_detection.ipynb`: Detection quality assessment
- `debug_triangulation_files.ipynb`: Triangulation validation
- `data_analysis_mergining_predictions.ipynb`: Prediction merging and analysis
- `filtering.ipynb`: Data filtering techniques
- `dlc_plots.ipynb`: DeepLabCut result visualization
- `sleap_plots.ipynb`: SLEAP result visualization
- `open_calibration.ipynb`: Calibration data inspection
- `2a_check_calibration.ipynb`: Calibration validation
- `back_projection.ipynb`: Backprojection analysis
```bash
# 1. Define cropping parameters
python 0a_define_cropping.py /path/to/calibration_video.avi

# 2. Process all videos
python 0b_process_videos.py /path/to/videos /path/to/cropping_params.json

# 3. Check video integrity
python 0c_opt_check_video_integrity.py /path/to/processed/videos

# 4. Extract checkerboards
python 1_extract_checkerboards.py /path/to/calibration/videos

# 5. Perform calibration
python 2_calibration_multicam_script.py /path/to/checkerboard/detections

# 6. Run pose detection (SLEAP or DLC)
python scripts/sleap/model_inference.py /path/to/cropped/videos

# 7. Triangulate to 3D
python 3_triangulation_multicam_anipose_script.py /path/to/2d/poses /path/to/calibration
```

```bash
# Run DLC inference
python scripts/dlc/run_dlc_inference.py config.yaml /path/to/videos --make-labeled-video

# Apply 2D filtering
python scripts/anipose/run_2d_filter.py /path/to/poses

# Generate backprojections
python scripts/backprojection_gen.py /path/to/3d/data /path/to/calibration

# Check triangulation quality
python scripts/anipose/check_triangulation.py /path/to/triangulated/data
```

- Videos: Multi-view AVI/MP4 files with synchronized cameras
- Calibration: Checkerboard calibration videos
- 2D Poses: SLEAP or DeepLabCut output files (.slp, .h5)
- 3D Poses: Movement-format datasets with 3D coordinates
- Calibration: TOML files with camera parameters
- Visualizations: Plots and videos with 3D trajectories
The system uses a standardized keypoint schema for mouse poses:
- Head: nose, ear_lf, ear_rt
- Body: back_rostral, back_mid, back_caudal, belly_rostral, belly_caudal
- Limbs: forepaw_lf, forepaw_rt, hindpaw_lf, hindpaw_rt
- Tail: tailbase
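The schema above can be written as a flat list for downstream code; the keypoint names are taken verbatim from the schema, while the variable name and grouping comments are ours:

```python
# Standardized mouse keypoint schema (13 keypoints).
KEYPOINTS = [
    # head
    "nose", "ear_lf", "ear_rt",
    # body
    "back_rostral", "back_mid", "back_caudal",
    "belly_rostral", "belly_caudal",
    # limbs
    "forepaw_lf", "forepaw_rt", "hindpaw_lf", "hindpaw_rt",
    # tail
    "tailbase",
]
```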
- Interactive 3D pose visualization
- Multi-layer data overlay
- Real-time parameter adjustment
- Export capabilities
- `frame_plots.py`: Frame-by-frame visualization
- `animation_tools.py`: 3D animation generation
- `backprojection.py`: 2D backprojection visualization
- Sample calibration files in `tests/assets/`
- Example video data for validation
- Reference triangulation results
- `test_triangulation.py`: Triangulation accuracy tests
- `test_example.py`: Basic functionality tests
```bash
pip install -e .
pip install -e .[dev]
```

- Black formatting
- isort import sorting
- Type hints throughout
```
data/
├── calibration/
│   ├── cropping_params.json
│   └── 20250509/
│       └── calibration session data
├── M29, M30, M31/
│   ├── cricket/
│   │   ├── *.csv               # timestamps
│   │   ├── *.avi               # original video
│   │   └── 133050/
│   │       └── multicam_video_*_cropped-v2_*/
│   │           ├── cropped videos (.mp4)
│   │           └── Tracking/
│   │               ├── Central: full.pickle, meta.pickle, snapshot.h5
│   │               ├── Sides: full.pickle, meta.pickle, snapshot.h5
│   │               └── Triangulations: datetime.h5
│   └── object/
│       └── (same structure)
```

```
output/
├── mc_calibration_output_*/
│   ├── all_calib_uvs.npy
│   ├── calibration_from_mc.toml
│   └── calibration_plots/
├── triangulation_results/
│   ├── 3d_poses.h5
│   └── validation_plots/
└── analysis_results/
    ├── trajectory_plots/
    └── statistics/
```
- Calibration Quality: Ensure checkerboard is visible in all views
- Synchronization: Verify camera synchronization
- Cropping Consistency: Use same cropping for calibration and data videos
- Memory Usage: Large videos may require chunked processing
This documentation was generated automatically from the codebase structure and content.