Running N2V prediction on a GPU cluster fails #154

@Eddymorphling

Description

Hi folks,
I have a trained 3D N2V model that I would like to use for prediction on a folder containing around 100 3D TIFF stacks. I am trying to run the predictions with GPU compute resources on my local HPC cluster, using the CLI I found in the N2V wiki. Here is what I run:

python /home/N2V_predict.py --fileName=*.tif --dataPath='/Raw_data' --tile=4 --dims=ZYX --baseDir='/Analysis/N2V_model' --name=N2V-3D

Strangely, this job fails and my GPU runs out of memory. Does it try to load all 100 files at once and run out of memory? I am not sure. When I run the same job on an interactive GPU node using the napari-n2v plugin, it works well, as it only runs predictions on one file at a time. Any clue how I can get this working in a non-interactive HPC cluster run? Thank you.
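Since the interactive napari-n2v run succeeds by processing one stack at a time, one workaround (an untested sketch, not a confirmed fix) is to invoke the prediction script once per file instead of passing `*.tif`, so only a single 3D volume is loaded per GPU session. The helper below only builds the per-file command lines; the script path, flags, and values mirror the command quoted above and are assumptions about how that CLI behaves.

```python
# Sketch: build one N2V_predict.py invocation per TIFF stack so each
# run only loads a single 3D volume onto the GPU.
# The script path and CLI flags are copied from the question and are
# assumed, not verified against the wiki script.
from pathlib import Path


def build_commands(data_path, base_dir, model_name="N2V-3D", tile=4, dims="ZYX"):
    """Return one N2V_predict.py command (as an argv list) per .tif in data_path."""
    cmds = []
    for f in sorted(Path(data_path).glob("*.tif")):
        cmds.append([
            "python", "/home/N2V_predict.py",
            f"--fileName={f.name}",          # one file per invocation
            f"--dataPath={data_path}",
            f"--tile={tile}",
            f"--dims={dims}",
            f"--baseDir={base_dir}",
            f"--name={model_name}",
        ])
    return cmds
```

Each command can then be run sequentially in the batch job, e.g. with `subprocess.run(cmd, check=True)`, so the GPU holds at most one stack's tiles at a time; if a single stack still exhausts memory, raising `--tile` to split each volume into more tiles may also help.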
