Hi there,

We are running into memory issues while running hifiasm, most likely because of the sheer amount of data. Our genome is around 2.2 Gb, sequenced to roughly 150x coverage across 3 SMRT cells, which leaves us with three fastq files of about 220 GB each. Unfortunately we are mostly limited to compute nodes with less than 500 GB of memory, and the error-correction step appears to need well in excess of 500 GB.
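For reference, here is a minimal sketch of how we are currently invoking it (file names and thread count are placeholders for our actual setup); since hifiasm accepts multiple input files, we pass the three cells directly without merging them first:

```bash
# Current invocation (sketch): hifiasm takes multiple read files as
# positional arguments, so the three cells are passed as-is.
hifiasm -o asm -t 48 cell1.fastq.gz cell2.fastq.gz cell3.fastq.gz
```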
FYI, read quality looks good.
Would it be possible to generate error-corrected reads separately for each batch of raw reads and then recombine them for the final assembly (something like the sketch below)? Or would you advise against this?
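The workflow we have in mind is roughly the following, assuming `--write-ec` (which, as I understand it, dumps the error-corrected reads to `<prefix>.ec.fa` in recent hifiasm versions) behaves as expected; file names are again placeholders:

```bash
# Step 1 (sketch): correct each cell independently, so each run only
# holds ~50x of the data; --write-ec dumps the corrected reads.
for i in 1 2 3; do
    hifiasm -o ec_cell${i} -t 48 --write-ec cell${i}.fastq.gz
done

# Step 2: pool the per-cell corrected reads and run the full assembly.
hifiasm -o asm -t 48 ec_cell1.ec.fa ec_cell2.ec.fa ec_cell3.ec.fa
```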
Thanks,
John