Running metaSPAdes with 71M paired-end reads, advice on RAM and command parameters #1516

suma-h-bio asked this question in Q&A (Unanswered, 0 replies)
Hello,
I am a beginner in shotgun metagenomic analysis, and I am currently running metaSPAdes on my dataset.
**Input size:** ~71,251,478 paired-end reads (1 sample)
**System specs:** 125 GB RAM available + 2 TB swap space
**Command used:**

```shell
metaspades.py -1 sample_R1.fastq.gz -2 sample_R2.fastq.gz \
    -k 21,33,55,77,99,127 -t 16 -m 250
```
I have a few questions:

1. Is it okay that I did not specify `--only-assembler` (i.e., I ran with the default error-correction step)?
2. Will metaSPAdes run correctly with my specs, or should I adjust `-m`, the k-mer values, or other parameters?
3. Since I am new to shotgun metagenomics, do you recommend any optimized settings for large datasets like mine?
Any guidance will be very helpful 🙏
Thank you!