I've been having problems of all sorts trying to do de novo assembly of transcriptome data on my cluster. Insufficient RAM may well be the culprit; apparently at BGI they run 512 GB RAM beasts.
I think it might be worthwhile to explore algorithmic changes rather than hardware upgrades; after all, there comes a point when the cost far exceeds the "worthiness" of an experiment.
Contrail: Assembly of Large Genomes using Cloud Computing
[excerpt .... Preliminary results show Contrail’s contigs are of similar size and quality to those generated by Velvet when applied to small (bacterial) genomes, but provides vastly superior scaling capabilities when applied to large genomes....]
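The core idea behind Contrail is to build the de Bruijn graph with MapReduce instead of holding it all in one machine's RAM: a mapper emits (k-mer, following base) edges from each read, and a reducer groups edges by k-mer, so the graph is distributed across many cheap nodes. A minimal single-process sketch of that map/reduce shape (toy k value and example reads are mine, not Contrail's):

```python
from collections import defaultdict

K = 5  # toy k-mer length; real assemblers use much larger k

def map_reads(reads):
    # Mapper: emit each k-mer together with the base that follows it,
    # i.e. one outgoing edge of the de Bruijn graph per position.
    for read in reads:
        for i in range(len(read) - K):
            yield read[i:i + K], read[i + K]

def reduce_kmers(pairs):
    # Reducer: group edges by k-mer. A k-mer with a single outgoing
    # base lies on an unambiguous path and can be compressed into a contig.
    graph = defaultdict(set)
    for kmer, nxt in pairs:
        graph[kmer].add(nxt)
    return dict(graph)

reads = ["ACGTACGTAC", "CGTACGTACG"]
graph = reduce_kmers(map_reads(reads))
```

On a real Hadoop cluster the shuffle phase does the grouping, so no single node ever needs the whole graph in memory, which is exactly the scaling advantage the excerpt describes.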
CloudBurst: Highly Sensitive Short Read Mapping with MapReduce
[excerpt ...CloudBurst's running time scales linearly with the number of reads mapped, and with near linear speedup as the number of processors increases. In a 24-processor core configuration, CloudBurst is up to 30 times faster than RMAP executing on a single core...]
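CloudBurst follows the classic seed-and-extend strategy, again phrased as MapReduce: mappers emit exact seeds from both the reference and the reads, and the reducer joins reads to reference positions that share a seed, then verifies the candidate alignment. Here's a single-process sketch of that join (seed length, the exact-match verification, and the example sequences are simplifications of mine; CloudBurst allows mismatches and indels during extension):

```python
from collections import defaultdict

SEED = 4  # toy seed length; CloudBurst derives it from read length and allowed errors

def map_seeds(reference, reads):
    pairs = []
    # Reference side: every overlapping substring of length SEED, tagged with its offset.
    for i in range(len(reference) - SEED + 1):
        pairs.append((reference[i:i + SEED], ("ref", i)))
    # Read side: non-overlapping seeds; by pigeonhole, an alignment with
    # few mismatches must contain at least one exact seed.
    for rid, read in enumerate(reads):
        for j in range(0, len(read) - SEED + 1, SEED):
            pairs.append((read[j:j + SEED], ("read", rid, j)))
    return pairs

def reduce_join(pairs, reference, reads):
    # Reducer: group by seed; every (reference, read) pair sharing a seed
    # is a candidate alignment, verified here by exact comparison.
    groups = defaultdict(lambda: ([], []))
    for seed, val in pairs:
        groups[seed][0 if val[0] == "ref" else 1].append(val)
    hits = []
    for refs, rds in groups.values():
        for _, pos in refs:
            for _, rid, off in rds:
                start = pos - off
                read = reads[rid]
                if 0 <= start <= len(reference) - len(read) \
                        and reference[start:start + len(read)] == read:
                    hits.append((rid, start))
    return sorted(set(hits))

reference = "AACCGGTTACGT"
reads = ["CCGGTTAC"]
hits = reduce_join(map_seeds(reference, reads), reference, reads)
```

Because each seed's candidate list is processed independently, the work parallelizes across reducers, which is where the near-linear speedup in the excerpt comes from.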