March 3, 2017, 17:24
SU2 Scalability
#1
Oliver V
New Member
Join Date: Dec 2015
Posts: 17
Rep Power: 10
Hello,

I am trying to solve a steady RANS airflow around a 3D wing with 5 million nodes (15 M elements, unstructured CGNS). I have access to a cluster, but beyond a certain number of cores (192) the solver runs into problems with MPI communication and memory management. It usually exits with:

Code:
mpirun noticed that process rank XX with PID YYYYY on node ab1234 exited on signal 11 (Segmentation fault).

I'm running on a cluster with 24 processors per node and 32 GB of RAM per node, switching between 8 and 12 nodes to find the one configuration that *might* work. A problem of this size shouldn't be difficult for this type of cluster. A colleague solved a DNS simulation with around 200 M nodes in roughly 100 h on 144 cores (two years ago), yet a 5 M node compressible steady RANS-SA simulation takes me roughly 60 h on 192 cores, and it sometimes crashes because of single-process memory issues.

Has anyone had problems scaling SU2 across several processors? What is the maximum number of processors you have managed to use? Have you run an efficiency test to find the point where adding cores no longer reduces wall-clock time? What about memory usage?

Thanks,
Oliver.
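For what it's worth, the efficiency test mentioned above reduces to a small calculation on measured wall-clock times from a strong-scaling sweep (same mesh, increasing core counts). The timings below are hypothetical placeholders, not real SU2 measurements; substitute your own runs:

```python
# Sketch of a strong-scaling efficiency calculation.
# Wall-clock times (hours) per core count are placeholder values,
# NOT measured SU2 data -- replace them with your own timings.
timings = {24: 40.0, 48: 22.0, 96: 13.0, 192: 9.0}

base_cores = min(timings)        # smallest run is the scaling baseline
base_time = timings[base_cores]

for cores, t in sorted(timings.items()):
    speedup = base_time / t
    # Efficiency = speedup relative to ideal linear scaling from the baseline.
    efficiency = speedup / (cores / base_cores)
    print(f"{cores:4d} cores: speedup {speedup:5.2f}, efficiency {efficiency:5.1%}")
```

The knee of the efficiency curve (where efficiency drops well below ~70-80%) is usually the point past which extra cores mostly add communication overhead rather than saving time.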