August 16, 2016, 13:51
preprocessor taking too much memory
#1
New Member
Adam Jirasek
Join Date: Mar 2011
Posts: 18
Rep Power: 15
Hi folks,
I have a question. I'm running a case where the mesh has 13.7 million nodes: 14.33 million tetrahedral cells, 22 million prism cells, and some 200,000 pyramids. The machine has 65 GB of memory, and I launch SU2 with:

mpirun -n 8 SU2_CFD cfd_file

It quickly consumes all the memory. I even tried two partitions; the result was the same. Does anyone have experience running meshes of this size with SU2?

Thanks,
Adam
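One way to confirm that the spike really happens during preprocessing is to watch the resident memory of the SU2 ranks as the run starts. Below is a minimal sketch, assuming Linux with a procps-style ps that supports -C, and that the ranks show up under the process name SU2_CFD (the binary invoked above); run it in a second terminal after launching mpirun.

Code:
# Minimal memory-watch sketch. Assumptions: Linux, 'ps' supports -C,
# and the MPI ranks appear under the process name 'SU2_CFD'.
# Note: RSS double-counts pages shared between ranks, so treat the
# total as an upper bound. Stop with Ctrl-C.
import subprocess
import time

while True:
    # 'rss' is reported by ps in kilobytes; '=' suppresses the header row.
    out = subprocess.run(
        ["ps", "-C", "SU2_CFD", "-o", "pid=,rss="],
        capture_output=True, text=True,
    ).stdout
    rows = [line.split() for line in out.splitlines() if line.strip()]
    total_gb = sum(int(rss) for _pid, rss in rows) / (1024 ** 2)
    print(f"{len(rows)} rank(s), total RSS ~ {total_gb:.2f} GB")
    time.sleep(5)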
August 18, 2016, 17:41
#2
Senior Member
Heather Kline
Join Date: Jun 2013
Posts: 309
Rep Power: 13
Unfortunately, it sounds like you might be running up against the memory limits of your system. Outside of reducing the mesh size or moving to a system with more memory, you might try the low-memory and binary output options. I'm not sure how much this will help in preprocessing, since these options are more about controlling the size of output files, but it's worth a try. If you are not already using version 4.2.0, it included some memory improvements, but since most of those were at the close of the simulation, again I'm not sure they will help with the preprocessing steps.

The options I was referring to are:
OUTPUT_FORMAT = TECPLOT_BINARY
LOW_MEMORY_OUTPUT = YES

Longer term, it would be useful for many users if SU2 had a standard estimate of the memory required per mesh node, but I don't think anyone has investigated that deeply, and in any case any estimate would vary depending on the simulation type, dimensions, number of cores, code version, etc.
August 18, 2016, 18:33
#3
New Member
Adam Jirasek
Join Date: Mar 2011
Posts: 18
Rep Power: 15
Hi Heather,
Good to hear from you. I tried the options you recommended and it did not change anything. The fact is that I'm running out of memory before SU2 saves anything to disk. At the moment it does not look like the number of processors affects the preprocessing. I'll try a smaller case and see where I end up.

Do you know the largest case ever solved with SU2? I'm just curious.

Thanks,
Adam
August 18, 2016, 19:53
#4
Super Moderator
Thomas D. Economon
Join Date: Jan 2013
Location: Stanford, CA
Posts: 271
Rep Power: 14
Adam,
We'll be releasing v4.3.0 soon, which has a few more improvements you can try: the preprocessing spikes in memory are gone, and the peak now occurs during the solve due to the Jacobian. In the meantime, do you have access to another solver, such as EDGE, on which we could run the exact same mesh to compare relative memory usage between the codes?

Thanks,
Tom
August 19, 2016, 10:38
#5
New Member
Adam Jirasek
Join Date: Mar 2011
Posts: 18
Rep Power: 15
Hi Tom,
The rule of thumb we have for Edge (and it is actually fairly accurate) is about 1 GB of RAM per million mesh nodes. I then tested SU2 on a mesh of 330,000 points and around 2 million elements, and the maximum memory usage was about 5.13 GB when running on a single processor. When running on four processors, the load is around 5 GB on two of the processors and around 2.24 GB on the other two.

Adam
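Taken at face value, those numbers work out to roughly 15.5 GB per million nodes for this setup, versus about 1 GB per million for Edge, which would put the original 13.7-million-node mesh far beyond a 65 GB machine. A back-of-the-envelope sketch, assuming peak memory scales linearly with node count (a simplification; actual usage depends on solver settings, element types, partitioning, and code version):

Code:
# Extrapolation from the numbers in this post; assumes peak memory
# scales linearly with node count, which is only a rough approximation.
test_nodes = 330_000      # points in the test mesh
test_peak_gb = 5.13       # observed single-process peak

gb_per_million = test_peak_gb / (test_nodes / 1e6)
print(f"SU2 (this setup): ~{gb_per_million:.1f} GB per million nodes")
print("Edge rule of thumb: ~1 GB per million nodes")

big_nodes = 13.7e6        # the original mesh from post #1
print(f"Estimated peak for the 13.7M-node mesh: "
      f"~{gb_per_million * big_nodes / 1e6:.0f} GB")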