CFD Online Discussion Forums > SU2 > Mesh deformation memory requirements
https://www.cfd-online.com/Forums/su2/199952-mesh-deformation-memory-requirements.html

aa.g March 20, 2018 08:36

Mesh deformation memory requirements
 
Dear all,

I am performing a static FSI calculation in which I couple SU2 with an external structural solver through the Python wrapper, using the CFluidDriver class. Once the structural deformations have been applied to the fluid surface mesh, the volume mesh is deformed with the StaticMeshUpdate method (I have found element stiffness based on inverse wall distance to be more robust for this).
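
For context, a minimal sketch of this coupling step with the v6 Python wrapper (the marker tag, the constructor argument order, and the GetVertexCoord*/SetVertexCoord*/SetVertexVarCoord calls are assumptions about the wrapper API rather than something taken from this post):

Code:

import pysu2
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Fluid driver from the Python wrapper; in v6 the arguments are
# (config file, number of zones, number of dimensions, MPI communicator).
driver = pysu2.CFluidDriver("fluid.cfg", 1, 3, comm)

# Look up the FSI interface marker on this rank ("wing" is a made-up tag).
all_markers = driver.GetAllBoundaryMarkers()   # assumed: dict {tag: marker id}
marker_id = all_markers.get("wing")

if marker_id is not None:
    n_vertex = driver.GetNumberVertices(marker_id)

    # Displacements coming from the external structural solver; zeros here
    # just to keep the sketch self-contained.
    dx = dy = dz = [0.0] * n_vertex

    for i_vertex in range(n_vertex):
        # Apply the structural displacement to the fluid surface mesh
        # (SetVertexCoord*/SetVertexVarCoord are assumed from the v6 API).
        x = driver.GetVertexCoordX(marker_id, i_vertex) + dx[i_vertex]
        y = driver.GetVertexCoordY(marker_id, i_vertex) + dy[i_vertex]
        z = driver.GetVertexCoordZ(marker_id, i_vertex) + dz[i_vertex]
        driver.SetVertexCoordX(marker_id, i_vertex, x)
        driver.SetVertexCoordY(marker_id, i_vertex, y)
        driver.SetVertexCoordZ(marker_id, i_vertex, z)
        driver.SetVertexVarCoord(marker_id, i_vertex)

# Propagate the surface displacement into the volume mesh; this is the
# step whose memory footprint is discussed below.
driver.StaticMeshUpdate()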

My progress is bottlenecked here by the seemingly excessive memory requirements of the mesh deformation routine (it exceeds my 32 GB of RAM for a relatively small 3M-cell mesh). Having tried several workarounds, I have made the following observations:
  • The process crashes ("Killed" when running in serial, or a more informative "Exit status 9" when running in parallel) after the calculation of the element volumes/wall distances, perhaps during the assembly of the stiffness matrix? System Load Viewer shows RAM usage spiking to >99.8% before the crash.
  • For smaller deformations, the deformation sometimes completes successfully.
  • Increasing the number of deformation increments (DEFORM_NONLINEAR_ITER) seems to affect the issue as well, which is most likely related to the previous observation.
I hope this gives a reasonable overview of the context. The problem appears with both 5.0.0 and 6.0.0. Having struggled with this for a while now, I would like to know the following:
  • Are the reported memory requirements reasonable for the described case (over 32 GB of RAM to deform a ~3M-cell volume mesh)? If so, my machine is simply not sufficient, although 32 GB seems excessive for the calculation at hand.
  • Which settings in the Grid Deformation Parameters section of the .cfg could allow me to reduce these memory requirements? Perhaps my choice of solver (FGMRES), preconditioner (ILU) or the remaining parameters is not well suited, although I suspect this is irrelevant if the problem is a large stiffness matrix.
  • Bonus: why does the amplitude of the deformation (seem to) affect the memory requirements of the volume mesh movement routine? Is there perhaps some kind of "radius", depending on the boundary deformation, outside of which elements are not included in the calculation?
If anyone has experience with a similar process involving volume mesh deformation, any comment would be very welcome and helpful.

hlk March 25, 2018 22:10

Thanks for your question.
In my experience, memory can be an issue for deformation.
I would suggest first setting DEFORM_CONSOLE_OUTPUT to YES in order to confirm that it is crashing at some point during the linear solve.
If that is the problem, you can try the following (gathered into an example config excerpt below):
- using DEFORM_LINEAR_SOLVER= RESTARTED_FGMRES along with LINEAR_SOLVER_RESTART_FREQUENCY= n (it defaults to 10);
- raising the tolerance on the deformation;
- decreasing the number of linear iterations and/or increasing the number of nonlinear iterations;
- setting VISUALIZE_DEFORMATION= YES to get more information on whether there are other issues.
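
Gathered together, those suggestions correspond to roughly the following excerpt of the Grid Deformation Parameters section of the .cfg. Option names are written from memory of the v6 config_template.cfg and the values are only illustrative, so verify both against the template shipped with your installation:

Code:

% Element stiffness based on (inverse) wall distance, as used by the original poster
DEFORM_STIFFNESS_TYPE= WALL_DISTANCE
% Restarted FGMRES keeps memory bounded by restarting every n iterations
DEFORM_LINEAR_SOLVER= RESTARTED_FGMRES
LINEAR_SOLVER_RESTART_FREQUENCY= 10
% Preconditioner of the deformation linear solver
DEFORM_LINEAR_SOLVER_PREC= ILU
% Fewer linear iterations per increment, more nonlinear increments
DEFORM_LINEAR_ITER= 100
DEFORM_NONLINEAR_ITER= 5
% Diagnostics suggested above
DEFORM_CONSOLE_OUTPUT= YES
VISUALIZE_DEFORMATION= YES
% The tolerance of the deformation linear solver can also be relaxed;
% the option name differs between versions, so it is omitted here.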

FGMRES can use a lot of memory because it stores information from its iterations; RESTARTED_FGMRES gets around this by restarting the solution periodically, using the last iterate as a new initial point.
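
As a rough back-of-envelope illustration (assuming the point count is of the same order as the 3M cells, three displacement unknowns per point, and double precision): one Krylov vector takes about 3e6 x 3 x 8 B ≈ 72 MB, and flexible GMRES keeps two such work vectors per iteration (the basis vector and its preconditioned counterpart), so a solve allowed a few hundred iterations can demand tens of GB for the Krylov basis alone, on top of the stiffness matrix and ILU factors. Restarting every 10 iterations caps that part at roughly 1.5 GB.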

You might try limiting the number of iterations first, which will also help with troubleshooting other problems that might become apparent when plotting the output geometry.

aa.g April 6, 2018 10:37

Dear hlk,

Thank you very much for your detailed and helpful answer. Let me get back to you with a quantitative estimate of the improvements in my specific case.

In the meantime, I would like to point out another minor issue that I came across when testing my process on a larger model: because the index iVertex is declared as an unsigned short in several places across the CDriver classes, one runs into overflow errors when more than 65535 surface nodes are "owned" by a single process. Was this intended by the developers? At the cost of a few bytes, the problem is solved simply by refactoring this variable to unsigned, or even unsigned long.
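
For illustration, an unsigned short is a 16-bit counter, so any per-process vertex count above 65535 wraps around; a quick check of the aliasing (70000 is just an arbitrary count above the limit):

Code:

# Hypothetical number of surface vertices owned by one MPI rank.
n_owned_vertices = 70000

# Value a 16-bit unsigned index actually holds after counting that high.
wrapped = n_owned_vertices % (2 ** 16)

print(wrapped)  # 4464 -> vertices beyond index 65535 alias lower indices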

hlk April 9, 2018 19:45

For reporting bugs, we suggest using the GitHub issue tracker:
https://github.com/su2code/SU2/issues

You may see a couple of 'issues' that are actually questions more appropriate for the forum - occasionally people get confused about which one to use, or don't know which category their question falls under. The forum is generally used for questions about how to use SU2 (like your original question) that can be answered by anyone with experience with SU2 or CFD, while the issue tracker is meant to be for reporting bugs (like an unsigned short being used where it ought to be unsigned long) that require attention from code developers.

If you would like to fix the problem yourself, please see the developers docs available here:
https://su2code.github.io/docs/home/

jomunkas March 24, 2020 12:47

Follow up
 
Hi,

I encountered a similar issue, as I am still using v6. The suggestions from hlk are clear, thank you. Just one more question: is there any setting that lets the deformation also use hard disk (out-of-core) memory?
pcg March 24, 2020 15:34

Hi Peter, no, SU2 does not run out-of-core. Adding to Heather's suggestions, you can also try the CONJUGATE_GRADIENT linear solver.
Do note that a number of issues with 3D mesh deformation have been addressed for v7...
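
For reference, that suggestion maps onto the deformation solver option already mentioned in this thread; assuming CONJUGATE_GRADIENT is an accepted value for it in your version (check config_template.cfg), the relevant lines would look like:

Code:

% Memory-lean Krylov solver for the mesh deformation system
DEFORM_LINEAR_SOLVER= CONJUGATE_GRADIENT
DEFORM_LINEAR_SOLVER_PREC= ILU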

