
scleakey April 28, 2021 03:49

Parallelising an explicit solver: is correctBoundaryConditions enough?
 
Hello everyone :D

I am programming a second-order Godunov-type scheme in OpenFOAM using dual-time stepping. It works fine in serial, but in parallel it only sometimes converges in pseudo-time: if there are rapid changes near a processor boundary, it can get stuck in an extreme feedback loop at that boundary and blow up! :confused: This makes me think the problem is in the synchronisation between processors.

How the scheme works
There are conserved variables Q and primitive variables W. In each pseudo-time iteration, (limited) gradients of W are calculated and used to reconstruct the Riemann states W_L and W_R at each interface; a Riemann solver then turns these into the conservative flux F(Q), which is used to update the conserved variables Q, and finally the primitive variables W are recomputed from Q.
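
Schematically, one pseudo-time iteration then looks like this (W is shown as a single volVectorField for brevity, and computeRiemannFlux and primitiveFromConserved are placeholder names for my own functions, not real OpenFOAM calls):

Code:
// One pseudo-time iteration, schematically.
volTensorField gradW = fvc::grad(W);   // (limited) gradients of W
gradW.correctBoundaryConditions();     // sync gradW across processor patches

// reconstruct W_L/W_R at each face and run the Riemann solver;
// F holds the face-normal flux (F & Sf) on every face
surfaceVectorField F = computeRiemannFlux(W, gradW);

Q -= deltaTau*fvc::div(F);             // update conserved variables
W = primitiveFromConserved(Q);         // recover primitives from Q
W.correctBoundaryConditions();         // sync W across processor patches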

correctBoundaryConditions
I use correctBoundaryConditions:
  • on gradients of W right after calculation from W
  • on W right after calculation from Q
as these are the only fields that need to move between processors. (I have also tried calling correctBoundaryConditions on everything it can be used on, at various points in the code, to no avail.) I am starting to think correctBoundaryConditions is not the problem and there is something else I'm missing. Note that I have used gMin/gMax where required, and I am using OpenFOAM v1906.
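
(By gMin/gMax I mean the global, parallel-aware reductions rather than their local counterparts, e.g. for a volScalarField p:)

Code:
scalar pMinLocal  = min(p.primitiveField());   // this processor's internal field only
scalar pMinGlobal = gMin(p.primitiveField());  // reduced over all processors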

MPI thoughts
I have trawled through dbnsFoam on foam-extend, these forums, and the wider internet, and the best I have found is this presentation: Instructional workshop on OpenFOAM programming. It seems to suggest that each runTime loop sends out information in a non-blocking way, and that correctBoundaryConditions acts like a blocking synchronisation point where this information is received. If that is true, then perhaps I need another send after the gradients are calculated? (Sorry if I have used the wrong words - I don't know that much about MPI at the moment.)

Would anybody be able to shed light on blocking and non-blocking calls in OpenFOAM? Should correctBoundaryConditions solve all my problems or do I need something else?

Thanks in advance,
Shannon :)

dlahaye April 29, 2021 12:35

OpenFOAM inherits blocking and non-blocking calls from MPI.

For correctBoundaryConditions to take care of inter-processor boundaries, a naive approach might not suffice.

Perhaps it would help if you shared more information on what you are trying to accomplish.

scleakey May 3, 2021 11:27

Left/right/owner/neighbour
 
Thanks Domenico for your reply :)

I read up on MPI and went through the code for correctBoundaryConditions - I am now pretty sure correctBoundaryConditions is not the problem. I have also verified this with some classic Info/Pout outputs in the solver.
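
For anyone who finds this later: correctBoundaryConditions() ends up in the boundary field's evaluate(), whose structure (paraphrased, not the literal v1906 source; bfield stands for the field's boundary patches) is roughly:

Code:
// paraphrased structure of GeometricField::Boundary::evaluate()
label nReq = Pstream::nRequests();

forAll(bfield, patchi)
{
    bfield[patchi].initEvaluate(Pstream::defaultCommsType); // post sends
}

// for non-blocking comms, wait for all posted requests to complete
Pstream::waitRequests(nReq);

forAll(bfield, patchi)
{
    bfield[patchi].evaluate(Pstream::defaultCommsType);     // receive and update
}

So each correctBoundaryConditions() call is a self-contained send/receive exchange for that field; there is no separate "send" step that has to happen elsewhere in the runTime loop.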

I think the problem is that I defined the left/right Riemann states with respect to owner/neighbour cells. The foam-extend solver dbnsFoam seems to do this in src/dbns/numericFlux/numericFlux.C, and I was taking it as inspiration. However, I'm not sure this makes sense, as the owner might not always be on the left with respect to the velocity direction - for example, at a processor boundary. Perhaps dbnsFoam corrects for this inside the Riemann solver, which is how it gets away with it; the sketch below shows the pattern I mean. I need to do some more thinking about this!
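
To make it concrete (schematic, interior faces only; WL and WR are surfaceVectorFields allocated elsewhere, and processor patches would additionally need patchNeighbourField()):

Code:
const labelUList& own = mesh.owner();
const labelUList& nei = mesh.neighbour();
const volVectorField& C = mesh.C();        // cell centres
const surfaceVectorField& Cf = mesh.Cf();  // face centres

forAll(own, facei)
{
    // Sf points from owner to neighbour, so "left" = owner and
    // "right" = neighbour are fixed by the face orientation, not by
    // the velocity direction; the Riemann solver would have to
    // resolve the upwind side from the face normal itself.
    WL[facei] = W[own[facei]] + ((Cf[facei] - C[own[facei]]) & gradW[own[facei]]);
    WR[facei] = W[nei[facei]] + ((Cf[facei] - C[nei[facei]]) & gradW[nei[facei]]);
}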

