Parallelising an explicit solver: is correctBoundaryConditions enough?

April 28, 2021, 04:49   #1
Shannon Leakey (scleakey)
Hello everyone

I am programming a second-order Godunov-type scheme in OpenFOAM using dual-time stepping. It works fine in serial, but has intermittent problems in parallel. Sometimes it converges in pseudo-time, but sometimes (if there are rapid changes near a processor boundary) it gets stuck in an extreme feedback loop at the processor boundary and blows up! This makes me think the problem lies in the synchronisation between processors.

How the scheme works
There are conserved variables Q and primitive variables W. In each pseudo-time iteration, (limited) gradients of W are calculated, then these are used to reconstruct the Riemann states W_L and W_R at each interface, which are used to calculate the conservative flux F(Q) with a Riemann solver. The flux F(Q) is then used to update the conserved variables Q, which are then used to update the primitive variables W.
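Written out, one pseudo-time iteration looks roughly like this (a sketch only: Q, W and the commented-out helper names are my own code, with a single scalar field standing in for the full sets of variables):

Code:
    // one pseudo-time iteration (sketch)
    volVectorField gradW(fvc::grad(W));   // gradients of the primitives
                                          // (limiting set via fvSchemes)

    // reconstruct the left/right Riemann states at every face from the cell
    // values and gradients (my own code)
    // WL, WR = reconstruct(W, gradW);

    // Riemann solver gives the conservative flux through every face (my own code)
    // surfaceScalarField F = riemannFlux(WL, WR);

    // explicit update of the conserved variables from the face fluxes, e.g.
    // Q -= deltaTau*fvc::div(F);

    // primitives recovered from the updated conserved variables (my own code)
    // W = primitivesFromConserved(Q);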

correctBoundaryConditions
I use correctBoundaryConditions:
  • on gradients of W right after calculation from W
  • on W right after calculation from Q
as these are the only fields that need to be exchanged between processors. (I have also tried calling correctBoundaryConditions on everything it can be applied to, at various points in the code, to no avail.) I am starting to think correctBoundaryConditions is not the problem and there is something else I'm missing. Note that I have used gMin/gMax where required, and I am using OpenFOAM v1906.
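In code, the two synchronisation points sit like this (gradW and the conversion from Q back to W are my own code; the conversion name below is made up):

Code:
    // gradients of the primitives, then sync the processor patches
    volVectorField gradW(fvc::grad(W));
    gradW.correctBoundaryConditions();

    // ... reconstruction, Riemann fluxes, update of Q ...

    // primitives recomputed from the conserved variables, then synced
    // W = primitivesFromConserved(Q);
    W.correctBoundaryConditions();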

MPI thoughts
I have trawled through dbnsFoam in foam-extend, these forums, and the general internet, and the best I can find is this presentation: Instructional workshop on OpenFOAM programming. It seems to suggest that each runTime loop sends out information in a non-blocking way, and that correctBoundaryConditions acts as a blocking synchronisation point where this information is received. If this is true, then perhaps I need another exchange of information after the gradients are calculated? (Sorry if I've used the wrong words - I don't know much about MPI at the moment.)
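From skimming GeometricField and processorFvPatchField, my (possibly wrong) understanding is that the whole exchange is hidden behind one call, something like this:

Code:
    // my paraphrase of what W.correctBoundaryConditions() boils down to,
    // so treat with caution:
    W.boundaryFieldRef().evaluate();
    // for coupled (processor) patches this first calls initEvaluate() on each
    // patch (posting the sends of the local patchInternalField()) and then
    // evaluate() (receiving the remote values into the patch), so after the
    // call patchNeighbourField() should hold up-to-date values from the
    // neighbouring processor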

Would anybody be able to shed light on blocking and non-blocking calls in OpenFOAM? Should correctBoundaryConditions solve all my problems or do I need something else?

Thanks in advance,
Shannon

April 29, 2021, 13:35   #2
Domenico Lahaye (dlahaye)
OpenFOAM inherits its blocking and non-blocking calls from MPI.

For correctBoundaryConditions to take care of inter-processor boundaries, a naive approach might not suffice.

It might help if you shared more information on what you are trying to accomplish.
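If I recall correctly, the communication mode used when coupled patches are evaluated is the commsType optimisation switch (blocking, scheduled or nonBlocking), which can be overridden per case, e.g.

Code:
    // in the case's system/controlDict (otherwise taken from etc/controlDict)
    OptimisationSwitches
    {
        commsType       nonBlocking;
    }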

May 3, 2021, 12:27   #3 - Left/right/owner/neighbour
Shannon Leakey (scleakey)
Thanks Domenico for your reply

I read up on MPI and went through the code for correctBoundaryConditions - I am now pretty sure correctBoundaryConditions is not the problem. I have also verified this with some classic Info/Pout outputs in the solver.

I think the problem was that I defined the left/right Riemann states with respect to owner/neighbour cells. The foam-extend solver dbnsFoam seems to do this in src/dbns/numericFlux/numericFlux.C and I was taking it as inspiration. However, I don't think this makes sense as the owner might not always be on the left with respect to the velocity direction, for example, at a processor boundary. Perhaps dbnsFoam has corrected for this in the Riemann solver, which is how they get away with it. I need to do some thinking about this!
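For reference, this is the pattern I copied (a rough sketch in the spirit of numericFlux.C, not its actual code, with one scalar primitive W standing in for the full set):

Code:
    const labelUList& own = mesh.owner();
    const labelUList& nei = mesh.neighbour();

    // internal faces: Sf points from owner to neighbour, so I took
    // left = owner side and right = neighbour side
    forAll(own, facei)
    {
        const vector dL = mesh.Cf()[facei] - mesh.C()[own[facei]];
        const vector dR = mesh.Cf()[facei] - mesh.C()[nei[facei]];

        const scalar WL = W[own[facei]] + (dL & gradW[own[facei]]);
        const scalar WR = W[nei[facei]] + (dR & gradW[nei[facei]]);

        // flux[facei] = riemannFlux(WL, WR, mesh.Sf()[facei]);   // my own code
    }

    // coupled (processor) patches: the local cell plays the owner role and the
    // cell on the other processor is reached through patchNeighbourField()
    forAll(W.boundaryField(), patchi)
    {
        if (W.boundaryField()[patchi].coupled())
        {
            const scalarField WLp(W.boundaryField()[patchi].patchInternalField());
            const scalarField WRp
            (
                refCast<const coupledFvPatchScalarField>
                (
                    W.boundaryField()[patchi]
                ).patchNeighbourField()
            );
            // ... reconstruct to the patch face centres and call the Riemann
            // solver exactly as for the internal faces (my own code)
        }
    }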