CFD Online Discussion Forums

Thread: steps required to transform single block structured FV code to multi-block (https://www.cfd-online.com/Forums/main/127492-steps-required-transform-single-block-structured-fv-code-multi-block.html)

antoine_b December 12, 2013 10:00

steps required to transform single block structured FV code to multi-block
 
Hi,

Can anyone explain to me the different steps required to transform/extend a finite volume single-block structured code into a finite volume multi-block structured code?

cfdnewbie December 12, 2013 10:25

Are you interested in conforming or non-conforming blocking?

antoine_b December 12, 2013 11:01

steps required to transform single block structured FV code to multi-block
 
Hi,

I am not sure I completely understand your question, but if I design a mesh, it will conform to/follow the boundary. Do you mean non-conforming in the sense of an immersed boundary, for instance?

cfdnewbie December 12, 2013 11:31

No, what I meant was: Do the blocks have to be connected in a conforming way? No hanging nodes?

antoine_b December 12, 2013 11:59

Ah, OK. I want to go with conforming blocks at first (no hanging nodes).

sbaffini December 13, 2013 04:11

I have no practical experience, so you might already be beyond the point I'm going to describe (in that case, sorry for the stupid post), but this is more or less what you should do/consider:

1) A first big difference comes from the basics of the main solver: is it parallel or serial? I assume it is serial, which possibly simplifies the description. So the point is how to go from single-block (SB) serial to multi-block (MB) serial. I'll come back to the parallel case later (by parallel I mean MPI; shared memory is just like serial).

2) The very nice thing about this problem is that the SB solver is almost all you need for the MB case (I assume you have some tool to produce your MB grids). Indeed, what are boundary conditions in the SB case become interface conditions from the adjacent blocks in the MB case. Roughly speaking, you do it by adding ghost cells at the interface boundaries of your blocks and exchanging information with the adjacent blocks during the iterations (I'll come back to this later).
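
Just to fix ideas, a minimal NumPy sketch (the array names, sizes and the single ghost layer are only illustrative assumptions of mine) of how the ghost cells of two blocks meeting along the i direction could be filled from the adjacent block's interior cells:

Code:

import numpy as np

NG = 1                                   # assumed: one ghost layer (see point 4)
nx, ny = 8, 6                            # interior cells per block (illustrative)

# each block stores its interior cells plus NG ghost layers on every side
blockA = np.zeros((nx + 2*NG, ny + 2*NG))
blockB = np.zeros((nx + 2*NG, ny + 2*NG))

def exchange_i_interface(left, right, ng=NG):
    # the right block's first interior layer(s) fill the left block's right-hand ghosts
    left[-ng:, :] = right[ng:2*ng, :]
    # the left block's last interior layer(s) fill the right block's left-hand ghosts
    right[:ng, :] = left[-2*ng:-ng, :]

exchange_i_interface(blockA, blockB)     # call once per iteration, before the interior update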

3) So, I would summarize the first step like this: you create your different blocks; on every block a SB solver is running; on the interface boundaries between blocks you use as boundary conditions the values grabbed from the adjacent blocks and temporarily stored in ghost cells. However, you should possibly use these values just as if the cells of interest (those near the interface boundary) were interior cells, i.e. keep using the computational stencil for interior cells. On real boundaries, of course, you keep using your real boundary conditions.
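
Again just to fix ideas, a toy sketch of this step (the 1D diffusion update, block sizes and Dirichlet values are all illustrative assumptions, not code from an actual solver): two 1D blocks advanced by an unchanged single-block explicit update, with real boundary conditions on the physical ends and interface conditions stored in ghost cells:

Code:

import numpy as np

n, nu, dt, dx = 10, 0.1, 0.01, 0.1       # illustrative sizes and parameters
# each block: 1 ghost cell, n interior cells, 1 ghost cell
blocks = [np.zeros(n + 2), np.zeros(n + 2)]

def single_block_update(u):
    # unchanged "single-block" explicit diffusion step on interior cells only
    u[1:-1] += nu * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

for step in range(100):
    # real boundary conditions on the physical outer boundaries
    blocks[0][0]  = 1.0                  # left Dirichlet value
    blocks[1][-1] = 0.0                  # right Dirichlet value
    # interface conditions: ghost cells copy the neighbour's nearest interior cell
    blocks[0][-1] = blocks[1][1]
    blocks[1][0]  = blocks[0][-2]
    # each block is then advanced exactly as the single-block solver would do
    for u in blocks:
        single_block_update(u)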

4) Ghost cells need to be exact replicas of the corresponding cells in the adjacent blocks, and you need as many layers as the interior computational stencil requires (e.g., one layer for a second-order central stencil, two layers for a fourth-order one).

5) Now, the main thing becomes how you visit the different blocks. In serial (or shared memory), you would probably have a main loop iterating over the blocks, then solving within each block. For a fully explicit scheme this is not particularly problematic, and you possibly just have to consider how to treat hyperbolicity in the order you visit the blocks (I'm really ignorant here, I'm just guessing). For implicit schemes and general elliptic/parabolic problems there is (if I remember correctly) a whole new mathematical problem to consider, which goes under the name of domain decomposition techniques (Schwarz preconditioning in this specific case); again, I'm very ignorant here, but you can read Chapter 6 of

Canuto, Hussaini, Quarteroni, Zang: Spectral Methods: Evolution to Complex Geometries and Applications to Fluid Dynamics, Springer

for more information.

Basically, as I understand the matter, since you now have to solve an algebraic system, you will need to iterate multiple times among the blocks during the solution at each time step, in order to properly exchange the information at the interfaces during the inner iterations on the algebraic systems of the single blocks. How to alternate, within each time step, between iterations among blocks and inner iterations is the main issue here, and I don't have experience with it.
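
To make the alternation concrete, here is a small toy sketch (my own construction, not taken from that book): a 1D Poisson problem split into two blocks, with a few inner Jacobi sweeps per block and outer iterations that refresh the interface values, in the spirit of a Schwarz-type iteration:

Code:

import numpy as np

n, h = 20, 1.0 / 41                      # cells per block, grid spacing (illustrative)
f = 1.0                                  # constant source term, just for the example
# two blocks, each with one ghost cell on each side
u = [np.zeros(n + 2), np.zeros(n + 2)]

for outer in range(200):                 # outer iterations among the blocks
    # refresh interface ghost values from the neighbouring block
    u[0][-1] = u[1][1]
    u[1][0]  = u[0][-2]
    # real Dirichlet conditions on the physical boundaries
    u[0][0], u[1][-1] = 0.0, 0.0
    # a few inner Jacobi sweeps of -u'' = f on each block, ghost values held fixed
    for b in range(2):
        for inner in range(5):
            u[b][1:-1] = 0.5 * (u[b][2:] + u[b][:-2] + h**2 * f)

# after enough outer iterations the two blocks agree at the interface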

6) In serial (or shared-memory parallel) you don't really need ghost cells. You just need some mask, linked list (or whatever) that tells you how the near-interface cells of adjacent blocks are connected. However, the ghost cell method is useful because then you can easily move to the MPI parallel case, at least for a first naive implementation. In that case, your blocks would no longer all be on the same machine; you would distribute the blocks (whole blocks, at least one per processor) among the processors. Everything would work mostly in the same way, the main difference being that the blocks would be running in parallel and the grabbing of interface conditions should consider two main things (see the sketch after this list):

- which processor has the near-interface cells I need
- using an MPI command to exchange the information
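
Purely as an illustration of those two points (mpi4py, the variable names and the 1D layout are assumptions of mine, not a prescribed way to do it): one whole 1D block per MPI rank, exchanging one ghost value with each neighbouring rank:

Code:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 10                                   # interior cells per block (illustrative)
u = np.zeros(n + 2)                      # one ghost cell on each side
left  = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

def exchange_ghosts():
    # send my last interior cell to the right neighbour,
    # receive my left ghost from the left neighbour (and symmetrically)
    comm.Sendrecv(sendbuf=u[-2:-1], dest=right, sendtag=0,
                  recvbuf=u[0:1],   source=left,  recvtag=0)
    comm.Sendrecv(sendbuf=u[1:2],   dest=left,  sendtag=1,
                  recvbuf=u[-1:],   source=right, recvtag=1)

exchange_ghosts()                        # call before every interior update, as in the serial case

Run with something like "mpirun -n 2 python script.py" (assuming mpi4py is installed); the neighbouring rank plays the role of "the processor that has the near-interface cells I need".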


I'm sure there are more efficient/intelligent ways to parallelize everything, but this is certainly easier. If the original code is already parallel, honestly, I don't know of any simple way to do the job and, actually, parallelization is usually the last step in code development. Here the main difficulty would be that everything should be inverted: all the processors would hold an equivalent part of all the blocks (a 1/Np fraction of the cells from each block, with Np processors), and each serial solver should actually work on Nb separate sub-blocks, Nb being the total number of blocks.

This is, more or less, the textbook part of the job. I'm pretty sure other people can give more accurate and useful information on the matter.

hilllike December 13, 2013 05:07

I did that in my code.

The only difference is how to store the mesh.

Store the entire domain as one grid system, so that you do not need to treat the interfaces (no ghost cells are needed), and discretize all the governing equations over the entire computational domain.
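
One possible reading of this suggestion, as a toy sketch (my own construction, not hilllike's code): give every cell of every block a single global index and assemble one system over the whole domain, so that interface cells are just neighbours in the global numbering and no ghost cells appear:

Code:

import numpy as np

n = 10                                   # interior cells per block (illustrative)
nblocks = 2
N = nblocks * n                          # total unknowns over the whole domain

def gid(block, i):
    # map block-local index i (0..n-1) to a single global index
    return block * n + i

# assemble a 1D Poisson system -u'' = 1 over the entire multi-block domain
h = 1.0 / (N + 1)
A = np.zeros((N, N))
b = np.ones(N) * h**2
for blk in range(nblocks):
    for i in range(n):
        g = gid(blk, i)
        A[g, g] = 2.0
        if g > 0:
            A[g, g - 1] = -1.0           # at g == n this links block 1's first cell to block 0's last cell
        if g < N - 1:
            A[g, g + 1] = -1.0

u = np.linalg.solve(A, b)                # one solve over the whole domain, no interface treatment

In 1D the global numbering is trivially sequential; with several blocks in 2D/3D, building the block-local to global index map is the part that actually encodes the block connectivity.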

antoine_b December 13, 2013 12:03

OK, thanks to you both. I can see more clearly what I need to do now.

