
steps required to transform single block structured FV code to multiblock 

December 12, 2013, 11:00 
steps required to transform single block structured FV code to multiblock

#1 
New Member
Join Date: Dec 2013
Posts: 4
Rep Power: 5 
Hi,
Can anyone explain the different steps required to transform/extend a single-block structured finite volume code into a multi-block structured finite volume code? 

December 12, 2013, 11:25 

#2 
Senior Member
cfdnewbie
Join Date: Mar 2010
Posts: 557
Rep Power: 13 
Are you interested in conforming or nonconforming blocking?


December 12, 2013, 12:01 
steps required to transform single block structured FV code to multiblock

#3 
New Member
Join Date: Dec 2013
Posts: 4
Rep Power: 5 
Hi,
I am not sure I completely understand your question, but if I design a mesh, it will conform to/follow the boundary. Do you mean nonconforming in the sense of an immersed boundary, for instance? 

December 12, 2013, 12:31 

#4 
Senior Member
cfdnewbie
Join Date: Mar 2010
Posts: 557
Rep Power: 13 
No, what I meant was: Do the blocks have to be connected in a conforming way? No hanging nodes?


December 12, 2013, 12:59 

#5 
New Member
Join Date: Dec 2013
Posts: 4
Rep Power: 5 
Ah, OK. I want to do conforming blocks at first (no hanging nodes).


December 13, 2013, 05:11 

#6 
Senior Member

I have no practical experience, so you might already be beyond the point I'm going to describe (in that case, sorry for the superfluous post), but this is more or less what you should do/consider:

1) A first big difference comes from the basics of the main solver: is it parallel or serial? I assume it is serial, which simplifies the description, so the point is how to go from single-block (SB) serial to multi-block (MB) serial. I'll come back to the parallel case later (by parallel I mean MPI; shared memory is essentially like serial).

2) The very nice thing about this problem is that the SB solver is almost all you need for the MB case (I assume you have some tool to produce your MB grids). Indeed, what in the SB code are boundary conditions become, in the MB code, interface conditions from the adjacent blocks. Roughly speaking, you handle this by adding ghost cells along the interface boundaries of your blocks and exchanging information with the adjacent blocks during the iterations (I'll come back to this later).

3) So I would summarize the first step like this: you create your different blocks; on every block a SB solver is running; on the interface boundaries between blocks you use as boundary conditions the values grabbed from the adjacent blocks and temporarily stored in ghost cells. However, you should use these values as if the cells of interest (those near an interface boundary) were interior cells, i.e., apply the interior computational stencil there. On real boundaries, of course, you keep using your real boundary conditions.

4) Ghost cells need to be exact replicas of the corresponding cells in the adjacent blocks, and you need as many ghost layers as the interior computational stencil requires.

5) The main question then becomes how you visit the different blocks. In serial (or shared memory) you would probably have a main cycle iterating over the blocks and solving within each block in turn. For a fully explicit scheme this is not specifically problematic; you possibly just have to consider how the order in which you visit the blocks interacts with hyperbolicity (I'm really ignorant here, I'm just guessing). 
For implicit schemes and general elliptic/parabolic problems there is (if I remember correctly) a whole new mathematical problem to consider, which goes under the name of Domain Decomposition techniques (Schwarz preconditioning in this specific case); again, I'm no expert here, but you can read Chapter 6 of Canuto, Hussaini, Quarteroni, Zang: Spectral Methods. Evolution to Complex Geometries and Applications to Fluid Dynamics, Springer, for more information. Basically, as I understand the matter, since you now have to solve an algebraic system, you will need to iterate multiple times among the blocks within each time step, in order to properly exchange the interface information during the inner iterations on the single-block algebraic systems. How to alternate between the iterations among blocks and the inner iterations within each block is the main question here, and I don't have experience with it.

6) In serial (or shared-memory parallel) you don't really need ghost cells: you just need some mask, linked list (or whatever) that tells you how the cells near a block interface are connected. However, the ghost cell method is useful because you can then easily move to the MPI parallel case, at least for a first naive implementation. In this case, your blocks would no longer be on the same machine; you would distribute the blocks (whole blocks, at least one each) among the processors. Everything would work mostly in the same way, the main difference being that the blocks would be running in parallel, and grabbing the interface conditions would involve two main things: knowing which processor has the near-interface cells you need, and using an MPI command to exchange the information. I'm sure there are more efficient/intelligent ways to parallelize everything, but this is certainly easier.

If the original code is already parallel, honestly, I don't know of any simple way to do the job (and, actually, parallelization is usually the last step in code development). The main difficulty would be that everything is inverted: each processor would hold an equivalent part of all the blocks (a 1/Np fraction of the cells from each block, with Np processors), and each serial solver would actually have to work on Nb separate sub-blocks, Nb being the total number of blocks.

This is, more or less, the textbook part of the job. I'm pretty sure other people can give more accurate and useful information on the matter. 
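The idea of iterating among blocks for an elliptic problem, refreshing the interface values between sweeps, can be illustrated with a toy 1-D Laplace problem (everything here is a hypothetical sketch: two blocks, one ghost node each at the shared face, one Gauss-Seidel sweep per block per outer iteration):

```c
#define N   8      /* interior nodes per block (illustrative) */
#define NIT 2000   /* outer iterations among the blocks       */

/* Toy elliptic problem: u'' = 0 on [0,1], u(0)=0, u(1)=1, split
   into two blocks. Each outer iteration first refreshes the ghost
   nodes from the adjacent block, then does one Gauss-Seidel sweep
   per block. The exact solution is u(x) = x, so the last interior
   node of block A converges to N/(2N+1).                          */
double solve_two_blocks(void)
{
    double a[N + 2] = {0.0};   /* a[0]: left BC,  a[N+1]: ghost  */
    double b[N + 2] = {0.0};   /* b[0]: ghost, b[N+1]: right BC  */
    b[N + 1] = 1.0;

    for (int it = 0; it < NIT; ++it) {
        a[N + 1] = b[1];       /* grab interface values ...      */
        b[0]     = a[N];       /* ... from the adjacent block    */
        for (int i = 1; i <= N; ++i)        /* sweep block A     */
            a[i] = 0.5 * (a[i - 1] + a[i + 1]);
        for (int i = 1; i <= N; ++i)        /* sweep block B     */
            b[i] = 0.5 * (b[i - 1] + b[i + 1]);
    }
    return a[N];               /* exact value is N/(2N+1)        */
}
```

A real solver would replace the inner loops with its own single-block implicit solve; the point is only the alternation between block sweeps and interface exchange.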

December 13, 2013, 06:07 

#7 
Member
Ren/Xingyue
Join Date: Jan 2010
Location: Nagoya , Japan
Posts: 44
Rep Power: 9 
I did that in my code.
The only difference is how the mesh is stored: store the entire domain as one grid system, so that you do not need to treat the interfaces (no ghost cells are needed), and discretize all the governing equations over the entire computational domain. 
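As a sketch of this single-flat-storage idea (block count and sizes are made up for illustration), an offset table is enough to map a (block, cell) pair to one global index, so the whole domain can be discretized as a single system:

```c
#define NBLK 3   /* number of blocks (illustrative) */

/* Offset table mapping cell (b, i) of a multi-block grid into one
   flat array: global = offset[b] + i. With the whole domain stored
   and discretized as one system, no ghost cells or special
   interface treatment are needed.                                  */
void build_offsets(const int ncells[NBLK], int offset[NBLK + 1])
{
    offset[0] = 0;
    for (int b = 0; b < NBLK; ++b)
        offset[b + 1] = offset[b] + ncells[b];
}

int global_index(const int offset[NBLK + 1], int b, int i)
{
    return offset[b] + i;
}
```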

December 13, 2013, 13:03 

#8 
New Member
Join Date: Dec 2013
Posts: 4
Rep Power: 5 
OK, thanks to both. I can see more clearly what I need to do now.


Tags 
finite volume, single/multi block, structured 

