CFD Online Discussion Forums

Thread: Parallel FVM on staggered grid (https://www.cfd-online.com/Forums/main/69784-parallel-fvm-staggered-grid.html)

ertan November 4, 2009 16:48

Parallel FVM on staggered grid
 
Hi, I'm working on parallelizing a 3D FVM code that runs on a structured, (backward) staggered grid. I use domain decomposition and MPI for the parallelization and a fractional-step method for the pressure. I use SIP (Stone's strongly implicit procedure) as the solver for all variables. Data at the subdomain boundaries are exchanged via ghost cells, and I do the exchange at the end of each time step. The code runs and produces results, but they are not the same as the results from the serial code: there is a discrepancy, and the pressure near the subdomain boundaries does not look smooth.

I wonder if I have to do the information exchange at every inner solver iteration? I am working on that right now, but I also have some doubts about the whole picture. I gather that people prefer multigrid methods for solving the pressure equation, but I am not sure whether multigrid is a must for parallelization. I'd like to hear some ideas from experienced people. Any resources are also welcome.
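To make the setup concrete, the exchange I do at the end of each time step is essentially the following (C with MPI; the array name, layout and the 1D decomposition in x are just for illustration, not the actual code):

Code:

/* Minimal sketch of a ghost-plane exchange for one cell-centred variable
 * with a 1D decomposition along x. Array layout and names are
 * illustrative assumptions. phi is stored plane-wise as
 * phi[i*ny*nz + j*nz + k], i = 0 .. nx_loc+1, where i = 0 and
 * i = nx_loc+1 are the ghost planes. */
#include <mpi.h>

void exchange_ghost_planes(double *phi, int nx_loc, int ny, int nz,
                           int left, int right, MPI_Comm comm)
{
    int nyz = ny * nz;   /* number of values in one y-z plane */

    /* send the last interior plane to the right neighbour,
       receive the left ghost plane from the left neighbour */
    MPI_Sendrecv(&phi[nx_loc * nyz], nyz, MPI_DOUBLE, right, 0,
                 &phi[0],            nyz, MPI_DOUBLE, left,  0,
                 comm, MPI_STATUS_IGNORE);

    /* send the first interior plane to the left neighbour,
       receive the right ghost plane from the right neighbour */
    MPI_Sendrecv(&phi[1 * nyz],            nyz, MPI_DOUBLE, left,  1,
                 &phi[(nx_loc + 1) * nyz], nyz, MPI_DOUBLE, right, 1,
                 comm, MPI_STATUS_IGNORE);
}

At the physical boundaries the neighbour rank would be MPI_PROC_NULL, so those transfers become no-ops.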

Thanks

Ertan

quarkz November 5, 2009 09:06

Hi,

Is it a must to use SIP? I suggest you use PETSc or HYPRE to solve the momentum and Poisson equations, respectively. Of course, you'll need to take some time to learn them, but they should be quite fast.

You'll have to update the values at each time step.
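In case it helps, a distributed linear solve through PETSc's KSP interface looks roughly like the sketch below. The matrix assembly and right-hand side are only placeholders (a 1D Laplacian), and the exact API details depend on the PETSc version:

Code:

/* Rough sketch of a parallel linear solve with PETSc's KSP interface.
 * In practice you would assemble the pressure-Poisson coefficients of
 * your own grid instead of this placeholder 1D Laplacian. */
#include <petscksp.h>

int main(int argc, char **argv)
{
    Mat A; Vec x, b; KSP ksp;
    PetscInt i, istart, iend, n = 1000;

    PetscInitialize(&argc, &argv, NULL, NULL);

    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);
    MatGetOwnershipRange(A, &istart, &iend);
    for (i = istart; i < iend; i++) {            /* placeholder 1D Laplacian */
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    VecCreate(PETSC_COMM_WORLD, &b);
    VecSetSizes(b, PETSC_DECIDE, n);
    VecSetFromOptions(b);
    VecDuplicate(b, &x);
    VecSet(b, 1.0);                              /* placeholder right-hand side */

    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);  /* choose solver/preconditioner at run time,
                                e.g. -ksp_type cg -pc_type hypre */
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp); MatDestroy(&A); VecDestroy(&x); VecDestroy(&b);
    PetscFinalize();
    return 0;
}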

ertan November 5, 2009 11:25

Thanks for the reply,

The code I've been working on can also use ICCG (incomplete Cholesky preconditioned conjugate gradient), Gauss-Seidel, CGSTAB, and ADI. It looks pretty much like Peric's codes, which are available on his website. In one of his parallel codes (using PVM, though I don't think that matters), he seems to exchange data between subdomains at each inner iteration. I believe this brings in an extra computational burden. In my master's work I parallelized a finite difference code with PVM, and the boundary transfers were carried out once per time step. I don't understand why that is not working now.
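To spell out what exchanging inside the inner iterations means, here is a toy, self-contained example (a 1D Jacobi-type Poisson solve in C with MPI) with the ghost-value exchange placed inside the inner iteration loop, which is what Peric's parallel code appears to do. The problem size, iteration count and source term are made up for illustration:

Code:

/* Toy illustration only: 1D Poisson problem -u'' = f, u(0) = u(1) = 0,
 * solved with Jacobi sweeps on a domain split across ranks, with the
 * ghost-value exchange INSIDE the inner iteration loop. */
#include <mpi.h>
#include <stdio.h>

#define NLOC  100     /* interior points per rank (assumption) */
#define NITER 2000    /* fixed number of inner iterations (assumption) */

int main(int argc, char **argv)
{
    int rank, size;
    double u[NLOC + 2] = {0.0}, unew[NLOC + 2] = {0.0};
    double h, f = 1.0;   /* constant source term, for illustration */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;
    h = 1.0 / (size * NLOC + 1);

    for (int iter = 0; iter < NITER; iter++) {
        /* ghost exchange inside the inner iteration loop */
        MPI_Sendrecv(&u[NLOC], 1, MPI_DOUBLE, right, 0,
                     &u[0],    1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  1,
                     &u[NLOC + 1], 1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* one Jacobi sweep on the interior points */
        for (int i = 1; i <= NLOC; i++)
            unew[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f);
        for (int i = 1; i <= NLOC; i++)
            u[i] = unew[i];
    }

    if (rank == 0)
        printf("u[1] = %g after %d iterations on %d ranks\n", u[1], NITER, size);

    MPI_Finalize();
    return 0;
}

If the exchange is instead hoisted out of the iteration loop, each subdomain iterates to convergence against stale boundary values, so the parallel result can differ from the serial one near the subdomain interfaces.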

One thing that comes to my mind is that the grid I use is staggered, so my data transfer algorithm might be wrong. Assuming I decompose the domain in the x-direction, at the end of each subdomain I only have 1 ghost cell for the u-momentum cells, whereas there are 3 at the beginning of the subdomains. So I transfer 3 u-values from a given subdomain to the next one, and receive only 1 u-value in return. For the other variables (v, w, and P), since their cells are not staggered in the x-direction, I do a 2-by-2 transfer (2 values each of v, w, and P are sent and received). Does this algorithm make sense to you, or do you have any recommendations?
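Written out as code, the u-transfer I just described would be something like the sketch below. The array name, plane layout and indexing are made up only to make the question concrete; this is the scheme I am unsure about, not a verified one:

Code:

/* Literal sketch of the asymmetric u-exchange described above
 * (1D decomposition in x, backward-staggered u). u is stored plane-wise,
 * i = 0 .. nx_loc+3: planes 0..2 are the 3 ghost planes at the start of
 * the subdomain, planes 3 .. nx_loc+2 are the interior planes, and plane
 * nx_loc+3 is the single ghost plane at the end. nyz = ny*nz values per
 * plane. v, w and p would use a symmetric 2-plane exchange on each side. */
#include <mpi.h>

void exchange_staggered_u(double *u, int nx_loc, int nyz,
                          int left, int right, MPI_Comm comm)
{
    /* send the last 3 interior u-planes to the right neighbour,
       receive the 3 starting ghost planes from the left neighbour */
    MPI_Sendrecv(&u[nx_loc * nyz], 3 * nyz, MPI_DOUBLE, right, 0,
                 &u[0],            3 * nyz, MPI_DOUBLE, left,  0,
                 comm, MPI_STATUS_IGNORE);

    /* send the first interior u-plane to the left neighbour,
       receive the single end ghost plane from the right neighbour */
    MPI_Sendrecv(&u[3 * nyz],            nyz, MPI_DOUBLE, left,  1,
                 &u[(nx_loc + 3) * nyz], nyz, MPI_DOUBLE, right, 1,
                 comm, MPI_STATUS_IGNORE);
}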

Thanks,

Ertan

