CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Programming & Development (http://www.cfd-online.com/Forums/openfoam-programming-development/)
-   -   multiphase solver - parallel processing - GAMG (http://www.cfd-online.com/Forums/openfoam-programming-development/93036-multiphase-solver-parallel-processing-gamg.html)

thibault_pringuey October 3, 2011 11:03

multiphase solver - parallel processing - GAMG
 
Hello all,

I have developed a halo-based method for parallel processing in OpenFOAM, so that a high-order scheme works properly in parallel. It runs successfully on many test cases in parallel.

As I am interested in multiphase flows, I have developed a solver that transports the liquid volume fraction with my high-order scheme. To do that, I started from the interFoam routine and simply implemented my method for the advection of the liquid in place of the existing one, keeping the rest of the code as it is.
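Schematically, the swap amounts to the following (highOrderAlphaFlux() is a placeholder name for the high-order routine, not an OpenFOAM function):

// Sketch: replace interFoam's alpha1 advection with a high-order
// flux, whose wide stencils need halo data near processor boundaries.
surfaceScalarField phiAlpha = highOrderAlphaFlux(phi, alpha1);
solve(fvm::ddt(alpha1) + fvc::div(phiAlpha));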

The code works perfectly well in serial, but the GAMG solver crashes at the first iteration in parallel (with the halo); see the error message below. (I have also tried the PCG solver: it crashes after a couple of iterations.) However, the code runs in parallel if, in the "pEqn.H" file, I comment out the solution of the pdEqn equation, i.e. if the portion of code reads:
if (corr == nCorr-1 && nonOrth == nNonOrthCorr)
{
    // pdEqn.solve(mesh.solver(pd.name() + "Final"));
}
else
{
    // pdEqn.solve(mesh.solver(pd.name()));
}

instead of:
if (corr == nCorr-1 && nonOrth == nNonOrthCorr)
{
    pdEqn.solve(mesh.solver(pd.name() + "Final"));
}
else
{
    pdEqn.solve(mesh.solver(pd.name()));
}

This is consistent with the error message.

In the code, I am updating the halo cells the following way: I transfer the variable U after solving the UEqn, and pd after solving the pdEqn. I am also updating the boundary conditions with U.correctBoundaryConditions() and pd.correctBoundaryConditions().
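Schematically, the sequence is as follows (updateHalo() is a placeholder name for my transfer routine, not an OpenFOAM function):

UEqn.solve();                           // momentum predictor (schematic)
updateHalo(U);                          // transfer U into the halo cells
U.correctBoundaryConditions();

// ... later, inside the pressure corrector ...
pdEqn.solve(mesh.solver(pd.name()));
updateHalo(pd);                         // transfer pd into the halo cells
pd.correctBoundaryConditions();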

I have set the artificial boundary condition of the halo cells the following way: I use a wall patch with a zeroGradient boundary condition. I have also tried fixedValue and inletOutlet boundary conditions.
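For example, the corresponding entry in the field files reads (the patch name halo is illustrative):

halo
{
    type            zeroGradient;
}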

Since it does not work, this could mean that I am not updating the halo cells at the right time with the right data, and/or that the artificial boundary of the halo cells is not set appropriately.

Does anyone know if I am on the right track?

Also, I would like to know which datasets are transferred between the processors, and when, in the interFoam solver.

Many thanks in advance for your help.

With best wishes,


Thibault


[1] #0 Foam::error::printStack(Foam::Ostream&) in "/home/tp299/OpenFOAM/OpenFOAM-1.5.x/lib/linux64Gcc42DPOpt/libOpenFOAM.so"
[1] #1 Foam::sigFpe::sigFpeHandler(int) in "/home/tp299/OpenFOAM/OpenFOAM-1.5.x/lib/linux64Gcc42DPOpt/libOpenFOAM.so"
[1] #2 ?? in "/lib64/libc.so.6"
[1] #3 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double> const&, Foam::Field<double> const&) const in "/home/tp299/OpenFOAM/OpenFOAM-1.5.x/lib/linux64Gcc42DPOpt/libOpenFOAM.so"
[1] #4 Foam::GAMGSolver::scalingFactor(Foam::Field<double>&, Foam::lduMatrix const&, Foam::Field<double>&, Foam::FieldField<Foam::Field, double> const&, Foam::UPtrList<Foam::lduInterfaceField const> const&, Foam::Field<double> const&, unsigned char) const in "/home/tp299/OpenFOAM/OpenFOAM-1.5.x/lib/linux64Gcc42DPOpt/libOpenFOAM.so"
[1] #5 Foam::GAMGSolver::Vcycle(Foam::PtrList<Foam::lduMatrix::smoother> const&, Foam::Field<double>&, Foam::Field<double> const&, Foam::Field<double>&, Foam::Field<double>&, Foam::Field<double>&, Foam::PtrList<Foam::Field<double> >&, Foam::PtrList<Foam::Field<double> >&, unsigned char) const in "/home/tp299/OpenFOAM/OpenFOAM-1.5.x/lib/linux64Gcc42DPOpt/libOpenFOAM.so"
[1] #6 Foam::GAMGSolver::solve(Foam::Field<double>&, Foam::Field<double> const&, unsigned char) const in "/home/tp299/OpenFOAM/OpenFOAM-1.5.x/lib/linux64Gcc42DPOpt/libOpenFOAM.so"
[1] #7 Foam::fvMatrix<double>::solve(Foam::Istream&) in "/home/tp299/OpenFOAM/OpenFOAM-1.5.x/lib/linux64Gcc42DPOpt/libfiniteVolume.so"
[1] #8 main in "/home/tp299/OpenFOAM/tp299-1.5.x/applications/bin/linux64Gcc42DPOpt/TCLSFoam"
[1] #9 __libc_start_main in "/lib64/libc.so.6"
[1] #10 Foam::regIOobject::readIfModified() in "/home/tp299/OpenFOAM/tp299-1.5.x/applications/bin/linux64Gcc42DPOpt/TCLSFoam"

thibault_pringuey November 2, 2011 14:50

I have managed to code a solution to the above issue. I solve the transport of the liquid phase explicitly with a high-order scheme that needs halos to work properly. The rest of the calculation is similar to interFoam and uses the GAMG solver.

The idea is to calculate and write the geometry-dependent variables at pre-processing on the extended sub-domains (0-halo sub-domains + halos), and then to calculate the evolution of the flow field at runtime on the 0-halo decomposed domains. This solution therefore involves two separate calculations on two different sets of sub-domains: a pre-processing calculation on the n-halo sub-domains and a runtime calculation on the 0-halo sub-domains.
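As a minimal sketch of the pre-processing output (the names haloOwnerProc and ownerProcOfHaloCell are hypothetical; the real lists depend on how the halos are built), the halo addressing can be written as an IOList that the runtime solver reads back:

// Sketch (hypothetical names): write the halo addressing computed on
// the n-halo decomposition so the 0-halo runtime solver can read it.
labelIOList haloOwnerProc
(
    IOobject
    (
        "haloOwnerProc",
        runTime.constant(),
        mesh,
        IOobject::NO_READ,
        IOobject::NO_WRITE
    ),
    ownerProcOfHaloCell
);
haloOwnerProc.write();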

In order to use the high-order scheme properly:
1. I rewrite the geometry-dependent variables for the new 0-halo sub-domains using the variables calculated at pre-processing on the n-halo sub-domains.
2. At runtime, I use a list of scalars (size = number of cells in the n-halo sub-domain) to represent the scalar field of the liquid fraction on a "virtual" extended domain. Because the runtime calculation is done on the 0-halo sub-domains, this "virtual" field needs to be reconstructed at each time step (each sub-time step if you are using RK) through processor data exchange, as sketched after this list.
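A minimal sketch of that reconstruction, assuming hypothetical addressing lists haloCellID and haloOwnerProc (e.g. read back from the pre-processing output) and cell counts nCellsLocal and nHaloCells:

// Sketch: rebuild the "virtual" extended liquid-fraction list
// alphaExt from the 0-halo field alpha1 at every (sub-)time step.

// Local values fill the first nCellsLocal entries of the extended list.
scalarList alphaExt(nCellsLocal + nHaloCells, 0.0);
forAll(alpha1, cellI)
{
    alphaExt[cellI] = alpha1[cellI];
}

// Make every processor's internal field visible to all processors.
// (Gather/scatter of the full fields is the simplest correct pattern;
// point-to-point exchange of just the halo values would be cheaper.)
List<scalarList> allAlpha(Pstream::nProcs());
allAlpha[Pstream::myProcNo()] = alpha1.internalField();
Pstream::gatherList(allAlpha);
Pstream::scatterList(allAlpha);

// Fill the halo entries from the owning processors' data.
forAll(haloCellID, haloI)
{
    alphaExt[nCellsLocal + haloI] =
        allAlpha[haloOwnerProc[haloI]][haloCellID[haloI]];
}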

Although it seems cumbersome, it is actually not that bad: the sequence of pre-processing steps can be scripted, and the runtime calculation is fast since the code performs the minimum number of operations required to run properly.

ehsan August 27, 2013 22:03

interPhaseChangeFoam in parallel
 
Hello

We are running the standard interPhaseChangeFoam in parallel on 24 nodes. The run starts fine and goes on for some time, but then one of our systems stops contributing to the solution process and the run stops. Could you please help us in this regard?

How could we change the solver to run the case in parallel?

Sincerely,
Ehsan

