chenghui62000
January 5, 2020 18:25
A function that only works in serial runs, not in parallel runs
Hi everyone,
I have run into a strange problem with my OpenFOAM solver: it only appears when I run the solver in parallel.
My solver is based on the original pisoFoam. At each timestep, I call Nettings.updateVelocity(U, mesh) to get the fluid velocities at specific positions (the positions can change at each timestep). You can see the code in my main solver here:
Code:
while (runTime.loop())
{
    Info<< "Time = " << runTime.timeName() << nl << endl;

    #include "CourantNo.H"

    // Pressure-velocity PISO corrector
    {
        #include "UEqn.H"

        // --- PISO loop
        while (piso.correct())
        {
            #include "pEqn.H"
        }
    }

    laminarTransport.correct();
    turbulence->correct();

    //>>>>>>>>>>>>> Below >>>>>>>
    Nettings.updateVelocity(U, mesh);
    //<<<<<<<<<<<<< Above <<<<<<

    runTime.write();

    Info<< "ExecutionTime = " << runTime.elapsedCpuTime() << " s"
        << " ClockTime = " << runTime.elapsedClockTime() << " s"
        << nl << endl;
}

Info<< "End\n" << endl;

return 0;
}
The source code of the function that gets the velocities is given below. It loops through all the cells and returns the velocity in the cell nearest to each point.
Code:
void Foam::netPanel::updateVelocity
(
    const volVectorField& U,
    const fvMesh& mesh
)
{
    const vectorField& centres(mesh.C());
    List<vector> fluidVelocities(structuralElements_memb.size(), vector::zero);

    Info<< "In updateVelocity, number of mesh is " << centres.size() << endl;
    Info<< "In updateVelocity, number of U is " << U.size() << endl;

    scalar maxDistance(1);
    forAll(EPcenter, Elemi)
    {
        maxDistance = 1;
        vector nearestCell(vector::zero);
        scalar loops(0);
        forAll(centres, cellI)  // loop through all the cells
        {
            scalar k1(calcDist(centres[cellI], EPcenter[Elemi]));
            if (k1 < maxDistance)
            {
                maxDistance = k1;
                fluidVelocities[Elemi] = U[cellI];
                nearestCell = centres[cellI];
                loops += 1;
                Info<< "After " << loops << " times of loop, the nearest cell is "
                    << nearestCell << "to point " << EPcenter[Elemi] << "\n"
                    << endl;
            }
        }
    }
    fluidVelocity_memb = fluidVelocities;  // only assigned once
    Info<< "the velocity on nodes are " << fluidVelocity_memb << endl;
}
When I run this solver in serial with the same mesh, it returns the right velocities, but it cannot find the velocities when I run it in parallel. See the log files below:
Code:
serial run
Starting time loop
Time = 0.01
Courant Number mean: 0.000565 max: 0.0904
smoothSolver: Solving for Ux, Initial residual = 1, Final residual = 2.4007e-06, No Iterations 1
smoothSolver: Solving for Uy, Initial residual = 0.891308, Final residual = 1.23902e-06, No Iterations 1
smoothSolver: Solving for Uz, Initial residual = 0.895257, Final residual = 1.31102e-06, No Iterations 1
GAMG: Solving for p, Initial residual = 1, Final residual = 8.40918e-07, No Iterations 35
time step continuity errors : sum local = 9.50237e-10, global = 1.27782e-10, cumulative = 1.27782e-10
smoothSolver: Solving for epsilon, Initial residual = 1, Final residual = 0.00445555, No Iterations 1
smoothSolver: Solving for k, Initial residual = 1, Final residual = 0.004493, No Iterations 1
In updateVelocity, number of mesh is 184320
In updateVelocity, number of U is 184320
After 1 times of loop, the nearest cell is (-0.49375 -0.21875 -0.39375)to point (0 0.05 -0.1)
... ...
After 45 times of loop, the nearest cell is (-0.00625 0.05625 -0.20625)to point (0 0.05 -0.2)
the velocity on nodes are 4((0.226059 -2.8946e-08 -3.59708e-08) (0.226059 3.97379e-08 -4.07855e-08) (0.226059 2.65165e-08 -2.22689e-08) (0.226059 -4.12251e-08 -1.8172e-08))
ExecutionTime = 2.31 s ClockTime = 2 s
Code:
parallel run (mpirun)
Starting time loop
Time = 0.01
Courant Number mean: 0.000565 max: 0.0904
smoothSolver: Solving for Ux, Initial residual = 1, Final residual = 2.4007e-06, No Iterations 1
smoothSolver: Solving for Uy, Initial residual = 0.892185, Final residual = 1.24606e-06, No Iterations 1
smoothSolver: Solving for Uz, Initial residual = 0.895991, Final residual = 1.31577e-06, No Iterations 1
GAMG: Solving for p, Initial residual = 1, Final residual = 9.93486e-07, No Iterations 34
time step continuity errors : sum local = 1.12264e-09, global = -1.18173e-10, cumulative = -1.18173e-10
smoothSolver: Solving for epsilon, Initial residual = 1, Final residual = 0.0107252, No Iterations 1
smoothSolver: Solving for k, Initial residual = 1, Final residual = 0.0106817, No Iterations 1
In updateVelocity, number of mesh is 23177
In updateVelocity, number of U is 23177
the velocity on nodes are 4{(0 0 0)}
ExecutionTime = 0.57 s ClockTime = 1 s
Other information:
In the parallel run, I used the scotch method to decompose the calculation domain into 8 subdomains.
Note that the "updateVelocity" function prints the number of mesh cells at the beginning:
- In the parallel run, it prints "In updateVelocity, number of mesh is 23177". This equals the number of cells on processor0.
- In the serial run, it prints "In updateVelocity, number of mesh is 184320". This equals the total number of cells in the whole calculation domain.
My question is: how can I run this solver in parallel? Because a position might sit in any subdomain, my "updateVelocity" cannot return the correct velocity as written. Do I need to reconstruct all the subdomains at each timestep to achieve this functionality?
Best regards,