CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   running in parallel error (https://www.cfd-online.com/Forums/openfoam-solving/107466-running-parallel-error.html)

saeid.oqaz September 27, 2012 15:47

running in parallel error
 
Hi Foamers,
I am using pimpleDyMFoam with an RBF motion function. When running in parallel, the following error appears:

Code:

Create time

Create dynamic mesh for time = 0

Selecting dynamicFvMesh dynamicMotionSolverFvMesh
Selecting motion solver: RBFMotionSolver
Radial Basis Function interpolation: Selecting RBF function: IMQB
Total points on moving boundaries: 0
Total points on static boundaries: 3925
Selected 0 control points on moving boundaries
Number of internal points: 34922
Reading field p

Reading field U

Reading/calculating face flux field phi

Selecting incompressible transport model Newtonian
Selecting turbulence model type laminar
Reading field rAU if present


Starting time loop

Courant Number mean: 0 max: 0 velocity magnitude: 0
deltaT = 0.012
Time = 0.012

Inverting RBF motion matrix
[0]
[0]
[0] --> FOAM FATAL ERROR:
[0] Singular matrix
[0]
[0]    From function scalarSquareMatrix::LUdecompose(scalarSquareMatrix& matrix, labelList& rowIndices)
[0]    in file matrices/scalarMatrices/scalarSquareMatrix.C at line 94.
[0]
FOAM parallel run exiting
[0]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[2]
[2]
[2] --> FOAM FATAL ERROR:
[2] Singular matrix
[2]
[2]    From function scalarSquareMatrix::LUdecompose(scalarSquareMatrix& matrix, labelList& rowIndices)
[2]    in file matrices/scalarMatrices/scalarSquareMatrix.C at line 94.
[2]
FOAM parallel run exiting
[2]
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 15707 on
node saeid-Inspiron-N4010 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).

decomposeParDict:

Code:

numberOfSubdomains 3;

method          hierarchical;

simpleCoeffs
{
    n              ( 3 1 1 );
    delta          0.001;
}

hierarchicalCoeffs
{
    n              ( 3 1 1 );
    delta          0.001;
    order          xyz;
}

metisCoeffs
{
    processorWeights ( 1 1 1 1 );
}

manualCoeffs
{
    dataFile        "";
}

distributed    no;

roots          ( );


Thanks for helping me.

owayz September 27, 2012 19:52

Your velocity is 0. Maybe that is causing the problem?
Or there may be a problem with the solver you are using in the fvSolution dictionary.
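For illustration, a typical solver block in system/fvSolution looks roughly like the sketch below. This is a minimal sketch only: the GAMG and smoothSolver choices are illustrative assumptions, not the settings from the original case, and which entry actually needs changing depends on the case.

Code:

solvers
{
    p
    {
        // Swapping the pressure solver, e.g. PCG -> GAMG (assumption)
        solver                GAMG;
        smoother              GaussSeidel;
        nCellsInCoarsestLevel 10;
        agglomerator          faceAreaPair;
        mergeLevels           1;
        tolerance             1e-06;
        relTol                0.01;
    }

    U
    {
        // Swapping the velocity solver, e.g. PBiCG -> smoothSolver (assumption)
        solver          smoothSolver;
        smoother        symGaussSeidel;
        tolerance       1e-06;
        relTol          0;
    }
}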

saeid.oqaz September 28, 2012 12:56

Thanks, Awais.

I changed the solver in the fvSolution dictionary and it works fine now. Thanks again.

sunshuai July 8, 2013 23:33

Quote:

Originally Posted by saeid.oqaz (Post 384095)
Thanks, Awais.

I changed the solver in the fvSolution dictionary and it works fine now. Thanks again.

I have the same problem. How did you change the solver in the fvSolution dictionary? Thanks.

didamiamia July 25, 2013 06:52

Quote:

Originally Posted by sunshuai (Post 438574)
I have the same problem. How did you change the solver in the fvSolution dictionary? Thanks.

Hi,

Have you found what to change in fvSolution to resolve the singular matrix problem?

Thanks for your answer.

sunshuai July 25, 2013 12:28

Quote:

Originally Posted by didamiamia (Post 441910)
Hi,

Have you found what to change in fvSolution to resolve the singular matrix problem?

Thanks for your answer.

Sorry, I could not solve this problem, and it still puzzles me.

Collin June 5, 2017 11:29

For anyone who happens to end up here seeking answers:

The singular matrix is caused by one or more processor domains containing no control points, i.e. after decomposition, some processor domains have no cells that touch the moving patch. This can be resolved by changing the decomposition scheme or the subdomain layout so that every processor shares part of the moving boundary (though there are a great many instances where such changes are not readily feasible). A sketch of such a change is given below.
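As a concrete example, here is a hedged sketch of a decomposeParDict along those lines. The choice of method and split direction is an assumption that must be matched to the actual geometry, and the scotch method requires a build with scotch support:

Code:

numberOfSubdomains 3;

// Automatic graph partitioning; often gives more balanced domains
// than a fixed split, but offers no direct control over which
// subdomain touches the moving patch.
method          scotch;

// Alternatively, keep the hierarchical method but split along a
// direction parallel to the moving boundary, so that every slab
// still touches it, e.g. for a patch spanning the full x-extent:
//
// method          hierarchical;
//
// hierarchicalCoeffs
// {
//     n               ( 1 1 3 );
//     delta           0.001;
//     order           xyz;
// }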

