fvc::grad(alpha) error in parallel run although serial run is OK
November 11, 2015, 16:20 |
fvc::grad(alpha) error in parallel run although serial run is OK
#1 |
Member
Alireza Atrian
Join Date: May 2014
Posts: 39
Rep Power: 11 |
Hello
The following code is part of a turbulent lift model (Zanke lift model) written for Lagrangian particles: Code:
template<class CloudType>
Foam::vector Foam::ZankeLiftForce<CloudType>::n
(
    const label pID
) const
{
    dimensionedScalar Vsmall("Vsmall", dimensionSet(0, -1, 0, 0, 0), 1E-08);

    volVectorField npV = fvc::grad(alphac_);

    return npV[pID];
}

// elsewhere in the force model, n() is used like this:
vector np = n(p.cell());
value.Su() = fD*np*(1.0/2.5*pow(rnd*UrmsByUstar, 2));
Running in parallel aborts with the following MPI error, while the serial run finishes without problems: Code:
[ubuntu:9557] *** An error occurred in MPI_Recv
[ubuntu:9557] *** on communicator MPI_COMM_WORLD
[ubuntu:9557] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:9557] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 9557 on node ubuntu exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in the job did. This can cause a job to hang indefinitely while it waits for all processes to call "init". By rule, if one process calls "init", then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize". By rule, all processes that call "init" MUST call "finalize" prior to exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here).

The error disappears if I replace the line
Code:
volVectorField npV = fvc::grad(alphac_);
with
Code:
volVectorField npV = alphac_*vector::one;
So the main problem comes from fvc::grad, and I do not know the reason. Can anyone tell me why alphac_ cannot be used with fvc::grad here?

Last edited by ali_atrian; November 14, 2015 at 09:03.
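Note for later readers: a likely explanation, not confirmed in this thread, is that fvc::grad(alphac_) evaluates the coupled processor boundaries of the resulting field, which involves MPI communication between the ranks. Each processor holds a different number of parcels, so n() -- and therefore fvc::grad -- is called a different number of times on each rank; the sends and receives stop matching up and MPI reports a truncated message. The usual workaround is to evaluate the gradient once per time step, outside the particle loop, and only index the cached field per parcel. A minimal sketch of that idea, assuming a new autoPtr<volVectorField> member gradAlphaPtr_ in ZankeLiftForce (the names gradAlphaPtr_ and "gradAlphac" are made up here) and the cacheFields() hook provided by the ParticleForce base class: Code:
// Sketch only -- not the code from this thread. Assumes ZankeLiftForce
// gains a member
//     autoPtr<volVectorField> gradAlphaPtr_;
// and that alphac_ is the carrier-phase field already held by the class.

template<class CloudType>
void Foam::ZankeLiftForce<CloudType>::cacheFields(const bool store)
{
    if (store)
    {
        // Evaluate the gradient once per time step on every processor,
        // so the processor-boundary communication stays synchronised.
        gradAlphaPtr_.reset
        (
            new volVectorField("gradAlphac", fvc::grad(alphac_))
        );
    }
    else
    {
        gradAlphaPtr_.clear();
    }
}

template<class CloudType>
Foam::vector Foam::ZankeLiftForce<CloudType>::n(const label pID) const
{
    // Per-parcel access is now a plain cell lookup: no field
    // construction and no MPI traffic inside the particle loop.
    return gradAlphaPtr_()[pID];
}
The standard forces in the lagrangian library follow the same pattern (e.g. PressureGradientForce does its field work in cacheFields() and only looks up or interpolates the cached field per parcel).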
November 13, 2015, 03:56 |
vector np=n(p.cell());
#2 |
Member
Alireza Atrian
Join Date: May 2014
Posts: 39
Rep Power: 11 |
Is there no answer to my problem yet?!
I investigated further: the problem is due to this line in my code: Code:
vector np = n(p.cell());
After the first particle is processed, this error appears: Code:
[ubuntu:5427] *** An error occurred in MPI_Recv
[ubuntu:5427] *** on communicator MPI_COMM_WORLD
[ubuntu:5427] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:5427] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
Code:
mpirun has exited due to process rank 1 with PID 5427 on node ubuntu exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in the job did. This can cause a job to hang indefinitely while it waits for all processes to call "init". By rule, if one process calls "init", then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize". By rule, all processes that call "init" MUST call "finalize" prior to exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here).

I can attach the whole code of my added lift force if necessary.
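One way to check whether the per-parcel call count really differs between processors (a debugging sketch, not from this thread; the counter and the Pout lines are added purely for diagnosis): Code:
// Hypothetical instrumentation of the existing n() function.
template<class CloudType>
Foam::vector Foam::ZankeLiftForce<CloudType>::n(const label pID) const
{
    // Count how often this rank enters n(); Pout prefixes each line with
    // the processor number, so differing counts between ranks are easy
    // to spot in the log.
    static label nCalls = 0;
    ++nCalls;

    Pout<< "ZankeLiftForce::n() call #" << nCalls
        << " for cell " << pID << endl;

    volVectorField npV = fvc::grad(alphac_);

    return npV[pID];
}
If one processor reports more calls than another, every extra call performs an unmatched set of processor-boundary exchanges inside fvc::grad, which would be consistent with the MPI_ERR_TRUNCATE abort above.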
October 5, 2016, 03:34 |
#3 |
New Member
Werner
Join Date: Apr 2014
Posts: 19
Rep Power: 11 |
Hello,
I have exactly the same problem as you. Did you ever find a solution to the MPI particle problem?