
fvc::grad(alpha) error in parallel run although the serial run is OK

November 11, 2015, 16:20   #1
fvc::grad(alpha) error in parallel run although the serial run is OK

Alireza Atrian (ali_atrian)
Member
Join Date: May 2014
Posts: 39
Hello,
The following code is part of a turbulent lift model (the Zanke lift model) that I wrote for Lagrangian particles:
Code:
template<class CloudType>
Foam::vector Foam::ZankeLiftForce<CloudType>::n
(
    const label pID
) const
{
    dimensionedScalar Vsmall("Vsmall", dimensionSet(0, -1, 0, 0, 0), 1e-08);  // currently unused here

    // Gradient of the carrier-phase volume fraction over the whole mesh
    volVectorField npV = fvc::grad(alphac_);

    return npV[pID];
}

// Call site, elsewhere in the force calculation:
vector np = n(p.cell());
value.Su() = fD*np*(1.0/2.5*pow(rnd*UrmsByUstar, 2));
It compiles fine, but my solver can use this lift force only in serial, not in parallel (with mpirun). In a parallel run I receive the errors below, even if all the particles are on a single processor:
Code:
[ubuntu:9557] *** An error occurred in MPI_Recv
[ubuntu:9557] *** on communicator MPI_COMM_WORLD
[ubuntu:9557] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:9557] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 9557 on
node ubuntu exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
I examined the problem and found that it arises from this line:
Code:
volVectorField npV = fvc::grad(alphac_);
(whereas, for example,
Code:
volVectorField npV = alphac_*vector::one;
works with no problem.)
So the main problem comes from fvc::grad, and I don't know the reason. Can anyone tell me why alphac_ cannot be used with fvc::grad here?
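For reference, a minimal sketch of one possible workaround, under the assumption that the class gains an autoPtr<volVectorField> member (here called gradAlphacPtr_, a hypothetical name) and caches the gradient once per time step via cacheFields(), as other OpenFOAM particle forces do:
Code:
// Sketch only: gradAlphacPtr_ is an assumed autoPtr<volVectorField> data member.
template<class CloudType>
void Foam::ZankeLiftForce<CloudType>::cacheFields(const bool store)
{
    if (store)
    {
        // fvc::grad exchanges processor-patch data, so evaluate it here,
        // once per time step, where every processor reaches the same call.
        gradAlphacPtr_.reset
        (
            new volVectorField("gradAlphac", fvc::grad(alphac_))
        );
    }
    else
    {
        gradAlphacPtr_.clear();
    }
}

template<class CloudType>
Foam::vector Foam::ZankeLiftForce<CloudType>::n(const label pID) const
{
    // Plain per-cell lookup: no inter-processor communication happens here.
    return gradAlphacPtr_()[pID];
}
With the gradient cached, n() becomes a pure lookup and can safely be called a different number of times on each processor.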

Last edited by ali_atrian; November 14, 2015 at 09:03.

November 13, 2015, 03:56   #2
vector np=n(p.cell());

Alireza Atrian (ali_atrian)
Member
Join Date: May 2014
Posts: 39
Is there no answer to my problem yet?
I investigated further: the problem is due to this line in my code:
Code:
    vector np = n(p.cell());
This line is executed once per particle (17000 times). After the first particle is processed, this error appears:

Code:
[ubuntu:5427] *** An error occurred in MPI_Recv
[ubuntu:5427] *** on communicator MPI_COMM_WORLD
[ubuntu:5427] *** MPI_ERR_TRUNCATE: message truncated
[ubuntu:5427] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
My solver then cannot proceed to the second particle, and I receive this error:

Code:
mpirun has exited due to process rank 1 with PID 5427 on
node ubuntu exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
Does no one have a suggestion?
I can attach the whole code of my added lift force if necessary.
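For the record, a rough illustration of why the per-particle call can break in parallel (the loop below is only a stand-in for the real cloud evolution, not my actual code):
Code:
// Illustration only: particle counts differ between processors.
forAllIter(typename CloudType, this->owner(), iter)
{
    // Each call to n() rebuilds the gradient with fvc::grad, and building
    // that volVectorField evaluates the processor boundary patches, i.e.
    // it posts MPI sends/receives with the neighbouring ranks.
    vector np = n(iter().cell());
}
// If one rank holds 17000 particles and another holds none, the ranks issue
// different numbers of these exchanges, the receives no longer pair up, and
// MPI reports MPI_ERR_TRUNCATE ("message truncated").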

October 5, 2016, 03:34   #3

Werner (Polli)
New Member
Join Date: Apr 2014
Posts: 19
Hello,
I have exactly the same problem as you. Did you ever find a solution to the MPI particle problem?