Home > Forums > OpenFOAM Running, Solving & CFD

Error running in parallel


July 22, 2016, 04:39   #1
Error running in parallel
urion (K.H), New Member
Join Date: Feb 2016; Posts: 16
Hi foamers,
I am using DNS for simulation of turbulent channel flow with block-structured meshes. When I run the case in parallel with only 4 processors everything works fine, but as soon as I increase the number of processors with decomposePar I get the error message:



/*---------------------------------------------------------------------------*\
| ========= | |
| \\ / F ield | foam-extend: Open Source CFD |
| \\ / O peration | Version: 3.2 |
| \\ / A nd | Web: http://www.foam-extend.org |
| \\/ M anipulation | For copyright notice see file Copyright |
\*---------------------------------------------------------------------------*/
Build : 3.2-334ba0562a2c
Exec : PFoam -case ./ -parallel
Date : Jul 22 2016
Time : 10:10:34
Host : knoten-13
PID : 7408
CtrlDict : "/home/studenten/stud-konhat/RZZN/Betrieb_Retau_180/Y+04_15/system/controlDict"
Case : /home/studenten/stud-konhat/RZZN/Betrieb_Retau_180/Y+04_15
nProcs : 15
Slaves :
14
(
knoten-13.7409
knoten-13.7410
knoten-13.7411
knoten-13.7412
knoten-13.7413
knoten-13.7414
knoten-13.7415
knoten-13.7416
knoten-13.7417
knoten-13.7418
knoten-13.7419
knoten-13.7420
knoten-13.7421
knoten-13.7422
)

Pstream initialized with:
nProcsSimpleSum : 16
commsType : blocking
SigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0.002

[knoten-13:7419] *** An error occurred in MPI_Bsend
[knoten-13:7419] *** on communicator MPI_COMM_WORLD
[knoten-13:7419] *** MPI_ERR_BUFFER: invalid buffer pointer
[knoten-13:7419] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
--------------------------------------------------------------------------
mpirun has exited due to process rank 11 with PID 7419 on
node knoten-13 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[knoten-13:07407] 2 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[knoten-13:07407] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages


Has anyone had the same trouble?

July 24, 2016, 14:21   #2
T.D., Senior Member
Join Date: Sep 2010; Location: France; Posts: 224
Hi,

What happens if you change the domain decomposition method? Have you tried that?

Regards,

T.D.
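For reference, the decomposition method is selected in system/decomposeParDict. A minimal sketch follows; the subdomain count matches the thread's 15-processor run, but the method and any coefficients are illustrative, and which methods are compiled in depends on the foam-extend build:

```
// system/decomposeParDict -- minimal sketch, illustrative values

numberOfSubdomains  15;

method              metis;   // alternatives: simple, hierarchical, manual
                             // (check which decomposers your build provides)

// Only needed if method is set to simple, e.g.:
// simpleCoeffs
// {
//     n       (5 3 1);     // subdomains in x, y, z (must multiply to 15)
//     delta   0.001;
// }
```

After editing the dictionary, rerun decomposePar on the undecomposed case before relaunching in parallel.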

July 25, 2016, 05:54   #3
urion (K.H), New Member
Join Date: Feb 2016; Posts: 16
Thanks for your reply. I tried changing the domain decomposition method, but the same error still appeared. However, I managed to avoid the error message:

[knoten-13:7419] *** An error occurred in MPI_Bsend
[knoten-13:7419] *** on communicator MPI_COMM_WORLD
[knoten-13:7419] *** MPI_ERR_BUFFER: invalid buffer pointer
[knoten-13:7419] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort

by increasing the MPI buffer size.
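For readers hitting the same MPI_Bsend failure: in OpenFOAM and foam-extend the buffered-send buffer is typically enlarged via the MPI_BUFFER_SIZE environment variable, set before launching mpirun. The value below is illustrative, not the setting actually used in this thread:

```shell
# Enlarge the attached buffer used for MPI_Bsend; Pstream reads
# MPI_BUFFER_SIZE at startup. 200 MB here is an illustrative value.
export MPI_BUFFER_SIZE=200000000
echo "MPI_BUFFER_SIZE=$MPI_BUFFER_SIZE"

# Then relaunch the solver, e.g.:
# mpirun -np 15 PFoam -case ./ -parallel
```

On a cluster, the export must happen in the job script (or be propagated by mpirun) so every rank sees it, not just the login shell.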

July 25, 2016, 06:42   #4
derekm (Derek Mitchell), Senior Member
Join Date: Mar 2014; Location: UK, Reading; Posts: 134
Has this ever worked?
When did it stop working?
What changed between then and now?
Have you tried a simple parallel tutorial case?

