#1
Member
Join Date: Dec 2015
Posts: 74
Rep Power: 11
Hi,
I'm trying to run an analysis with an MRF zone and a mixingPlane interface in parallel, using foam-extend 4.1. However, I'm not getting any speed-up from parallelizing the analysis. I read that for cyclicGGI and mixingPlane interfaces I need to use the patchConstrained decomposition method. My decomposeParDict is (in the case of 4 processors):

Code:
numberOfSubdomains 4;

method patchConstrained;

globalFaceZones
(
    RUCYCLIC1Zone
    RUINLETZone
    RUCYCLIC2Zone
    RUOUTLETZone
    GVOUTLETZone
    DTINLETZone
    DTCYCLIC1Zone
    DTCYCLIC2Zone
);

patchConstrainedCoeffs
{
    method metis;
    numberOfSubdomains 4;

    patchConstraints
    (
        (RUINLET 0)
        (GVOUTLET 0)
        (RUOUTLET 1)
        (DTINLET 1)
    );
}

metisCoeffs
{
    processorWeights ( 1 1 1 1 );
}

distributed no;

// ************************************************************************* //

I run the multi-processor analysis with the command:

Code:
mpirun -np 8 MRFSimpleFoam > log.MRFSimpleFoam_np08

Do you have any advice for speeding up the analysis by decomposing the domain? Am I doing something wrong in the decomposeParDict settings?

Thanks,
WhiteW
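P.S. For completeness, before launching mpirun I decompose the case with decomposePar; a sketch of my usual sequence (the log-file name is just my own convention):

Code:
decomposePar > log.decomposePar 2>&1
ls -d processor*    # with the dictionary above I expect processor0 ... processor3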
#2
Senior Member
Santiago Lopez Castano
Join Date: Nov 2012
Posts: 354
Rep Power: 16
Are you running the solver with the -parallel flag? Without it, mpirun simply starts N independent serial copies of the solver on the undecomposed case, so you will never see any speed-up.
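A minimal sketch of what I mean, assuming your case is already decomposed into the 4 subdomains set in your decomposeParDict:

Code:
mpirun -np 4 MRFSimpleFoam -parallel > log.MRFSimpleFoam_np04 2>&1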
#3
Member
Join Date: Dec 2015
Posts: 74
Rep Power: 11
Hi Santiago,
thanks for your reply. I did not use the -parallel option because it did not work well in my foam-extend installation. Using:

Code:
mpirun -np 2 MRFSimpleFoam -parallel > log.MRFSimpleFoam

the log shows:

Code:
Time = 1

BiCGStab:  Solving for Ux, Initial residual = 1, Final residual = 0.00286063, No Iterations 1
BiCGStab:  Solving for Uy, Initial residual = 1, Final residual = 0.00320029, No Iterations 1
BiCGStab:  Solving for Uz, Initial residual = 1, Final residual = 0.00224373, No Iterations 1
BiCGStab:  Solving for p, Initial residual = 1, Final residual = 0.0484594, No Iterations 25
BiCGStab:  Solving for p, Initial residual = 0.377316, Final residual = 0.0180105, No Iterations 47
time step continuity errors : sum local = 0.2369, global = -0.0946146, cumulative = -0.0946146
Initializing the mixingPlane interpolator between master/shadow patches: GVOUTLET/RUINLET

Using:

Code:
mpirun -np 3 MRFSimpleFoam -parallel > log.MRFSimpleFoam

the run aborts with:

Code:
Create time

Create mesh for time = 0

Initializing the GGI interpolator between master/shadow patches: RUCYCLIC1/RUCYCLIC2
Initializing the GGI interpolator between master/shadow patches: DTCYCLIC1/DTCYCLIC2
Initializing the mixingPlane interpolator between master/shadow patches: RUOUTLET/DTINLET
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------

Using:

Code:
mpirun -np 4 MRFSimpleFoam -parallel > log.MRFSimpleFoam

What could be the cause of the other errors?

Thanks,
WhiteW
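P.S. Two generic sanity checks I still plan to try (plain shell, nothing foam-extend specific): capturing stderr too, since the message explaining the MPI_ABORT probably goes there rather than into the redirected log, and confirming that -np matches the number of processor directories written by decomposePar:

Code:
mpirun -np 3 MRFSimpleFoam -parallel > log.MRFSimpleFoam 2>&1
ls -d processor* | wc -l    # should equal the -np value passed to mpirun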