parallel run in foam extend

#1 | Richardpluff (CRI_CFD), Member | October 29, 2021, 12:45

Dear foamers,

I have been trying to use foam-extend for parallel meshing and execution of an elasticSolidFoam case. Neither snappyHexMesh nor the solver works in parallel when I run:

mpirun -np 10 elasticSolidFoam -parallel
or
mpirun -np 10 snappyHexMesh -parallel

The case seems to be decomposed properly (or at least it appears to be) using the hierarchical method:




Processor 0: field transfer
Processor 1: field transfer
Processor 2: field transfer
Processor 3: field transfer
Processor 4: field transfer
Processor 5: field transfer
Processor 6: field transfer
Processor 7: field transfer
Processor 8: field transfer
Processor 9: field transfer

End.
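
For reference, the decomposeParDict follows the usual pattern for the hierarchical method; a minimal sketch for 10 processors (the n, delta and order values below are illustrative only, not necessarily my exact settings):

numberOfSubdomains 10;

method hierarchical;

hierarchicalCoeffs
{
    n       (5 2 1);
    delta   0.001;
    order   xyz;
}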


Then if I run:


mpirun -np 10 snappyHexMesh -parallel


I get this error:


......

Removing mesh beyond surface intersections
------------------------------------------

Found point (14.0168 -11.5948 -13.0597) in cell -1 in global region 1 out of 16 regions.
Keeping all cells in region 1 containing point (14.0168 -11.5948 -13.0597)
Selected for keeping : 3282749 cells.
Edge intersection testing:
Number of edges : 11235884
Number of edges to retest : 1111636
Number of intersected edges : 1323711

Shell refinement iteration 0
----------------------------

Marked for refinement due to refinement shells : 0 cells.
Determined cells to refine in = 5.08 s
Selected for internal refinement : 9371 cells (out of 3282749)
[6] Number of cells in new mesh : 50732
[6] Number of faces in new mesh : 180559
[6] Number of points in new mesh: 79946
[8] Number of cells in new mesh : 61503
[8] Number of faces in new mesh : 219215
[8] Number of points in new mesh: 96625
[7] Number of cells in new mesh : 52876
[7] Number of faces in new mesh : 189245
[7] Number of points in new mesh: 84732
[5] Number of cells in new mesh : 58226
[5] Number of faces in new mesh : 204895
[5] Number of points in new mesh: 89917
[2] Number of cells in new mesh : 9937
[2] Number of faces in new mesh : 44155
[2] Number of points in new mesh: 26823
[4] Number of cells in new mesh : 69813
[4] Number of faces in new mesh : 245692
[4] Number of points in new mesh: 107080
[3] Number of cells in new mesh : 60893
[3] Number of faces in new mesh : 218427
[3] Number of points in new mesh: 97886
[1] Number of cells in new mesh : 66549
[1] Number of faces in new mesh : 241097
[1] Number of points in new mesh: 109868
[2] Number of cells in new mesh : 29712
[2] Number of faces in new mesh : 117257
[2] Number of points in new mesh: 60315
[1]
[1]
[1] --> FOAM FATAL ERROR:
[1] Problem. oldPointI:460933 newPointI:-1
[1]
[1] From function fvMeshDistribute::mergeSharedPoints()
[1] in file fvMeshDistribute/fvMeshDistribute.C at line 613.
[1]
FOAM parallel run aborting
[1]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.


If I instead create the mesh in serial, then run decomposePar and then mpirun -np 10 elasticSolidFoam -parallel, I get:


Pstream initialized with:
nProcsSimpleSum : 0
commsType : nonBlocking
polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0


Reading g
Reading field U

Patch x Traction boundary field: U
Patch y Traction boundary field: U
Selecting rheology model linearElastic
Creating constitutive model
Force-displacement for patch mandible will be written to forceDisp.dat
Selecting divSigmaExp calculation method surface

Starting time loop

Time = 1


Predicting U, gradU and snGradU based on V,gradV and snGradV

DICPCG: Solving for Ux, Initial residual = 1, Final residual = 0.0994173, No Iterations 177
DICPCG: Solving for Uy, Initial residual = 1, Final residual = 0.0988017, No Iterations 172
DICPCG: Solving for Uz, Initial residual = 1, Final residual = 0.097128, No Iterations 205
Time 1, Corrector 0, Solving for U using fvMatrix<Type>::solve, res = 1, rel res = 1, aitken = 0.1, inner iters = 0
DICPCG: Solving for Ux, Initial residual = 0.0725996, Final residual = 0.00720234, No Iterations 68
DICPCG: Solving for Uy, Initial residual = 0.101163, Final residual = 0.00998458, No Iterations 51
DICPCG: Solving for Uz, Initial residual = 0.0198838, Final residual = 0.00192552, No Iterations 28
DICPCG: Solving for Ux, Initial residual = 0.0454544, Final residual = 0.00444054, No Iterations 62
.....


DICPCG: Solving for Uz, Initial residual = 0.000234447, Final residual = 2.34371e-05, No Iterations 42

Time 1, Solving for U, Initial residual = 1, Final residual = 0.000979104, Relative residual = 0.00738358, No outer iterations 48
ExecutionTime = 68.36 s ClockTime = 69 s
Found patch mandible, writing y force and displacement to file
ExecutionTime = 68.41 s ClockTime = 69 s

End

[isma-Super-Server:10376] *** Process received signal ***
[isma-Super-Server:10376] Signal: Segmentation fault (11)
[isma-Super-Server:10376] Signal code: (-6)
[isma-Super-Server:10376] Failing at address: 0x3e800002888
[isma-Super-Server:10377] *** Process received signal ***
[isma-Super-Server:10377] Signal: Segmentation fault (11)
[isma-Super-Server:10377] Signal code: (-6)
[isma-Super-Server:10377] Failing at address: 0x3e800002889
[isma-Super-Server:10378] *** Process received signal ***
[isma-Super-Server:10378] Signal: Segmentation fault (11)
[isma-Super-Server:10378] Signal code: (-6)
[isma-Super-Server:10378] Failing at address: 0x3e80000288a
[isma-Super-Server:10379] *** Process received signal ***
[isma-Super-Server:10379] Signal: Segmentation fault (11)
[isma-Super-Server:10379] Signal code: (-6)
[isma-Super-Server:10379] Failing at address: 0x3e80000288b
[isma-Super-Server:10380] *** Process received signal ***
[isma-Super-Server:10380] Signal: Segmentation fault (11)
[isma-Super-Server:10380] Signal code: (-6)
[isma-Super-Server:10380] Failing at address: 0x3e80000288c
[isma-Super-Server:10381] *** Process received signal ***
[isma-Super-Server:10381] Signal: Segmentation fault (11)
[isma-Super-Server:10381] Signal code: (-6)
[isma-Super-Server:10381] Failing at address: 0x3e80000288d
[isma-Super-Server:10382] *** Process received signal ***
[isma-Super-Server:10382] Signal: Segmentation fault (11)
[isma-Super-Server:10382] Signal code: (-6)
[isma-Super-Server:10382] Failing at address: 0x3e80000288e
[isma-Super-Server:10383] *** Process received signal ***
[isma-Super-Server:10383] Signal: Segmentation fault (11)
[isma-Super-Server:10383] Signal code: (-6)
[isma-Super-Server:10383] Failing at address: 0x3e80000288f
[isma-Super-Server:10384] *** Process received signal ***
[isma-Super-Server:10384] Signal: Segmentation fault (11)
[isma-Super-Server:10384] Signal code: (-6)
[isma-Super-Server:10384] Failing at address: 0x3e800002890
[isma-Super-Server:10375] *** Process received signal ***
[isma-Super-Server:10375] Signal: Segmentation fault (11)
[isma-Super-Server:10375] Signal code: (-6)
[isma-Super-Server:10375] Failing at address: 0x3e800002887
--------------------------------------------------------------------------
mpirun noticed that process rank 4 with PID 10379 on node isma-Super-Server exited on signal 11 (Segmentation fault).
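
For completeness, the sequence I am running in this serial-meshing case is roughly the following (the -overwrite flag is simply how I keep the snapped mesh in constant/polyMesh; adjust as needed):

blockMesh
snappyHexMesh -overwrite
decomposePar
mpirun -np 10 elasticSolidFoam -parallel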

Why these faults? I am very much a beginner with foam-extend, so any help will be really appreciated.

#2 | Richardpluff (CRI_CFD), Member | November 2, 2021, 11:53

By the way, I realised that the meshing error also occurs in the foam-extend 4.1 release, at least in my installation. I am sure I am running two different releases and getting the same error message...

Ideas?

#3 | Richardpluff (CRI_CFD), Member | November 4, 2021, 11:59

It is worth mentioning that using cfMesh (foam-extend 4.1, Ubuntu 20.04) in parallel fails as well:


Create time

Setting root cube size and refinement parameters
Root box (-32.8483 -62.2672 -70.6166) (90.0317 60.6128 52.2634)
Requested cell size corresponds to octree level 11
Requested boundary cell size corresponds to octree level 12
Refining boundary
Refining boundary boxes to the given size
Number of leaves per processor 1
Distributing leaves to processors
Finished distributing leaves to processors
Distributing load between processors
Finished distributing load between processors
Distributing load between processors
Finished distributing load between processors
Distributing load between processors
Finished distributing load between processors
Distributing load between processors
Finished distributing load between processors
Distributing load between processors
Finished distributing load between processors
Distributing load between processors
Finished distributing load between processors
Distributing load between processors
Finished distributing load between processors
Distributing load between processors
Finished distributing load between processors
Distributing load between processors
Finished distributing load between processors
Distributing load between processors
[isma-Super-Server:108623] *** An error occurred in MPI_Bsend
[isma-Super-Server:108623] *** reported by process [2400190465,4]
[isma-Super-Server:108623] *** on communicator MPI_COMM_WORLD
[isma-Super-Server:108623] *** MPI_ERR_BUFFER: invalid buffer pointer
[isma-Super-Server:108623] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[isma-Super-Server:108623] *** and potentially your MPI job)
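
In case it is relevant: as far as I understand, foam-extend attaches the buffer used for MPI_Bsend based on the MPI_BUFFER_SIZE environment variable exported in etc/bashrc, so one thing I intend to try is increasing it before launching. This is only a guess at a workaround, not a confirmed fix (cartesianMesh is the cfMesh utility I am running):

export MPI_BUFFER_SIZE=200000000
mpirun -np 10 cartesianMesh -parallel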


Any help will be really appreciated

#4 | Richardpluff (CRI_CFD), Member | December 10, 2021, 13:54

The same settings work properly with OpenFOAM 9 (OF9):


mpirun -np x snappyHexMesh -parallel



I was considering jumping between releases using an alias, but the mesh files generated with snappyHexMesh in OF9 are slightly different from the foam-extend ones (for example, the version entry is not included in the FoamFile headers, so foam-extend raises an error).
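
For anyone comparing the two, the difference I mean is in the FoamFile header of the polyMesh files; foam-extend expects a version entry there, roughly like this (sketch of a boundary-file header, based on what I see in my files):

FoamFile
{
    version     2.0;   // this entry is missing from the OF9-generated files
    format      ascii;
    class       polyBoundaryMesh;
    object      boundary;
}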



Does anyone know what is wrong with the MPI implementation of foam-extend? Am I the only one experiencing this issue?


Thanks, and sorry for the bother.

#5 | Yann Scott, New Member | December 13, 2023, 06:14

Same error using foam-extend-4.1. Any updates on how to run cases in parallel with this version?

#6 | Milad Bagheri (MiladBagheri), New Member | March 28, 2024, 08:36

This is an old thread, but in case anyone is looking for a solution: increase the number of cells in blockMeshDict; if that does not work, change your decomposition method to parMetis; and if it still does not work, try a different decomposition method or different subdivisions.
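
For example, switching the method in system/decomposeParDict would look something like this (numberOfSubdomains is just an example value here, and parMetis assumes your foam-extend build includes the ParMetis decomposition library):

numberOfSubdomains 10;

method parMetis;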
