CFD Online Discussion Forums

CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM CC Toolkits for Fluid-Structure Interaction (https://www.cfd-online.com/Forums/openfoam-cc-toolkits-fluid-structure-interaction/)
-   -   [solidMechanics] Accelerate Solid Mechanics simulation (elasticPlasticSolidFoam) (https://www.cfd-online.com/Forums/openfoam-cc-toolkits-fluid-structure-interaction/151814-accelerate-solid-mechanics-simulation-elasticplasticsolidfoam.html)

davelgenuine April 17, 2015 10:44

Accelerate Solid Mechanics simulation (elasticPlasticSolidFoam)
 
Hello everybody,

I've been trying to accelerate my OpenFOAM solidMechanics simulation with elasticPlasticSolidFoam (foam-extend-3.1). To do so, I use the OpenFOAM debugging mode and run the simulation in parallel. At first everything ran smoothly, but after I changed one parameter in the DU file (the displacement increment):

from:
Quote:

topBrickDown
{
type solidContact;
master yes;
contactActive yes;
rigidMaster yes;
shadowPatch bottomBrickUp;
interpolationMethod ggi;
projectionAlgo visible;
projectionDir contactSphere;
correctionFrequency 500;
normalContactModel standardPenalty;
standardPenaltyNormalModelDict
to:
Quote:

topBrickDown
{
type solidContact;
master yes;
contactActive yes;
rigidMaster no;
shadowPatch bottomBrickUp;
interpolationMethod ggi;
projectionAlgo visible;
projectionDir contactSphere;
correctionFrequency 500;
normalContactModel standardPenalty;
standardPenaltyNormalModelDict
I got this error:

Quote:

Time: 0.2
--> FOAM Warning :
From function GGIInterpolation<MasterPatch, SlavePatch>::calcAddressing()
in file /home/dave/foam/foam-extend-3.1/src/foam/lnInclude/GGIInterpolationWeights.C at line 424
polygonIntersection is returning a zero surface area between
Master face: 178 and Neighbour face: 863 intersection area = 0
Please check the two quick-check algorithms for GGIInterpolation. Something is missing.
--> FOAM Warning :
From function GGIInterpolation<MasterPatch, SlavePatch>::calcAddressing()
in file /home/dave/foam/foam-extend-3.1/src/foam/lnInclude/GGIInterpolationWeights.C at line 424
polygonIntersection is returning a zero surface area between
Master face: 208 and Neighbour face: 900 intersection area = 0
Please check the two quick-check algorithms for GGIInterpolation. Something is missing.
--> FOAM Warning :
From function GGIInterpolation<MasterPatch, SlavePatch>::calcAddressing()
in file /home/dave/foam/foam-extend-3.1/src/foam/lnInclude/GGIInterpolationWeights.C at line 424
polygonIntersection is returning a zero surface area between
Master face: 178 and Neighbour face: 863 intersection area = 0
Please check the two quick-check algorithms for GGIInterpolation. Something is missing.
--> FOAM Warning :
From function GGIInterpolation<MasterPatch, SlavePatch>::calcAddressing()
in file /home/dave/foam/foam-extend-3.1/src/foam/lnInclude/GGIInterpolationWeights.C at line 424
polygonIntersection is returning a zero surface area between
Master face: 208 and Neighbour face: 900 intersection area = 0
Please check the two quick-check algorithms for GGIInterpolation. Something is missing.
[1]
[1]
[1] --> FOAM FATAL ERROR:
[1] index -1 out of range 0 ... 257
[1]
[1] [0]
[0]
[0] --> FOAM FATAL ERROR:
[0] index -1 out of range 0 ... 257
[0]
[0] From function UList<T>::checkIndex(const label)
[0] in file /home/dave/foam/foam-extend-3.1/src/foam/lnInclude/UListI.H at line 124.
[0]
FOAM parallel run aborting
[0]
From function UList<T>::checkIndex(const label)
[1] in file /home/dave/foam/foam-extend-3.1/src/foam/lnInclude/UListI.H at line 124.
[1]
FOAM parallel run aborting
[1]
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
[2]
[2]
[2] --> FOAM FATAL ERROR:
[2] index -1 out of range 0 ... 257
[2]
[2] From function UList<T>::checkIndex(const label)
[2] in file /home/dave/foam/foam-extend-3.1/src/foam/lnInclude/UListI.H at line 124.
[2]
FOAM parallel run aborting
[2]
[3]
[3]
[3] --> FOAM FATAL ERROR:
[3] index -1 out of range 0 ... 257
[3]
[3] From function UList<T>::checkIndex(const label)
[3] in file /home/dave/foam/foam-extend-3.1/src/foam/lnInclude/UListI.H at line 124.
[3]
FOAM parallel run aborting
[3]
[linux-a84b:30469] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[linux-a84b:30469] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Does anybody have an idea where this error comes from, and why?

I would be very grateful for any help.

David

davelgenuine April 17, 2015 11:02

Accelerate Solid Mechanics simulation (elasticPlasticSolidFoam)
 
1 Attachment(s)
Hello everybody,

I've been trying to accelerate my OpenFOAM solidMechanics simulation with elasticPlasticSolidFoam (foam-extend-3.1). To do so, I use the OpenFOAM debugging mode and run the simulation in parallel.
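For reference, the parallel run uses a standard domain decomposition; a minimal system/decomposeParDict for a 4-way run looks roughly like this (a sketch only — the method and coefficients shown are generic defaults, not necessarily my exact file):

Code:

numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n       (2 2 1);
    delta   0.001;
}

The case is then decomposed with decomposePar and run with mpirun -np 4 elasticPlasticSolidFoam -parallel.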

I got this result per outer iteration (running in parallel on 4 cores of an Intel i7-4790K quad-core processor):

http://www.cfd-online.com/Forums/att...1&d=1429283151

My question is: does anybody know how to accelerate the solidMechanics simulation? If I read the profile correctly, the calculation in calculateDivDSigmaExp.H takes most of the CPU time and has perhaps not been parallelised; if so, does anyone know how to parallelise it?

I would be thankful for any help.

David

bigphil April 28, 2015 00:18

Hi David,

For this warning:
Code:

FOAM Warning :
From function GGIInterpolation<MasterPatch, SlavePatch>::calcAddressing()
in file /home/dave/foam/foam-extend-3.1/src/foam/lnInclude/GGIInterpolationWeights.C at line 424
polygonIntersection is returning a zero surface area between

this normally suggests that something bad has happened with the solidContact boundary conditions. The warning itself comes from the GGI interpolation class which is used to pass the tractions from the slave surface to the master surface.
If you really do want to use the rigidMaster option, then I would recommend trying a lower penaltyScale value and/or more contact under-relaxation; this should help.
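For orientation, those two knobs sit in the standardPenaltyNormalModelDict of the solidContact patch in the DU file; a sketch (keyword names and values here are from memory and may differ slightly between foam-extend versions, so please check them against the solidContact source and tutorials):

Code:

normalContactModel standardPenalty;
standardPenaltyNormalModelDict
{
    penaltyScale      0.5;   // try values below 1 to soften the penalty stiffness
    relaxationFactor  0.02;  // smaller value = stronger under-relaxation of the contact traction
}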


As regards calculateDivSigmaExp.H: yes, the parallelisation of calculateDivSigmaExp.H is handled automagically by OpenFOAM: each processor performs the explicit field operations (fvc:: operations) only for its own cells, and then there is some parallel transfer at the processor-to-processor boundaries.

I suppose it is not very surprising that calculateDivSigmaExp.H takes a lot of time, though I am surprised that the DUEqn.solve() is so small; is this the case when you run more outer iterations?

Best,
Philip

davelgenuine April 30, 2015 10:00

Accelerate Solid Mechanics simulation (elasticPlasticSolidFoam)
 
1 Attachment(s)
Hi Philip,

thank you very much for your answer and suggestion.

I've tried lowering the penalty scale down to 1e-2, but the warning still appears. What do you mean by "more contact under-relaxation"? Which parameters are those, and do they belong in the DU file or the fvSolution file?

I also realised that the warnings also come from the meshes of my geometries. I have two different meshes on two different solid bodies. The top body is like the tip of a cantilever beam; it has a tetrahedral mesh, which I converted into polyhedra with the command "polyDualMesh 85 -overwrite".

The pad at the bottom, which is penetrated by the tip of the beam, has a hexahedral mesh.

If I run checkMesh, the mesh of the top body has a problem with maxSkewness > 4. If I correct this, even with a coarse mesh, checkMesh reports OK because maxSkewness is < 4, but then I get more warnings from the solid contact.

With a finer mesh I get fewer warnings, but still some, once the beam penetrates the pad by more than 20 nanometres (roughly more than 0.2% strain, i.e. beyond the elastic deformation and at the beginning of the plastic deformation).

So, do you perhaps have another solution for the problem above?

Regarding the acceleration and your question:
Quote:

I suppose it is not very surprising that calculateDivSigmaExp.H takes a lot of time, though I am surprised that the DUEqn.solve() is so small; is this the case when you run more outer iterations?
It depends on which outer iteration is being calculated; my guess is that it also depends on the total number of inner iterations. With a longer simulation time and another divSigmaExp method (surface), I also get different results (see the picture I've attached):

http://www.cfd-online.com/Forums/att...1&d=1430401739

Quote:

Simulation info: mesh generated by Salome, run in parallel on 4 cores of an Intel i7-4790K quad-core, divSigmaExp: surface, relaxationFactors: DU=0.95 DepsilonP=0.95, d2dt2Schemes: Euler, ddtSchemes: steadyState
But my question about acceleration remains: is there any way to speed up the calculation of divSigmaExp? Shouldn't divSigmaExp: decompose be faster than surface? Because of the name "decompose", I thought it would perform much better in a parallel run. I've tried it with snGrad(DU) set to "corrected" in the fvSchemes file, but it was slower than the "surface" method...
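For completeness, the relevant part of my fvSolution looks roughly like this (a sketch from memory — the entry names and values may not match my case exactly):

Code:

solidMechanics
{
    nCorrectors     10000;
    DU              1e-06;
    divSigmaExp     surface;   // also tried: standard, decompose
}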

I would be very grateful for your answer and help.

David

CRI_CFD July 13, 2022 13:06

Hi David,


I am facing a similar problem with this warning. Did you finally find a solution to avoid it?


Thanks in advance

