[snappyHexMesh] reconstructPar after running snappyHexMesh in parallel
October 17, 2020, 08:53 | #1
reconstructPar after running snappyHexMesh in parallel

New Member
David
Join Date: Oct 2020
Posts: 21
Rep Power: 6
Hi, I would like to reconstruct my case after running snappyHexMesh in parallel, but I have not been able to. Whenever I try, I get the following error message:
Code:
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

--> FOAM Warning :
    From function int main(int, char**)
    in file reconstructPar.C at line 256

Code:
[5]
[5] --> FOAM FATAL IO ERROR:
[5] Cannot find patchField entry for heating
[5]
[5] file: /cluster/scratch/kaeserd/07_againg_new_coordinates_infolow/processor5/0/T.boundaryField
[5]
[5]     From function void Foam::GeometricField<Type, PatchField, GeoMesh>::Boundary::readField(const Foam::DimensionedField<TypeR, GeoMesh>&, const Foam::dictionary&) [with Type = double; PatchField = Foam::fvPatchField; GeoMesh = Foam::volMesh]
[5]     in file /dev/shm/spackapps/spack-stage/spack-stage-7EXqC3/OpenFOAM-v1806/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 191.
[5]
FOAM parallel run exiting

The boundaryField in processor0/0/T looks like this:

Code:
boundaryField
{
    inlet
    {
        type            fixedValue;
        value           uniform 298.15;
    }
    outlet
    {
        type            zeroGradient;
    }
    ground
    {
        type            fixedValue;
        value           uniform 333.15;
    }
    frontAndBack
    {
        type            zeroGradient;
    }
    procBoundary0to1
    {
        type            processor;
        value           uniform 298.15;
    }
    procBoundary0to3
    {
        type            processor;
        value           uniform 298.15;
    }
    procBoundary0to12
    {
        type            processor;
        value           uniform 298.15;
    }
    procBoundary0to15
    {
        type            processor;
        value           uniform 298.15;
    }
}

This is my snappyHexMeshDict:

Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.4.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      snappyHexMeshDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

#includeEtc "caseDicts/mesh/generation/snappyHexMeshDict.cfg"

castellatedMesh on;
snap            on;
addLayers       off;

geometry
{
    heating.stl
    {
        type triSurfaceMesh;
        scale 0.001;
        name heating;
    }
    buildings.stl
    {
        type triSurfaceMesh;
        scale 0.001;
        name buildings;
    }
};

castellatedMeshControls
{
    // nCellsBetweenLevels 3;

    features
    (
        {
            file "heating.eMesh";
            scale 0.001;
            level 3;
        }
        {
            file "buildings.eMesh";
            scale 0.001;
            level 3;
        }
    );

    refinementSurfaces
    {
        buildings
        {
            level (2 3);
            patchInfo
            {
                type wall;
            }
        }
        heating
        {
            level (2 3);
            patchInfo
            {
                type wall;
            }
        }
    }

    refinementRegions
    {
        heating
        {
            mode distance;
            levels ((0.077 2));
        }
        buildings
        {
            mode distance;
            levels ((0.077 2));
        }
    }

    locationInMesh (2 1 1);
    resolveFeatureAngle 60;
    allowFreeStandingZoneFaces true;
}

snapControls
{
    //- Number of patch smoothing iterations before finding correspondence
    //  to surface
    nSmoothPatch 3;

    //- Relative distance for points to be attracted by surface feature point
    //  or edge. True distance is this factor times local maximum edge length.
    tolerance 2.0;

    //- Number of mesh displacement relaxation iterations.
    nSolveIter 30;

    //- Maximum number of snapping relaxation iterations. Should stop
    //  before upon reaching a correct mesh.
    nRelaxIter 5;

    // Feature snapping

    //- Number of feature edge snapping iterations.
    //  Leave out altogether to disable.
    nFeatureSnapIter 10;

    //- Detect (geometric) features by sampling the surface (default=false)
    implicitFeatureSnap true;

    //- Use castellatedMeshControls::features (default = true)
    explicitFeatureSnap false;
}

addLayersControls
{
    layers
    {
        "CAD.*"
        {
            nSurfaceLayers 2;
        }
    }
    relativeSizes true;
    expansionRatio 1.2;
    finalLayerThickness 0.5;
    minThickness 1e-3;
}

meshQualityControls
{
    maxConcave 70;
    minTetQuality 1E-12;
    maxInternalSkewness 5;
    maxBoundarySkewness 25;
}

writeFlags
(
    scalarLevels
    layerSets
    layerFields
);

mergeTolerance 1e-6;

// ************************************************************************* //

And this is my decomposeParDict:

Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.4.0                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains 48;

method hierarchical;
// method ptscotch;

simpleCoeffs
{
    n (4 1 1);
    delta 0.001;
}

hierarchicalCoeffs
{
    n (3 4 4);
    delta 0.001;
    order xyz;
}

manualCoeffs
{
    dataFile "cellDecomposition";
}

// ************************************************************************* //

My workflow is:

Code:
blockMesh
surfaceFeatureExtract
decomposePar
mpirun -np 48 snappyHexMesh -overwrite -parallel
(I would like to reconstruct here)
mpirun -n 48 buoyantBoussinesqSimpleFoam -parallel
(and here)

Until now I have run snappyHexMesh in serial, and in that case it worked. Thanks a lot in advance, and sorry if my question is missing some important information.

Best regards,
fidu13
October 18, 2020, 14:10 | #2

Senior Member
Zander Meiring
Join Date: Jul 2018
Posts: 125
Rep Power: 8
Have you tried using the scotch method for decomposition? It could be that your structured decomposition breaks some of the boundaries in a weird way.
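For reference, switching methods is a one-line change in system/decomposeParDict. A minimal sketch, keeping the 48 subdomains from your post (scotch needs no coefficients sub-dictionary):

Code:
numberOfSubdomains 48;

method scotch;
// method ptscotch;     // distributed variant, for decomposing in parallel
// method hierarchical;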
October 18, 2020, 15:11 | #3

New Member
David
Join Date: Oct 2020
Posts: 21
Rep Power: 6
Thanks a lot for your reply! Not yet, but I will try it as soon as possible. However, what I meant as weird are the following boundary conditions:
Code:
procBoundary0to1
{
    type            processor;
    value           uniform 298.15;
}

Thanks again, and sorry for the confusion.
October 18, 2020, 19:33 | #4

Member
Hasan Celik
Join Date: Sep 2016
Posts: 64
Rep Power: 10
I don't fully understand the problem, and I am sorry if you have already tried this and it failed, but have you tried reconstructParMesh after the parallel run of snappyHexMesh? For instance, something like this:
Code:
reconstructParMesh -latestTime -mergeTol 1E-06 -noZero
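(For reference: -latestTime reconstructs only the most recent time directory, -noZero additionally skips the 0 directory, and -mergeTol sets the relative tolerance used for merging points when stitching the processor meshes back together.)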
October 18, 2020, 22:23 | #5

Senior Member
Join Date: Aug 2013
Posts: 407
Rep Power: 16
Hi,
Hasan is right. If you have meshed in parallel, then you must use reconstructParMesh; reconstructPar alone will not reconstruct the mesh.

So coming back to your workflow: run reconstructParMesh after snappyHexMesh. Once you have run your solver, you can then run reconstructPar.

In short: reconstructParMesh after snappyHexMesh, reconstructPar after the solver.

Hope this helps.

Cheers,
Antimony
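P.S. Spelled out against the commands from your first post, the whole sequence would look something like this (a sketch: the two reconstruct steps are the only additions, and the -mergeTol value is taken from Hasan's suggestion):

Code:
blockMesh
surfaceFeatureExtract
decomposePar
mpirun -np 48 snappyHexMesh -overwrite -parallel
reconstructParMesh -mergeTol 1E-06                     # reconstructs the mesh
mpirun -np 48 buoyantBoussinesqSimpleFoam -parallel
reconstructPar                                         # reconstructs the solution fields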
October 19, 2020, 05:24 | #6

New Member
David
Join Date: Oct 2020
Posts: 21
Rep Power: 6
Hi, and thanks a lot for your help. I did manage to run snappyHexMesh and reconstructParMesh. However, if I now try to run the solver, I still get the following error message from each processor:
Code:
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

[eu-g1-029-4:29347] 47 more processes have sent help message help-mpi-btl-openib.txt / error in device init
[eu-g1-029-4:29347] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

SIMPLE: convergence criteria
    field p_rgh     tolerance 0.0001
    field Ux        tolerance 0.0001
    field Uy        tolerance 0.0001
    field Uz        tolerance 0.0001
    field T         tolerance 0.0001
    field "(k|epsilon|omega)"       tolerance 0.0001

Reading thermophysical properties

Reading field T

[3]
[3]
[3] --> FOAM FATAL IO ERROR:
[3] Cannot find patchField entry for heating
[3]
[3] file: /cluster/scratch/kaeserd/parallel/1/new_case/05_new_coordinates_inflow/processor3/0/T.boundaryField
[3]
[3]     From function void Foam::GeometricField<Type, PatchField, GeoMesh>::Boundary::readField(const Foam::DimensionedField<TypeR, GeoMesh>&, const Foam::dictionary&) [with Type = double; PatchField = Foam::fvPatchField; GeoMesh = Foam::volMesh]
[3]     in file /dev/shm/spackapps/spack-stage/spack-stage-7EXqC3/OpenFOAM-v1806/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 191.
[3]
[3] FOAM parallel run exiting
[3]

This is my 0/T file:

Code:
/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  v1806                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       volScalarField;
    location    "0";
    object      T;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

dimensions      [0 0 0 1 0 0 0];

internalField   uniform 298.15;

boundaryField
{
    inlet
    {
        type            fixedValue;
        value           uniform 298.15;
    }
    outlet
    {
        type            zeroGradient;
    }
    ground
    {
        type            fixedValue;
        value           uniform 333.15;
    }
    frontAndBack
    {
        type            zeroGradient;
    }
    procBoundary0to1
    {
        type            processor;
        value           uniform 298.15;
    }
    procBoundary0to3
    {
        type            processor;
        value           uniform 298.15;
    }
    procBoundary0to12
    {
        type            processor;
        value           uniform 298.15;
    }
    procBoundary0to15
    {
        type            processor;
        value           uniform 298.15;
    }
}

Thanks again for your help.
October 19, 2020, 05:58 | #7

Senior Member
Yann
Join Date: Apr 2012
Location: France
Posts: 1,174
Rep Power: 27
Hello Fidu,
Try opening the boundary file located in processor*/constant/polyMesh/boundary. There you will see the names of your domain's boundaries. Each of these boundaries should have a boundary condition defined for each variable in the 0 folder.

Looking at the error message in your last post, you have a boundary named "heating", and the solver crashes because there is no boundary condition defined for this patch.

As long as you use decomposePar to distribute your 0 folder, you should not have to worry about the "procBoundary*" patches. Those are the interface patches between the processors, and they are created automatically by decomposePar.

I hope this helps,
Yann
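P.S. For illustration, the entry the solver is looking for in each processor*/0/T would look something like this (a sketch: the fixedValue type and the 333.15 value are hypothetical placeholders, use whatever your heating surface should actually be):

Code:
    heating
    {
        type            fixedValue;     // hypothetical: match your intended BC
        value           uniform 333.15; // hypothetical value
    }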
October 19, 2020, 09:54 | #8

New Member
David
Join Date: Oct 2020
Posts: 21
Rep Power: 6
Hi Yann,

Thanks for your reply. I checked the boundary file in processor*/constant/polyMesh/boundary, and heating is defined there. However, when I checked processor*/0/T, heating was not defined. I also noticed that the heating condition is missing in the other field files (U, p, p_rgh, alphat, epsilon, etc.), even though I did define it in the original 0 directory. When I tried to add it manually (opening the files with vi processor*/0/T) and ran the solver again, I got the same error message. How can I solve this?
October 19, 2020, 09:55 | #9

Member
Hasan Celik
Join Date: Sep 2016
Posts: 64
Rep Power: 10
Could you share your case files here? Also, did snappyHexMesh write the parallel mesh to the constant folder, or did it write it to another time step, such as 1 or 2?
October 19, 2020, 10:47 | #10

New Member
David
Join Date: Oct 2020
Posts: 21
Rep Power: 6
Sure, I have attached my entire case as a zip. So far my workflow has been the one described above.

When I remove all processor* directories after reconstructParMesh and then decompose the case again, I get almost the right boundary conditions. Only in processor*/0/U has the condition for the heating boundary field changed from uniform (0 0 0); to nonuniform 0();.
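For reference, the two forms look like this in processor*/0/U (a sketch based on the values quoted above, assuming a fixedValue type for illustration; nonuniform 0() is an empty value list, which is most likely just what decomposePar writes when the heating patch has no faces in that particular subdomain):

Code:
heating
{
    type            fixedValue;
    value           uniform (0 0 0);   // as defined in the original 0 directory
}

heating
{
    type            fixedValue;
    value           nonuniform 0();    // empty list: no heating faces on this processor
}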
October 19, 2020, 12:22 | #11

Member
Hasan Celik
Join Date: Sep 2016
Posts: 64
Rep Power: 10
Try reconstructing the mesh directly into the constant folder instead:

Code:
reconstructParMesh -constant -mergeTol 1E-06 -noZero
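(The -constant option makes reconstructParMesh write the reconstructed mesh to constant/polyMesh rather than to a time directory. A subsequent decomposePar then picks up the complete mesh, including the heating patch, together with the boundary conditions from the original 0 folder, which is presumably why this resolves the missing patchField entries.)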
October 20, 2020, 09:01 | #12

New Member
David
Join Date: Oct 2020
Posts: 21
Rep Power: 6
Thanks a lot, it is working now!
Tags |
decomposed mesh, openfoam 1806, parallel, snappyHexMeshDict