CFD Online — www.cfd-online.com
Forums > OpenFOAM > OpenFOAM Pre-Processing

changeDictionary for Parallel Case nonuniform 0()

October 3, 2015, 19:29 — Post #1
KDK (Keith Kirkpatrick)
New Member
Join Date: Aug 2014
Location: Ohio, USA
Posts: 3
Hi FOAMers,

The best way I know to explain my question is to contrast two workflows: the one that works and the one that is broken. I don't think the nature of the actual solver or flow physics matters much; my problem is with setting up the initial fields for parallel execution in the case/processorX/0/region boundary field files, which end up full of value nonuniform 0() entries.

For those who want to know more, I am happy to post my case at a later time. Truth: I'm actually still editing it, since I blew away my 0 folders on the last changeDictionary attempt to 'fix' the problem. The basic description of the case is chtMultiRegionSimpleFoam with natural-convection heat transfer. There are five regions in total (1 fluid, 4 solid). The fluid and solid regions are all STL geometries. The exterior fluid-region boundaries match the initial blockMesh face walls and patches. The solid regions are all conjoined within the fluid region and do not share any external boundary patch with the fluid region. A semiImplicitSource (fvOptions) generates heat within one of the solid regions, and the fluid (air) is supposed to carry the heat away.

CPU Config: Z800 Octal X5570 2.93GHz 12GB / Debian 8.0 / OF2.4.0

The process sequence that works with ~3M Cells on my system:

1a. blockMesh
1b. surfaceFeatureExtract -region [fluid, each solid]
1c. decomposePar -constant
1d. mpirun -np 8 snappyHexMesh -overwrite -parallel
1e. reconstructParMesh -constant
1f. setSet -batch batch.setSet [excellent tip from this forum!]
1g. subsetMesh -overwrite isolation [excellent tip from this forum!]
1h. renumberMesh -overwrite
1i. splitMeshRegions -cellZones -overwrite
1j. changeDictionary -region [fluid, each solid]
1k. decomposePar -allRegions -force
1l. mpirun -np 8 chtMultiRegionSimpleFoam -parallel
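For reproducibility, the working sequence 1a-1l can be collected into an Allrun-style run script. This is only a sketch of the list above: it assumes an OpenFOAM 2.4.0 environment is sourced, and the region names in the loop are placeholders for my actual fluid/solid region names (surfaceFeatureExtract region handling is driven by its dict, so I've left it as one call here):

```sh
#!/bin/sh
# Allrun sketch for workflow #1 (~3M cells); requires a sourced OpenFOAM 2.4.0
set -e

blockMesh
surfaceFeatureExtract                      # dict-driven; covers all surfaces
decomposePar -constant
mpirun -np 8 snappyHexMesh -overwrite -parallel
reconstructParMesh -constant
setSet -batch batch.setSet
subsetMesh -overwrite isolation
renumberMesh -overwrite
splitMeshRegions -cellZones -overwrite
for region in fluid solid1 solid2 solid3 solid4   # placeholder region names
do
    changeDictionary -region $region
done
decomposePar -allRegions -force
mpirun -np 8 chtMultiRegionSimpleFoam -parallel
```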

The work flow that needs help with ~7M Cells:

2a. blockMesh
2b. surfaceFeatureExtract -region [fluid, each solid]
2c. decomposePar -constant
2d. mpirun -np 8 snappyHexMesh -overwrite -parallel

2e. ==>> reconstructParMesh -constant <<== This step is broken: reconstructParMesh runs serially, and on a 32-bit system a single process does not have enough addressable RAM to hold the full ~7M-cell mesh, so it cannot execute. This step is therefore omitted from the workflow; from here on out, everything stays fully decomposed in parallel.

2f. mpirun -np 8 setSet -parallel -batch batch.setSet

2g. mpirun -np 8 subsetMesh -parallel -overwrite isolation

(Since 2e did not execute, the only way I can make 2g work is to have the /0 files for each processor pre-baked with all the internal patch interfaces that snappy finds during refinement - and this pre-baking has to be done as far back as the initial decomposePar -constant in step 2c, before snappy even runs.)
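For completeness, the batch.setSet file used in 1f/2f is just a list of setSet commands executed in order. A minimal sketch (the cellSet name 'isolation' is from my case; the boxToCell source and its coordinates below are placeholders, not my real selection):

```
cellSet isolation new boxToCell (-1 -1 -1) (1 1 1)
cellSet isolation invert
quit
```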

2h. mpirun -np 8 renumberMesh -overwrite -parallel

2i. mpirun -np 8 splitMeshRegions -cellZones -overwrite -parallel

(At this point, all of the processorX directories have /0/region directories inside. The boundary field files within those region directories are typically ALL wrong, because their heritage still flows from the generic case/0 folder that existed at step 2c - which had limited region information and no processorX information. Each boundary field file in each region directory needs procBoundary0to1, ..., procBoundary0to7-style fixups to cover the parallel decomposition. I add these manually to each file by a cut-and-paste-edit operation.)
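For reference, the kind of processor-patch entry I paste into each 0/region field file looks roughly like this (the patch name, neighbour index and the value 300 are illustrative, not a definitive recipe):

```
boundaryField
{
    // ... physical patches of this region ...

    procBoundary0to1              // hand-added processor patch
    {
        type    processor;
        value   uniform 300;      // on a processor that owns no faces of
                                  // this region: value nonuniform 0();
    }
}
```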

And the problem is: the boundary field files inside each processorX /0/region folder contain value nonuniform 0(); elements that vary by (i) where the solid region was located in the mesh, and (ii) how the processor map split up the mesh during decomposition. It is entirely possible that 4 of the 8 processors I'm using hold no cells of a particular solid, so their boundary and processor-patch values are all nonuniform 0().

2j. mpirun -np 8 changeDictionary -region [fluid, each solid] -parallel

The changeDictionary files I have are set up per region, not per processor-and-region combination. I don't know how to handle field patches that are fixedValue in one processor domain and nonuniform 0() in another. I do not have the wisdom of decomposePar at my disposal, because I use the same field/patch definition in my changeDictionaryDict for a given region across all processors when I run in parallel. I can update individual processor patches with a "procBoundary.*" type of changeDictionary entry; however, I still have processor patches that vary from uniform 300 to nonuniform 0() for the same field and the same region in different processor domains.

If I edit the files manually, I have about 160 files to edit (8 processors, 5 regions, ~4 files/region).
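A scripted alternative to the hand edits (my sketch only, not validated against a real case): walk every processorX/0/region field file and rewrite only the entries that already carry a numeric uniform value, leaving every nonuniform 0() entry untouched. The directory layout, the field name, and the target value of 300 for a scalar field are all assumptions to be adapted per case:

```python
import glob
import re

def retarget_uniform_values(text, new_value="uniform 300"):
    """Rewrite every 'value uniform <...>;' entry to the new value.
    'value nonuniform 0();' entries do not match and are left as-is."""
    return re.sub(r"value(\s+)uniform\s+[^;]+;",
                  lambda m: "value%s%s;" % (m.group(1), new_value),
                  text)

def fix_case(root=".", region="fluid", field="T"):
    # Hypothetical layout: <root>/processorX/0/<region>/<field>
    for path in glob.glob("%s/processor*/0/%s/%s" % (root, region, field)):
        with open(path) as f:
            text = f.read()
        with open(path, "w") as f:
            f.write(retarget_uniform_values(text))
```

Run once per region and field instead of cut-and-pasting across the ~160 files; the regex skips the empty patches because "nonuniform" never matches the literal "uniform" token.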

I think I need a way to skip over all the nonuniform 0() entries and just change patch names that have a numeric value. Can changeDictionary with an appropriately configured changeDictionaryDict do that?
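For context, the regex form that changeDictionaryDict accepts (OF 2.4 still wraps entries in a dictionaryReplacement subdict) looks like this; the field name T and the value 300 are illustrative, and as far as I can tell this still stamps the same value onto patches that should stay nonuniform 0() - which is exactly my open question:

```
dictionaryReplacement
{
    T
    {
        boundaryField
        {
            "procBoundary.*"        // regex over all processor patches
            {
                type    processor;
                value   uniform 300;
            }
        }
    }
}
```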

2k. ==>> decomposePar -allRegions -force <<== Step not performed, because the blockMesh domain was decomposed back in step 2c and has never been reconstructed. Somehow, the work this utility performs has to be reproduced by changeDictionary and/or bash-script processing before the solver runs.

2l. mpirun -np 8 chtMultiRegionSimpleFoam -parallel
[The solver will typically run for a while against a variety of non-physical initial conditions before it blows up, or you can't see the problem developing until you open ParaView after 100 or 1000 iterations.]

Can you understand my problem from what I've written, or are you now sleeping?

Have you encountered a similar workflow to example #2a-l, or is there another (better) way? My only solution right now is to manually edit the 100+ files, and it is making me crazy!

If anyone has a suggestion on how to adjust my changeDictionaryDict files such that they can operate properly on all of the individual processorX/0/region boundary files during parallel execution - bravo!

Best Regards,

Keith

 






