
changeDictionary for Parallel Case nonuniform 0()

October 3, 2015, 19:29
changeDictionary for Parallel Case nonuniform 0()
Keith Kirkpatrick (KDK), New Member
Join Date: Aug 2014
Location: Ohio, USA
Posts: 3
Hi FOAMers,

The best way I know to explain my question is to contrast two work flows: the one that works and the one that is broken. I don't think the nature of the actual solver or flow physics matters much; my problem is setting up the initial fields for parallel execution, i.e. the 'value nonuniform 0()' entries in the case/processorX/0/region boundary field files.

For those who want to know more, I am happy to post my case at a later time. Truth: I'm actually still editing it, since I blew up my 0 folders on the last changeDictionary attempt to 'fix' the problem. The basic description of the case is chtMultiRegionSimpleFoam with natural-convection heat transfer. There are five regions in total (1 fluid, 4 solid). The fluid and solid regions are all STL geometries. The exterior fluid region boundaries match the initial blockMesh face walls and patches. The solid regions are all co-joined within the fluid region and do not share any external boundary patch with the fluid region. A semiImplicitSource is used to generate heat within one of the solid regions, and the fluid (air) is supposed to carry the heat away.

CPU Config: Z800 Octal X5570 2.93GHz 12GB / Debian 8.0 / OF2.4.0

The process sequence that works with ~3M Cells on my system:

1a. blockMesh
1b. surfaceFeatureExtract -region [fluid, each solid]
1c. decomposePar -constant
1d. mpirun -np 8 snappyHexMesh -overwrite -parallel
1e. reconstructParMesh -constant
1f. setSet -batch batch.setSet [excellent tip from this forum!]
1g. subsetMesh -overwrite isolation [excellent tip from this forum!]
1h. renumberMesh -overwrite
1i. splitMeshRegions -cellZones -overwrite
1j. changeDictionary -region [fluid, each solid]
1k. decomposePar -allRegions -force
1l. mpirun -np 8 chtMultiRegionSimpleFoam -parallel

The work flow that needs help with ~7M Cells:

2a. blockMesh
2b. surfaceFeatureExtract -region [fluid, each solid]
2c. decomposePar -constant
2d. mpirun -np 8 snappyHexMesh -overwrite -parallel

2e. ==>> reconstructParMesh -constant <<== This step is broken: there is not enough RAM for a single process on a 32-bit system, so it cannot execute. This step is now omitted from the workflow. From here on out, everything stays fully decomposed in parallel.

2f. mpirun -np 8 setSet -parallel -batch batch.setSet

2g. mpirun -np 8 subsetMesh -parallel -overwrite isolation

(Since 2e did not execute, the only way I can make 2g work is to have the /0 files for each processor pre-baked with all the internal patch interfaces that snappy found during refinement. This pre-baking had to be done as far back as the initial decomposePar -constant, before snappy even runs.)

2h. mpirun -np 8 renumberMesh -overwrite -parallel

2i. mpirun -np 8 splitMeshRegions -cellZones -overwrite -parallel

(At this point, all of the processorX directories have /0/region directories inside. The boundary files within the region directories are typically ALL wrong, because their heritage still flows from the generic case/0 folder that existed at step 2c, which had limited region information and no processorX information. Each boundary file within each region directory needs procBoundary0to1,2,3,4,5,6,7 fix-ups to cover the parallel decomposition. I add these manually to each boundary file by cut-and-paste editing.)
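For concreteness, the sort of entry I paste into each processor's 0/region field file looks roughly like this (the patch name, neighbour rank, and the 300 K value are only examples, not from my actual case):

```
procBoundary0to1                   // patch joining processor0 to processor1
{
    type            processor;
    value           uniform 300;   // or "nonuniform 0()" where this
                                   // processor owns no faces of the patch
}
```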

And the problem is: the boundary files inside each processorX /0/region folder contain 'value nonuniform 0();' elements that vary by (i) where the solid region was located in the mesh, and (ii) how the processor map split up the mesh during decomposition. It may well be that 4 of the 8 processors I'm using have no knowledge of a particular solid, so their boundary and processor values are all nonuniform 0().

2j. mpirun -np 8 changeDictionary -region [fluid, each solid] -parallel

The changeDictionaryDict files I have are set up per region, not per processor-and-region combination. I don't know how to handle field patches that are fixedValue in one processor domain and nonuniform 0() in another. I do not have the wisdom of decomposePar at my disposal, because I use the same field/patch definition in my changeDictionaryDict for a given region across all processors when I run in parallel. I can update individual processor patches with a "procBoundary.*" type of changeDictionary entry. However, I still have processor patches that vary from uniform 300 to nonuniform 0() for the same field and region in different processor domains.

If I edit the files manually, I have about 160 files to edit (8 processors × 5 regions × ~4 files/region).

I think I need a way to skip over all the nonuniform 0() entries and just change patch names that have a numeric value. Can changeDictionary with an appropriately configured changeDictionaryDict do that?
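For what it's worth, here is the kind of changeDictionaryDict fragment I have been experimenting with (OF 2.4 syntax with the dictionaryReplacement wrapper; the field name T, the regexes, and the 300 K value are all placeholders, untested):

```
dictionaryReplacement
{
    T
    {
        boundaryField
        {
            "procBoundary.*"            // processor patches, any rank pair
            {
                type            processor;
                value           uniform 300;
            }
            ".*_to_.*"                  // region-interface patches
            {
                type            fixedValue;
                value           uniform 300;
            }
        }
    }
}
```

My (unverified) hope is that a uniform value is legal even on a zero-face patch, which would sidestep the nonuniform 0() problem entirely.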

2k. ==>>decomposePar -allRegions -force <<== step not performed because the blockMesh domain was decomposed back in step 2c and has not been reconstructed. Somehow, the work that this utility performs has to be invoked by way of changeDictionary and/or bash script processing before the solver runs.

2l. mpirun -np 8 chtMultiRegionSimpleFoam -parallel
[The solver will typically run for a while against a variety of non-physical input conditions before it blows up, or you can't see the problem developing until you open ParaView after 100 or 1000 iterations.]

Can you understand my problem from what I've written, or are you now sleeping?

Have you encountered a work flow similar to example #2a-l, or is there another (better) way? My only solution right now is to manually edit the 100+ files, and it is making me crazy!
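Failing a changeDictionary solution, a throwaway script could do the edits. A minimal sketch (my own hack, not an OpenFOAM utility; the field name and the replacement value are just examples) that rewrites 'value uniform ...' entries but leaves the 'nonuniform 0()' placeholders alone:

```python
import re
from pathlib import Path

EMPTY = re.compile(r'nonuniform\s+0\s*\(\s*\)')     # zero-face placeholder
VALUE = re.compile(r'(value\s+)uniform\s+[^;]+;')   # ordinary value entry

def fix_text(text, new_value):
    """Rewrite every 'value uniform ...;' line to the new value,
    skipping 'value nonuniform 0();' placeholder lines."""
    lines = []
    for line in text.splitlines(keepends=True):
        if 'value' in line and not EMPTY.search(line):
            line = VALUE.sub(r'\g<1>uniform %s;' % new_value, line)
        lines.append(line)
    return ''.join(lines)

def fix_case(case_dir, field, new_value):
    """Apply fix_text to processor*/0/<region>/<field> across a case."""
    for path in Path(case_dir).glob('processor*/0/*/%s' % field):
        path.write_text(fix_text(path.read_text(), new_value))
```

This obviously has no idea about the patch *types*, so it only helps where the value is the problem and the type is already right.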

If anyone has a suggestion on how to adjust my changeDictionaryDict files such that they can operate properly on all of the individual processorX/0/region boundary files during parallel execution - bravo!

Best Regards,


October 7, 2015, 20:07
reconstructParMesh / decomposePar Limits
Keith Kirkpatrick (KDK)
Hi FOAMers,

The harsh reality is that when you are out of memory on a single processor, you are done. On a 32-bit machine you cannot get more than 4 GB of address space for a process unless it has some form of swap-to-disk manager where RAM is only a sliding window of pages. From my experience, reconstructParMesh crashes at ~3+ GB on my machine, so I think it is simply out of RAM. I am not aware of any of the OF tools using a virtual-memory paging scheme.

My guess is that it doesn't matter whether I can succeed with reconstructParMesh unless I can execute decomposePar afterward. For some reason, I had it in my brain that these two utilities had significantly asymmetric capabilities, where decomposePar could do "far more work" than reconstructParMesh. Of course, I made up that illusion because I didn't bother reading the code.

My thinking now is that the RAM image and overhead for all the mesh data in the simulation case are approximately the same size whether you read it from disk to merge it (reconstructParMesh) or read it from disk to redistribute it (decomposePar). If that assumption is true, then it would appear that a multi-region mesh exceeding 3-4 GB on a 32-bit host is simply at the limit of the tools. Making the tools work beyond that becomes a virtual-memory exercise for the student!
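A back-of-the-envelope check is consistent with what I'm seeing, although the bytes-per-cell figure below is a pure guess on my part, not a measured number:

```python
# Rough check: does a reconstructed multi-region mesh fit in a 32-bit
# address space? BYTES_PER_CELL is an assumed overhead (mesh + addressing
# + fields), not a measurement.
BYTES_PER_CELL = 1024
ADDRESS_SPACE = 4 * 1024**3        # 4 GB, 32-bit process limit

def fits(n_cells, bytes_per_cell=BYTES_PER_CELL):
    return n_cells * bytes_per_cell < ADDRESS_SPACE

print(fits(3_000_000))   # True  -> matches: the 3M-cell case works
print(fits(7_000_000))   # False -> matches: the 7M-cell case crashes
```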

If anyone has an alternate theory to my problem, please jump in. Otherwise, I'll assume it was pilot error all along and try to stay out of your airspace.

It would be interesting to know if the principals of OF have given any thought to a VM implementation of these utilities, or if that feature is already available and I simply don't know how to use it....

Bon Voyage,


October 13, 2015, 12:28
Keith Kirkpatrick (KDK)
Hi FOAMers!

Sometimes the best way to solve a problem is to embarrass yourself by trying to explain it to someone else. Either your audience will find your mistake right away - or you will. Getting a viable answer is the main thing, however you get there.

Today I had a new thought... Since the /0 directory field files don't know anything about the mesh density, I should be able to formulate my multi-region parallel case first on a smaller mesh where the OF tools work handily, and then simply copy the field/boundary files on top of the bogus files created during the high-density, parallel-only multi-region case. None of the patch/processor interfaces have changed, just the number of cells in the mesh. Wow... why didn't I think of that a week ago?

Of course, I'm still going to need intermediate /0 field files in the large mesh density case to resolve the boundary & processor interfaces coming from the original decomposePar / snappyHexMesh so that subsetMesh can run in parallel.

All of the nonuniform 0() issues I mentioned earlier should be resolved naturally by the smaller multi-region meshing process and could be transplanted over into the large parallel case by copying the fixed up field files from each ProcessorX/0/Region folder.
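If it works, the transplant step itself is just a recursive copy of the processor*/0 field files. A sketch (paths hypothetical; assumes both cases used the same decomposeParDict so the processor patches line up):

```python
import shutil
from pathlib import Path

def transplant(small_case, big_case):
    """Copy every processor*/0/<region>/<field> file from the small-mesh
    case (where the serial tools could run) over the matching file in the
    big-mesh case. Returns the number of files copied."""
    small, big = Path(small_case), Path(big_case)
    copied = 0
    for src in small.glob('processor*/0/*/*'):
        if not src.is_file():
            continue
        dst = big / src.relative_to(small)
        if dst.is_file():              # only overwrite what already exists
            shutil.copy2(src, dst)
            copied += 1
    return copied
```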

If this doesn't work...I'll make a follow up post. Otherwise, I think this new workflow will allow me to use changeDictionary to perform all my field / boundary fixups and readily support my 7M cell mesh simulation.

Happy FOAMing,



Tags: changedictionary, nonuniform, parallel