CFD Online Discussion Forums

Thread: https://www.cfd-online.com/Forums/openfoam-programming-development/196950-load-balancing-topochanger-layeradditionremoval-piston-movement-solver.html

Bloerb December 22, 2017 06:00

Load balancing with topoChanger LayerAdditionRemoval Piston movement solver
 
A while ago I wrote a mesh motion solver with layer addition and removal. It moves a patch at a specified velocity and deletes or adds cell layers depending on the direction of the motion, so it only changes the mesh near the moving wall. It is applicable to, e.g., a piston in an engine or a bicycle pump. This is helpful for interFoam simulations, for example, where simply compressing the cells would lead to high aspect ratios at large deformations.

A test case and the solver can be found here (for older OpenFOAM versions replace Function1 with DataEntry):
https://github.com/bloerb/linearMotionLayersFvMesh

I have recently looked at this again and hence uploaded it. Since the solver deletes and adds cells, it will eventually run into problems in parallel runs: if all cells on a processor are deleted, it will naturally crash. With a suitable decomposition it runs fine in parallel, but this often demands a manual decomposition. My aim is therefore to do load balancing, which, as far as I can tell, is done with fvMeshDistribute and mapDistributePolyMesh.
I basically just want to redistribute the mesh each time the processor my patch is on is about to run out of cells. It would suffice to simply do at run time what redistributePar does. Maybe someone has looked into this before and can give me some pointers. A draft can be found in the balance part of linearMotionLayersFvMesh.C. I have a nagging feeling that I am missing something that can essentially be done in a few lines of code.
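For anyone picking this up: the balance step sketched below mirrors what redistributePar does internally — ask a decompositionMethod for a new cell-to-processor map, then hand that map to fvMeshDistribute, which migrates the cells and returns a mapDistributePolyMesh for mapping anything not registered on the mesh. This is only an illustrative sketch against the OpenFOAM API, not the actual draft in linearMotionLayersFvMesh.C; the member name balance() is assumed, and note that older OpenFOAM versions require a merge-tolerance argument in the fvMeshDistribute constructor.

Code:

// Illustrative sketch only (assumes the OpenFOAM library context of a
// dynamicFvMesh class such as linearMotionLayersFvMesh).
bool linearMotionLayersFvMesh::balance()
{
    // Read the decomposition method (scotch, hierarchical, ...)
    // from system/decomposeParDict
    IOdictionary decomposeDict
    (
        IOobject
        (
            "decomposeParDict",
            time().system(),
            *this,
            IOobject::MUST_READ,
            IOobject::NO_WRITE
        )
    );

    autoPtr<decompositionMethod> decomposer
    (
        decompositionMethod::New(decomposeDict)
    );

    // New target processor for every cell
    labelList distribution
    (
        decomposer().decompose(*this, cellCentres())
    );

    // Migrate cells/faces/points; registered fields are mapped
    // automatically, the returned map can redistribute the rest
    fvMeshDistribute distributor(*this);
    autoPtr<mapDistributePolyMesh> map
    (
        distributor.distribute(distribution)
    );

    return true;
}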

Best regards

blttkgl January 9, 2019 04:32

2 Attachment(s)
Hey,



Thanks for the code. I used it to validate some 2D reacting compression autoignition cases and it works quite nicely. In addition to the problem of ranks "running out of cells" during a parallel run, I have noticed that the simulation is also quite dependent on the decomposition method itself. I ran a 2D X-Z case with compression in the Z direction (picture attached) and decomposed it in the X direction so that every rank keeps some cells throughout the simulation. The domain is 75 mm in the X direction, and with 8 processors the decomposition gave me these "wedges" at the rank boundaries (red circles) and moveDynamicMesh crashed. However, if I divide the domain into 5 ranks, the divisions correspond exactly to cell boundary faces and everything runs smoothly.


Have you experienced something like this in your tests? I have managed to avoid this problem by manually decomposing this simple 2D geometry, but as the geometry gets more complex this might not be a viable solution.
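For reference, the slab decomposition described above (splitting only along X so every rank keeps cells while layers are removed in Z) can be requested with the simple method. This is an illustrative decomposeParDict fragment, not the dictionary from the attached case; the wedges appear when a slab boundary does not coincide with mesh faces, so a subdomain count whose slab width lines up with the cell spacing in X avoids them:

Code:

// system/decomposeParDict (illustrative)
numberOfSubdomains 8;

method          simple;

simpleCoeffs
{
    n           (8 1 1);   // subdivisions in (x y z): slabs along x only
    delta       0.001;
}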


Best,


Bulut

Bloerb January 9, 2019 05:17

Yes, this is exactly what I experienced. I also used manual decomposition to circumvent this error, but I have not looked any further into the load balancing that would remove these crashes. If your geometry allows it, the best approach might be a mesh motion solver that does not use topology changes, such as displacementInterpolation. For large deformations your aspect ratio might increase drastically, though.

blttkgl January 9, 2019 07:40

Thanks for the reply. Not changing the mesh topology is of course easier, but it then leads to quite high-aspect-ratio cells if the compression ratio is high, as you mentioned.



I managed to manually decompose my domain using the idea given in this post: https://www.cfd-online.com/Forums/op...setfields.html . As for dynamic load balancing, we have implemented it for chemistry calculations, although finite-rate chemistry is restricted to individual cells and does not need neighbor info. I'll keep this post updated if I manage to solve this issue.


Best,


Bulut

Sean95 January 25, 2019 10:57

1 Attachment(s)
I'm just wondering if anyone has got load balancing working. I have run into similar issues with the decomposition of my fluid domain, and as such have only been able to run my case in parallel on four processors. I have tried scotch, but it doesn't split the cells cleanly between the processors and crashes. My fluid domain has over a million cells, so manual decomposition is out.

blttkgl February 18, 2019 14:14

Hey,


Why would manual decomposition be out? It has nothing to do with the number of cells; the only difficulty is using setFields properly. If you look at the manual decomposition thread I linked above, I successfully used that idea to decompose domains of over 5M cells to use layer removal and compression at the same time. The trick is that each rank should share a continuous straight edge with its neighboring rank in the direction of compression (see the figure above).
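The setFields trick above amounts to writing the target rank of every cell into a field with boxToCell regions, then turning that field into the cell-to-processor file that the manual decomposition method reads. A hedged sketch (the field name procId, the box bounds, and the data file name are illustrative, not taken from the linked thread):

Code:

// system/setFieldsDict (illustrative): one box per rank, with
// straight edges in the compression direction
defaultFieldValues
(
    volScalarFieldValue procId 0
);

regions
(
    boxToCell
    {
        box (0.015 -1 -1) (0.030 1 1);   // slab assigned to rank 1
        fieldValues ( volScalarFieldValue procId 1 );
    }
    // ...one boxToCell entry per remaining rank
);

// system/decomposeParDict (illustrative): hand the per-cell rank
// list to the manual method
method          manual;

manualCoeffs
{
    dataFile    "cellDecomposition";
}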


I ended up abandoning this method for other reasons, but you should be able to decompose your domain manually this way if your mesh is uniform.


Bulut

Sean95 February 25, 2019 14:37

1 Attachment(s)
I tried the method you linked and it worked perfectly, thanks :D
I used rotatedBoxToCell instead of boxToCell in setFieldsDict. I should be able to use the same approach for greater levels of decomposition, i.e. 16, 32, or 64 processors.
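For readers following along: rotatedBoxToCell selects cells inside a parallelepiped defined by an origin and three edge vectors, so the slabs need not be axis-aligned. An illustrative region entry (the field name procId and the vectors are made-up example values, not from the attached case):

Code:

// region entry in system/setFieldsDict (illustrative)
rotatedBoxToCell
{
    origin  (0 0 0);
    i       (0.02  0.005 0);    // three edge vectors spanning the box
    j       (-0.005 0.02 0);
    k       (0 0 0.01);
    fieldValues ( volScalarFieldValue procId 1 );
}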

blttkgl February 25, 2019 15:01

Glad it worked out fine. I ended up not using this because I have non-uniform refinement regions within my 3D mesh that would be hard to decompose as smoothly as a uniform one, but as long as you have a somewhat structured mesh it should work nicely.


Bulut

2538sukham March 19, 2024 23:14

Which version is this, @Bloerb? I want to try this out on OpenFOAM v2212. So far there are the sphereDrop and movingCone cases for dynamic layer addition/removal, but I am not able to get linearValveLayersFvMesh working in that version.

