October 1, 2018, 03:09 |
Mpirun for 'blockMesh'
|
#1 |
New Member
Srikar Reddy Palla
Join Date: May 2018
Posts: 19
Rep Power: 7 |
Hello all, I am working on a project that needs a multiphase simulation, and to capture the smaller-scale changes properly I need a very fine mesh. When I do so, the process gets killed, even on the HPC. So I am thinking of running blockMesh in parallel with mpirun. Is that possible? Any other ideas for solving this problem?
Thank you in advance. |
|
October 1, 2018, 03:48 |
|
#2 |
New Member
Lorena Fernández Fernández
Join Date: May 2016
Location: Spain
Posts: 21
Rep Power: 9 |
Hi Srikar!
I think blockMesh doesn't have a "-parallel" option. But the real question is why you need it: running blockMesh is usually fast and does not need many HPC resources. You can check the output text and your blockMesh dictionary. Regards, Lorena |
|
October 1, 2018, 04:17 |
|
#3 |
New Member
Srikar Reddy Palla
Join Date: May 2018
Posts: 19
Rep Power: 7 |
Sir, I need to capture the behaviour of a bubble as small as 2.5 mm in diameter, so I went for a very, very fine mesh. That was impossible on my PC, so I moved to the HPC, and even there the process was killed. I suspect that because the mesh is so big and only one core was being used, the program was killed. If I could somehow use parallel processing (I understand that doing geometry modelling and meshing in parallel is somewhat unusual), the meshing might have completed.
|
|
October 1, 2018, 04:31 |
|
#4 |
New Member
Lorena Fernández Fernández
Join Date: May 2016
Location: Spain
Posts: 21
Rep Power: 9 |
I find it strange that blockMesh fails for you due to lack of resources. Have you tried the same mesh with fewer cells, and does it work then? How many cells does your mesh have?
|
|
October 1, 2018, 06:31 |
|
#5 |
New Member
Srikar Reddy Palla
Join Date: May 2018
Posts: 19
Rep Power: 7 |
Yup, initially I tried a coarse mesh of (50 800 50) and everything was fine; we got a result, but not at the fine scale we wanted. So I tried a mesh of (600 2500 600), so that even the smallest drop, which is 2.5 mm in diameter, would have about 10 cells across it and could be visualized. My geometry is a simple cuboid of 150 mm x 622 mm x 150 mm. On the PC that mesh always ended with a "killed" message saying the system was low on virtual memory, which looks like a clear indication of insufficient RAM or processing power. So I moved to the HPC; there it started meshing (it printed "creating cells" and so on), ran for a couple of minutes, and then stopped with "process killed".
|
|
October 1, 2018, 06:46 |
|
#6 |
New Member
Lorena Fernández Fernández
Join Date: May 2016
Location: Spain
Posts: 21
Rep Power: 9 |
OK, that is a really huge mesh. Maybe you can start with a coarse mesh and refine it. snappyHexMesh can run in parallel and lets you refine an initial coarse mesh; the refineMesh utility can also run in parallel. In your case, though, I would try to refine only the region of interest, using snappyHexMesh, or a solver with adaptive refinement such as interDyMFoam. If you keep such a huge mesh, you will probably also have problems with the running and the post-processing.
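[As a minimal sketch of the parallel-refinement workflow suggested above, assuming a standard OpenFOAM case layout; the subdomain count (4) and decomposition coefficients are illustrative only:]

```
// system/decomposeParDict (fragment, illustrative values):
//   numberOfSubdomains 4;
//   method simple;
//   coeffs { n (2 2 1); }

blockMesh                                      # build a coarse base mesh in serial
decomposePar                                   # split the case into subdomains
mpirun -np 4 refineMesh -parallel -overwrite   # refine in parallel (splits each cell in all directions by default)
reconstructParMesh -constant                   # merge the refined mesh back, if needed
```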
|
|
October 1, 2018, 06:48 |
|
#7 |
Super Moderator
Tobias Holzmann
Join Date: Oct 2010
Location: Tussenhausen
Posts: 2,708
Blog Entries: 6
Rep Power: 51 |
My new personal record for mesh cells: (600 2500 600) corresponds to 900,000,000 cells, if those are your division numbers in blockMesh. Are you sure you need that many cells? And are you sure the model you are using is valid for your investigation? Just keep in mind that there are other theories besides FVM; maybe it's better to switch.
As for your question: as already stated, blockMesh does not support parallel execution.
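[For reference, the cell count quoted above follows directly from the blockMesh division numbers; the per-cell memory figure below is a rough rule-of-thumb assumption, not a measured value:]

```shell
# Total hexahedra for blockMesh divisions (600 2500 600):
cells=$((600 * 2500 * 600))
echo "cells = $cells"        # 900000000, i.e. 0.9 billion

# Order-of-magnitude RAM estimate, assuming roughly 1 kB per cell during meshing:
echo "approx $((cells / 1000000)) GB of memory"
```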
__________________
Keep foaming, Tobias Holzmann |
|
October 1, 2018, 16:59 |
snappy and selective refinement
|
#8 |
Member
Peter Brady
Join Date: Apr 2014
Location: Sydney, NSW, Australia
Posts: 54
Rep Power: 11 |
Following on from what has been said: I support all the comments.
I do a lot of free-surface and interface simulations, and the key is not refinement over the whole domain; you only need super refinement in the neighbourhood of the interface. Even then, you only need maximum refinement where the curvature of the free surface increases. I suggest you seriously investigate refinement in the region of your free surfaces, and particularly where the curvatures are likely to be highest. Away from these zones you can have very large cells; in fact, I encourage you to expand the cell size in the far field. Cheers, -pete |
|
|