
Error when running setFields in parallel

January 11, 2021, 07:53   #1
Haoran Zhou (Member, Join Date: Nov 2019, Posts: 49)
Hi all,

Recently I tried to run setFields in parallel but a problem occurred.

The commands I executed are as follows:
1. blockMesh
2. cp -r 0.org 0
3. decomposePar
4. mpirun -np 4 snappyHexMesh -parallel -overwrite
5. mpirun -np 4 setFields -parallel

During step 5, I got the following FOAM FATAL IO ERROR:
Cannot find patchField entry for caisson
The error points to the alpha.water file in every processor directory. However, I had already added the 'caisson' entry to the boundaryField of alpha.water in the 0 folder before running decomposePar.
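
For reference, the caisson entry in my 0/alpha.water looks roughly like this (the boundary condition type shown here is only an illustration, not necessarily the one I actually use):

Code:
boundaryField
{
    caisson
    {
        type            zeroGradient;
    }

    // ... other patches
}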

Am I missing any steps before running setFields in parallel? Is there any method to deal with this problem?

Thanks in advance!

Best regards,

Stan

January 25, 2021, 05:40   #2
Federico_ (New Member, Join Date: Jan 2021, Posts: 13)
Why do you need to run setFields in parallel? I'm not an expert, but setFields is generally a quick application that requires little computational power, so I run it first and then decompose the case to run the simulation.

January 25, 2021, 10:10   #3
Yann (Senior Member, Location: France, Join Date: Apr 2012, Posts: 1,065)
Quote:
Originally Posted by Federico_
Why do you need to run setFields in parallel? I'm not an expert, but setFields is generally a quick application that requires little computational power, so I run it first and then decompose the case to run the simulation.
Because Stan uses snappyHexMesh to mesh the geometry in parallel. He has to run setFields after creating the mesh, and it would be a waste of time to reconstruct the mesh just to run setFields and then re-decompose it to run the simulation.

Stan, have you checked the content of the processor*/0/alpha.water files after running decomposePar? Do they still contain the boundary condition for the patch "caisson"?
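
You can check quickly with something like this (just a sketch, adjust the field and patch names if needed):

Code:
grep -A 3 "caisson" processor*/0/alpha.water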

January 25, 2021, 12:11   #4
Federico_ (New Member, Join Date: Jan 2021, Posts: 13)
Yeah, I know, but I had many problems launching the simulation without reconstructing the mesh, because OpenFOAM didn't read the 0 directory in each processor folder. That's why I started using one directory for meshing and another for running the simulation. In general, even though it depends on the number of cells, the reconstruction doesn't take that much time compared to snappyHexMesh, so I think it is fine to use reconstructParMesh.

January 26, 2021, 03:44   #5
Yann (Senior Member, Location: France, Join Date: Apr 2012, Posts: 1,065)
It might work for light meshes, but it quickly gets inefficient on larger meshes, which might take hours to reconstruct.

Of course it depends on the size of the simulations you run, but in my opinion it would be better to solve the issue in the first place. I never reconstruct any of my simulation cases.


To come back to Stan's problem (and maybe yours), the issue comes from decomposePar, which decomposes the fields according to the patches in polyMesh.

When you run blockMesh and then decomposePar, it will only keep the patches contained in the current mesh, and you will lose all the boundary conditions you have defined for the final mesh (these patches do not exist yet, since snappyHexMesh has not been run).

To avoid this, I usually keep my boundary conditions in a 0.orig directory and copy the content of this directory into processor*/0 after running snappyHexMesh. (You might need to add a boundary condition for the "procBoundary.*" patches.)

Another solution might be to use decomposePar with the -copyZero option.
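
Using Stan's commands as a base, the workflow would look roughly like this (a sketch only; the processor count and the 0.org directory name are taken from his post and should be adapted to your case):

Code:
blockMesh
cp -r 0.org 0
decomposePar
mpirun -np 4 snappyHexMesh -parallel -overwrite

# overwrite the decomposed fields with the boundary conditions defined
# for the final mesh (each field may also need a "procBoundary.*" entry)
for proc in processor*
do
    cp -r 0.org/. "$proc/0/"
done

mpirun -np 4 setFields -parallel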


I hope this helps.
Yann

January 26, 2021, 06:07   #6
Federico_ (New Member, Join Date: Jan 2021, Posts: 13)
Quote:
Originally Posted by Yann
To avoid this, I usually keep my boundary conditions in a 0.orig directory and copy the content of this directory into processor*/0 after running snappyHexMesh. (You might need to add a boundary condition for the "procBoundary.*" patches.)
So this is why just copying 0/ into processor* doesn't work. But for me the problem was that I wasn't able to write the procBoundaryXtoY entry in each field file of each processor.

Not to be insistent, but even with 40 million cells on a cluster, reconstruction has always taken me about 10 minutes at most.

January 26, 2021, 07:52   #7
Yann (Senior Member, Location: France, Join Date: Apr 2012, Posts: 1,065)
Quote:
Originally Posted by Federico_
So this is why just copying 0/ into processor* doesn't work. But for me the problem was that I wasn't able to write the procBoundaryXtoY entry in each field file of each processor.
Use regular expressions! (https://cfd.direct/openfoam/user-gui...7-1350004.2.12)
Add this to each field file in the 0 directory:

Code:
"procBoundary.*"
{
    type            processor;
}
Now you can copy your 0 directory into each processor* directory.
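
For example, something like this should do it (just a sketch, run from the case directory):

Code:
for proc in processor*
do
    mkdir -p "$proc/0"
    cp -r 0/. "$proc/0/"
done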

Cheers,
Yann
