
poor pressure convergence upon restart of parallel case

#1 | February 3, 2022, 21:54
Juan Salazar (saladbowl) | New Member | Join Date: Jun 2019 | Posts: 19
I am alternating between running an IDDES case on a local cluster and on a larger national-lab cluster (HPC). I only get limited chunks of time on the HPC cluster, so in the meantime I run on the local cluster to minimize total wall-clock time. However, I am bothered by convergence issues in the pressure solver whenever I move data from one cluster to the other.

On the local cluster I decompose the domain into 100 subdomains and run on 100 cores; sometimes I can get up to 200 depending on cluster usage. When I am ready to run on the HPC cluster, I copy the resulting time, constant and system folders to the HPC cluster with rsync. Then I run `reconstructPar -latestTime` followed by `decomposePar -latestTime` and `renumberMesh -overwrite`, and start the simulation with `startFrom latestTime;` in controlDict. I use about 1000 cores on the HPC cluster; running on more cores does not scale favourably, and my allotted CPU time is fixed.
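For reference, here is a minimal sketch of the round trip; the host name, paths and the `LATEST_TIME` variable are placeholders, not my actual setup:

```
# On the local cluster: merge the 100-subdomain fields into a single time directory
reconstructPar -latestTime

# Copy only what the restart needs (LATEST_TIME = last written time directory)
rsync -av constant system "$LATEST_TIME" user@hpc:/path/to/case/

# On the HPC cluster: re-decompose for the larger core count and renumber
decomposePar -latestTime
renumberMesh -overwrite

# Restart from the copied time directory (or edit controlDict by hand)
foamDictionary -entry startFrom -set latestTime system/controlDict
mpirun -np 1000 pimpleFoam -parallel
```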

Upon restart, the pressure solver hits `maxIter` at every time step, even though `maxIter` is already set to a large value (5000). This goes on for many time steps, burning valuable computational time on the HPC cluster.
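For context, a representative `system/fvSolution` pressure entry of the kind I am using; apart from `maxIter 5000`, the solver choice and tolerances here are illustrative rather than my exact settings:

```
// FoamFile header omitted; illustrative settings only
solvers
{
    p
    {
        solver          GAMG;          // solver/smoother choice is illustrative
        smoother        GaussSeidel;
        tolerance       1e-6;
        relTol          0.01;          // nonzero relTol caps work per corrector
        maxIter         5000;          // the large cap mentioned above
    }

    pFinal
    {
        $p;
        relTol          0;             // tight solve on the final corrector only
    }
}
```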

I should say that I could avoid all of this by running on the HPC cluster only, but it is much easier to try out different solvers and settings on the local cluster. I was also only granted time on the HPC cluster after I had already been running for some time on the local cluster. I am evolving the flow from a steady-state RANS solution.

The OpenFOAM versions on the local cluster and the HPC cluster are the same, but they were compiled with different GCC and OpenMPI versions. On both clusters OpenFOAM is compiled with `WM_LABEL_SIZE=64`.
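A quick way to compare the two builds side by side; these environment variables are set by the standard OpenFOAM bashrc:

```
# Run on each cluster and diff the output
echo "$WM_PROJECT_VERSION  $WM_COMPILER  WM_LABEL_SIZE=$WM_LABEL_SIZE  $WM_PRECISION_OPTION"
gcc --version | head -1
mpirun --version | head -1
```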

I suspect that decomposing the domain into a different number of subdomains will change the results slightly, because most preconditioners are not parallel-consistent. However, I did not expect the computational cost upon restart to be so high.

This issue is baffling me. I am running pimpleFoam in PISO mode and use scotch for the domain decomposition.
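The HPC-side `system/decomposeParDict` is essentially just this (a sketch; nothing beyond the method and subdomain count is set):

```
// FoamFile header omitted
numberOfSubdomains  1000;

method              scotch;
```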

Any pointers are highly appreciated. Thanks!

#2 | February 7, 2022, 04:55
Juan Salazar (saladbowl) | New Member | Join Date: Jun 2019 | Posts: 19
In further tests I could not reproduce the issue reported above, and I am not sure what the culprit was. In any case, I now see the expected behaviour: upon restart with a different domain decomposition there is an initial increase in pressure-solver iterations, but nothing abnormal.


Tags
cluster, convergence, hpc, pressure, restart





