CFD Online Discussion Forums


fatirishman53 September 2, 2015 22:15

Simulation "Blows-Up" After Restart
 
I am running a DNS simulation with a modified version of the icoFoam solver that solves the Navier-Stokes equations with a constant pressure gradient read from the transportProperties file. (This was necessary because the boundary conditions are cyclic and I need to let the velocity profile develop on its own.)
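Roughly, the modification looks like this (a sketch, not my exact code; "gradP" is an entry I added to constant/transportProperties, not a standard icoFoam keyword):

Code:

// Constant driving pressure gradient read from transportProperties.
// icoFoam uses kinematic pressure, so gradP has dimensions of m/s^2.
dimensionedVector gradP
(
    "gradP",
    dimensionSet(0, 1, -2, 0, 0, 0, 0),
    transportProperties
);

// Standard icoFoam momentum predictor, with the constant source term
// added on the right-hand side:
fvVectorMatrix UEqn
(
    fvm::ddt(U)
  + fvm::div(phi, U)
  - fvm::laplacian(nu, U)
);

solve(UEqn == -fvc::grad(p) + gradP);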

The simulation ran fine from 0 s to 40000 s. Originally I had it set to end at 60000 s, but I found out that the PBS system had placed all 128 processes on 1 node instead of distributing 32 processes to each of the 4 nodes. So, I changed endTime to 40000 s during the run. Everything ended normally and the simulation was reconstructed. At 40000 s the Courant number had reached about 0.8; though it was increasing, it was doing so very slowly and should have stayed below 1 through the full 60000 s.

During troubleshooting of the PBS system, I deleted the processor* directories.

I finally got everything working correctly, changed startTime to 40000 s, changed endTime to 60000 s, and kept all other parameters the same. However, after about 3 or 4 time steps, the Courant number began growing exponentially.

I am very confused by what has happened here, especially considering that the flow should already be quasi-steady. Do I need to run mapFields to keep this from happening? Is "startFrom latestTime;" required (I kept "startFrom startTime;")?
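For completeness, the restart section of my system/controlDict looks like this (a sketch; everything else is unchanged from the first run):

Code:

startFrom       startTime;   // later also tried latestTime (see EDIT below)
startTime       40000;
stopAt          endTime;
endTime         60000;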

Any help is much appreciated.

EDIT:
According to the solver output, the Courant number starts out at > 2 for some reason.
Using "startFrom latestTime;" did not resolve this.

Also, in case it matters, I fixed the PBS problem by changing:
"mpirun -np 128 ico_DNS -parallel"
to
"mpirun -x PATH -x LD_LIBRARY_PATH -x WM_PROJECT_DIR -x WM_PROJECT_INST_DIR -x WM_OPTIONS -x FOAM_LIBBIN -x FOAM_APPBIN -x FOAM_USER_APPBIN -x MPI_BUFFER_SIZE --hostfile $PBS_NODEFILE -np 128 ico_DNS -parallel"
This was the only way I could get the PBS system to pass all of the necessary environment variables to nodes 2, 3, and 4. Any suggestions on this are also more than welcome!
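In case it helps anyone else, the relevant part of my PBS script now looks roughly like this (a sketch; the resource line and the OpenFOAM install path are specific to my cluster):

Code:

#!/bin/bash
#PBS -l nodes=4:ppn=32               # 4 nodes, 32 processes each
cd $PBS_O_WORKDIR
# Source the OpenFOAM environment on the master node; the -x flags on
# the mpirun line then export it to the other nodes. The install path
# below is an assumption -- adjust for your site:
source /opt/openfoam230/etc/bashrc
mpirun -x PATH -x LD_LIBRARY_PATH ... --hostfile $PBS_NODEFILE -np 128 ico_DNS -parallel

(where "..." stands for the rest of the -x flags shown above)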

Blanco September 3, 2015 01:34

Hi,
What writePrecision are you using? I remember a similar problem I had with a steady-state analysis: the problem occurred when writing in ASCII with 12-digit precision, while everything was OK when using binary instead of ASCII.
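In system/controlDict, the entries I mean are (a sketch):

Code:

writeFormat     binary;   // was ascii in my case
writePrecision  12;       // only affects ascii output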

fatirishman53 September 3, 2015 02:01

I am using writePrecision 6 and writeFormat ascii. However, I just finished a similar analysis with the same parameters but a coarser mesh, and I had no problems stopping and restarting that simulation.

jherb September 3, 2015 05:24

Have you tried binary as writeFormat?

fatirishman53 September 4, 2015 00:16

Quote:

Originally Posted by jherb (Post 562281)
Have you tried binary as writeFormat?

I have not. Unfortunately, getting to 40000 s took 4 days and the data was written in ASCII (I only wrote the final time step).

I am certainly not saying that writing in binary format wouldn't resolve the problem, but ASCII values with 6-digit precision shouldn't contain enough round-off error to cause the Courant number to more than double upon restart. For that to happen, the velocity would have to double, the cell size would have to be halved, or the time step would have to double. I suppose the key word here is "shouldn't".
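To spell out the reasoning, the cell Courant number is roughly

Co = \frac{|U| \, \Delta t}{\Delta x}

With \Delta t and the mesh unchanged across the restart, going from 0.8 to above 2 would require the velocity read back from disk to be about 2.5 times what was written, which is far beyond 6-digit round-off.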

Due to this mishap, I was forced to restart the simulation from 0 s (though I did save the old data in case a resolution surfaced). I will try to reproduce the problem once it is done. I am really hoping it was just something being handled incorrectly after changing the distribution of processes on the PBS system. If I do get the problem to resurface, I will run the simulation again with the data written in binary and see whether that resolves it.

EDIT:
I remembered that I frequently run into an issue where a stray field value is written with an "a" instead of an "e" (e.g. 1.000a-05 rather than 1.000e-05). This causes an error upon restart of the simulation. Could this be a bug in the ASCII writer?
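A quick way to check a time directory for these corrupted values before restarting (a sketch; point it at your latest time directory):

Code:

# flag exponents written with "a" instead of "e", e.g. 1.000a-05
grep -rn "[0-9]a-[0-9]" 40000/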

If it helps, I am running OpenFOAM 2.3 on a CentOS cluster system.

