September 2, 2015, 22:15 |
Simulation "Blows-Up" After Restart
#1
Member
Matt
Join Date: Oct 2012
Posts: 39
Rep Power: 13
I am running a DNS with a modified version of the icoFoam solver that solves the Navier-Stokes equations based on a constant pressure gradient specified in the transportProperties file. (This was necessary because of the cyclic boundary conditions and the need to let the velocity profile develop on its own.)
The simulation ran fine from 0 s to 40000 s. I had originally set the simulation to end at 60000 s, but I found out that the PBS system had put all 128 processes on a single node instead of splitting them across 4 nodes with 32 processes each. So I changed endTime to 40000 s during the run. Everything ended normally and the case was reconstructed. At 40000 s the Courant number had reached about 0.8. Though it was increasing, it was doing so very slowly and should have stayed < 1 through the full 60000 s.

While troubleshooting the PBS system, I deleted the processor* directories. I finally got everything working correctly, changed startTime to 40000 s, changed endTime to 60000 s, and kept all other parameters the same. However, after about 3 or 4 time steps, the Courant number started increasing exponentially at an extremely rapid pace. I am very confused by what has happened here, especially considering that the flow should already be quasi-steady. Do I need to run mapFields to keep this from happening? Is "startFrom latestTime;" required (I kept "startFrom startTime;")? Any help is much appreciated.

EDIT: From the solver output, for some reason the Courant number is starting out > 2. Using "startFrom latestTime;" did not resolve this.

Also, in case it matters, I fixed the PBS problem by changing

mpirun -np 128 ico_DNS -parallel

to

mpirun -x PATH -x LD_LIBRARY_PATH -x WM_PROJECT_DIR -x WM_PROJECT_INST_DIR -x WM_OPTIONS -x FOAM_LIBBIN -x FOAM_APPBIN -x FOAM_USER_APPBIN -x MPI_BUFFER_SIZE --hostfile $PBS_NODEFILE -np 128 ico_DNS -parallel

This was the only way I could get the PBS system to pass all the necessary environment variables to nodes 2, 3, and 4. Any suggestions on this are also more than welcome!

Last edited by fatirishman53; September 2, 2015 at 22:25. Reason: New Information
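[An alternative to exporting each variable with -x is to source the OpenFOAM environment inside the PBS job script before calling mpirun, so the launcher itself already has the full environment. A minimal sketch, assuming an OpenFOAM 2.3 install under /opt/openfoam230 (the install path, solver name, and resource requests are assumptions; adjust to your cluster). Note that if Open MPI was not built with PBS support, the environment may also need to be sourced in ~/.bashrc on the compute nodes:

```shell
#!/bin/bash
# Request 4 nodes with 32 processes each (instead of 128 on one node).
#PBS -l nodes=4:ppn=32
#PBS -l walltime=96:00:00

# Load the OpenFOAM environment for this job (path is an assumption).
source /opt/openfoam230/etc/bashrc

# PBS starts the job in $HOME; change to the case directory.
cd $PBS_O_WORKDIR

# Open MPI reads the PBS hostfile to spread the ranks across the 4 nodes.
mpirun --hostfile $PBS_NODEFILE -np 128 ico_DNS -parallel > log.ico_DNS 2>&1
```

]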
September 3, 2015, 01:34 |
#2
Senior Member
Blanco
Join Date: Mar 2009
Location: Torino, Italy
Posts: 193
Rep Power: 17
Hi,
What write precision are you using? I remember a similar problem I once had with a steady-state analysis: I was writing in ASCII with 12-digit precision, and everything was fine when I switched from ASCII to binary.
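[For reference, these are the write-control entries in system/controlDict that this suggestion touches. A sketch, not the poster's actual file; with writeFormat binary, restart fields are stored at full floating-point precision instead of being truncated to writePrecision digits:

```cpp
// system/controlDict -- write-control excerpt (sketch)
writeFormat      binary;   // ascii truncates field values to writePrecision digits
writePrecision   12;       // number of significant digits, only affects ascii
writeCompression off;
timeFormat       general;
timePrecision    6;        // precision used for time directory names
```

]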
September 3, 2015, 02:01 |
#3
Member
Matt
Join Date: Oct 2012
Posts: 39
Rep Power: 13
I am using writePrecision 6 and writeFormat ascii. However, I just finished a similar analysis with the same parameters but a coarser mesh, and I had no problems stopping and restarting that simulation.
September 3, 2015, 05:24 |
#4
Senior Member
Joachim Herb
Join Date: Sep 2010
Posts: 650
Rep Power: 21
Have you tried binary as writeFormat?
September 4, 2015, 00:16 |
#5
Member
Matt
Join Date: Oct 2012
Posts: 39
Rep Power: 13
I have not. Unfortunately, getting to 40000s took 4 days and it was written in ASCII (I only wrote the final time step).
I am certainly not saying that writing in binary format wouldn't resolve the problem, but ASCII values with 6-digit precision shouldn't contain enough roundoff error to cause the Courant number to more than double upon restart. For that to happen, the velocity would have to double, the cell size would have to be halved, or the time step would have to double. I suppose the key word here is "shouldn't".

Due to this mishap, I was forced to restart the simulation from 0 s (though I did save the old data in case a resolution surfaced). I will try to reproduce the problem once it is done. I am really hoping it was just something being translated incorrectly after changing the distribution of processes on the PBS system. If I do get the problem to resurface, I will run the simulation again writing the data as binary and see if that resolves it.

EDIT: I remembered that I frequently run into an issue with a stray field value being written with an "a" instead of an "e" (i.e. 1.000a-05 rather than 1.000e-05). This causes an error upon restart of the simulation. Could this be a bug when writing ASCII format? If it helps, I am running OpenFOAM 2.3 on a CentOS cluster.

Last edited by fatirishman53; September 4, 2015 at 01:23. Reason: Another thought
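[To put a number on the roundoff argument: 6 significant digits means a relative error of at most about 5e-6 in any written field value, and the Courant number Co = |U| dt / dx is linear in |U|, so truncation alone cannot move Co from ~0.8 to > 2. A quick sanity check in plain Python (the velocity value is made up for illustration; this only mimics the ASCII truncation, it is not an OpenFOAM calculation):

```python
def round_to_sig(x, sig=6):
    """Round x to `sig` significant digits, mimicking ASCII output."""
    return float(f"{x:.{sig - 1}e}")

u_exact = 1.2345678901                 # hypothetical velocity component [m/s]
u_written = round_to_sig(u_exact, 6)   # value as stored with writePrecision 6
rel_err = abs(u_written - u_exact) / abs(u_exact)

# Co is linear in |U|, so a ~1e-6 relative error in U perturbs Co by the
# same relative amount -- nowhere near the observed jump on restart.
print(f"written value : {u_written}")
print(f"relative error: {rel_err:.2e}")
```

So a jump of that size points to something other than write precision, e.g. the restart fields themselves being corrupted (like the stray "a"-for-"e" values) or a decomposition mismatch.]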