Hi All,
Whenever I stop a running case and restart it, my pressure probe signals get a discontinuity...
Why is that? Is there any way around it?
Another thing: it seems that whenever I restart a case, I need to use smaller timesteps, even though FOAM stores the previous information (from the n-1 step). If I don't do this, I sometimes get divergence... Does anyone have the same impression, or know why that is?
Thanks a lot
If you want accurate restart, you need to store your data accurately on disk, which typically means binary. Are you storing your data ascii or binary?
Ascii.
Where do I set it to binary?
Thanks a lot.
Read the manual: system/controlDict
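For anyone else looking: the relevant entries are sketched below (a fragment only, other controlDict entries omitted; values are illustrative).

```cpp
// system/controlDict (fragment)
writeFormat     binary;     // was: ascii; binary avoids round-off on restart
writePrecision  12;         // only matters for ascii output, but raising it also helps
```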
Oops... Sorry for the stupid question...
I tried that, but I still get discontinuities... By discontinuities I mean a peak; in other words, the signal does not follow the behaviour of the last run. Instead, the pressure goes sky-high, takes a few iterations to stabilise, and only then starts following the tendency of the last run...
Anything else that might be causing it?
Is that something expected from numerical schemes that my ignorance does not allow me to recognize?
Some questions you can investigate:
Is this icoFoam/turbFoam? Moving mesh?
Does it depend on
- time differencing (Euler implicit?)
- turbulence model / no turbulence model
- type of boundary conditions
Do you have a flux phi stored? This will be reread if present and should be consistent with the pressure. Try restarting without it.
Is your pressure probe on/next to a boundary?
Are there any uninitialized variables? Run it through valgrind.
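On the phi point above, a minimal sketch of restarting without the stored flux: set the phi file aside so the solver reconstructs it from U. The time directory name 0.05 is just a stand-in; the mkdir/touch lines only mimic a written time directory so the snippet is self-contained.

```shell
# Set the stored flux aside so the solver recomputes phi from U on restart.
latest=0.05                                  # stand-in for your latest write time
mkdir -p "$latest" && touch "$latest/phi"    # mimic an existing time directory
mv "$latest/phi" "$latest/phi.bak"           # solver will recreate phi on startup
```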
Hi Mattijs and Luis Eduardo
I also have some trouble with restarting simulations. If I try to restart my simulation from FoamX, all fields begin from the initial condition. I need to change the right dictionary file to indicate that I want a rerun (initial time set to the last time, for example) rather than running FoamX.
Can somebody tell me how to make FoamX do the right thing, so that I can restart my calculation from FoamX?
Thanks in advance for all.
Hi Wladimyr,
Problem is that FoamX itself uses the settings from the controlDict.
So any settings in the controlDict controlling the writing of the case have to be set before writing your simulation.
Just edit the controlDict by hand and change the startTime to latestTime and try FoamX again.
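Concretely, the hand edit might look like this (a fragment only; other controlDict entries omitted):

```cpp
// system/controlDict (fragment)
startFrom       latestTime; // was: startTime; resumes from the last written time
startTime       0;          // ignored while startFrom is latestTime
```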
(better try all this on a small case)
serial converges - parallel diverges
I faced that same problem, so I tried to run the tutorial case channelOodles, first as serial and then as parallel on two processors.
the rest of the setup is as the tutorial.
I use Version 1.2
The serial case converges while the parallel one diverges. I noticed the CFL number is very different in the two cases during the simulation. I run in ascii, as is the default in the tutorial. Can anyone help explain why? Thanks.
I attach decomposeParDict, see below.
I do not use FoamX.
n (2 1 1);
n (1 1 1);
// Path of decomposition data file
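For context, a minimal decomposeParDict for two processors might look like the sketch below (the `n (2 1 1);` line above would be the simpleCoeffs entry; the delta value is the usual tutorial default, not taken from the attachment):

```cpp
// system/decomposeParDict (sketch for 2 processors)
numberOfSubdomains  2;
method              simple;

simpleCoeffs
{
    n       (2 1 1);    // split the domain into 2 pieces in x
    delta   0.001;      // skew factor; tutorial default
}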
I have seen the same thing in a modified solver similar to channelOodles: when I restart a simulation from previous parallel steps with the same parallel setup, the CFL number goes high. At the moment I would rather start with why there is this big difference between the serial and parallel runs. I'm aware that serial and parallel runs may not give exactly the same solutions. Thanks in advance.
Use version 1.3. Use standard
Use version 1.3. Use standard channelOodles. Check that phi gets written. Report bug if there is a problem.
Try using the GaussSeidel solver instead of BICCG, i.e.
U BICCG 1e-6 0;
U GaussSeidel 1e-6 0 1;
The change tends to improve robustness during startup and on some parallel meshes.
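In context, that change sits in the solvers subdictionary of system/fvSolution (old 1.x one-line solver syntax; the p line below is just illustrative, not part of the suggestion):

```cpp
// system/fvSolution (fragment, 1.x solver-line syntax)
solvers
{
    p               ICCG 1e-06 0;           // illustrative only
    U               GaussSeidel 1e-06 0 1;  // was: U BICCG 1e-06 0;
}
```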
Eugene, regarding:
U GaussSeidel 1e-6 0 1;
That last number makes my eyes hurt: it says how many Gauss-Seidel sweeps I should do before checking the residual. Basically, evaluating the residual costs about the same as a Gauss-Seidel sweep, so you should avoid checking too often. I suspect you will do at least 2-3 G-S sweeps to reach the tolerance, so please change the number to something between 3 and 5:
U GaussSeidel 1e-6 0 5;
Of course, if it does more than one sweep you should decrease the check frequency. In my experience it needs only one, or sometimes two, sweeps to converge. Then again, I was running steady state, so adjust accordingly.
I retested the case above with V 1.3 instead of 1.2:
- I used standard channelOodles.
- phi is written
- I tested with both backward time differencing (the default) and CrankNicholson.
Both serial and parallel restart cases give approximately the same result.
It seems that what happened before was due to using V 1.2.
Do you think I should go and test whether using GaussSeidel on V 1.2 will solve that problem, or do you have some reason to believe it was a bug in V 1.2 that was fixed in V 1.3?
Thanks for your help
Hi everybody,
I have a question for all: my run stopped for a memory reason. I cleaned everything and now I want to restart my run from my last save.
So I changed the controlDict file but, on starting the run, I got this message:
Reading field p
--> FOAM FATAL ERROR : Attempt to cast type patch to type cyclic
From function refCast<to>(From&)
in file /home/dm2/henry/OpenFOAM/OpenFOAM-1.3/src/OpenFOAM/lnInclude/typeInfo.H at line 103.
So I'm not sure of the reason. My saved format is ascii but, .... if I don't want to restart from 0, what can I do?
I used oodles and cyclic patches (everything worked fine before my stop).
I used couplePatches because of the cyclic patches, but my geometry is the same, so ...:
Mesh has no coupled patches. Nothing changed ...
hi,
problem is over
I just had to change all the cyclic entries to patch ....
Well, I just had to read OpenFOAM's message.
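For anyone hitting the same error: the type in each field file's boundaryField has to match the type declared in constant/polyMesh/boundary. A sketch of the entry that has to agree in both places (the patch name "sides" is made up for illustration):

```cpp
// 0.1/p, boundaryField entry (sketch; "sides" is a hypothetical patch name)
sides
{
    type            cyclic;   // must match the type in constant/polyMesh/boundary,
                              // otherwise refCast fails with "Attempt to cast
                              // type patch to type cyclic"
}
```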
sorry for disturbing you
Hello everybody,
I have a small (I hope) problem, similar to the one that Maka had: the computation works serial but diverges in parallel.
Now, a few details: I'm running the MRFSimpleFoam solver with the same configuration as in mixerVessel2D. My geometry consists of a quadrilateral prism rotating inside a cube, so it is a full 3D geometry with no empty patches.
As a small comparison, I ran mixerVessel2D in serial and in parallel on two processors.
As seen above, the continuity error in parallel follows the serial one very closely.
When I tried the same thing for my geometry, the parallel computation follows the serial one only up to ~80 iterations, as seen below.
In order to get a solution, I had to decrease the relative tolerances in system/fvSolution.
Another question is the memory footprint. Both cases show the same behaviour: the requested amount of memory for the parallel computation is much higher than for the serial one.
MRF mixerVessel2D serial
28047 dragos 25 0 63060 14m 9968 R 99 0.7 0:10.01 MRFSimpleFoam
MRF mixerVessel2D parallel
28170 dragos 16 0 304m 15m 11m R 56 0.8 0:01.69 MRFSimpleFoam
28169 dragos 16 0 304m 15m 11m R 56 0.8 0:01.67 MRFSimpleFoam
MRF prism serial
28176 dragos 18 0 67064 18m 9980 R 100 0.9 0:02.35 MRFSimpleFoam
MRF prism parallel
28150 dragos 25 0 306m 17m 11m R 100 0.9 0:04.77 MRFSimpleFoam
28149 dragos 21 0 306m 17m 11m R 100 0.9 0:04.76 MRFSimpleFoam
As shown above (copied from top), it seems that the memory request per processor doesn't decrease; on the contrary, it increases (16 MB -> 304 MB).
Does anyone want to comment on this: why the different behaviour in parallel than in serial (convergence and memory requirements)?
There will be some overhead in
There will be some overhead in parallel (processor patches and fields) but nowhere near that much. What is your $MPI_BUFFER_SIZE set to? Is the memory resident (check with e.g. the 'top' command) or just virtual? Does the memory needed increase indefinitely while running? What MPI are you using?
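A quick way to answer the buffer-size question from the shell (20000000 is the default that the OpenFOAM environment scripts set; the value here is only an example):

```shell
# Show what the MPI processes will actually inherit.
echo "MPI_BUFFER_SIZE=${MPI_BUFFER_SIZE:-unset}"

# If it is unset or wrong, export it before running mpirun.
export MPI_BUFFER_SIZE=20000000
echo "MPI_BUFFER_SIZE=$MPI_BUFFER_SIZE"
```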
Hi Mattijs,
I'm using the default openmpi that comes with OpenFOAM-1.4.1, and the value of $MPI_BUFFER_SIZE is also the default one: 20000000
The lines above are already from "top", and the memory request is constant over the entire computation.