CFD Online Discussion Forums

CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (http://www.cfd-online.com/Forums/openfoam-solving/)
-   -   Memory consumption in oodles (http://www.cfd-online.com/Forums/openfoam-solving/59100-memory-consumption-oodles.html)

braennstroem April 29, 2005 01:47

Hi,

I read somewhere that OpenFOAM uses between 1 and 2 kB per cell. Does anyone know how much 'oodles' needs in particular?

I have a 3 million cell mesh and my machine is not able to satisfy the memory request...

Regards!
Fabian

henry April 29, 2005 03:23

oodles requires quite a bit because it stores fields for averaging, but you can remove those if you don't need them. Even after that you will have difficulty fitting a 3M-cell case into the 2 GB that 32-bit addressing allows (I assume you are running on a 32-bit machine).

braennstroem April 29, 2005 03:46

Thanks!
Is there some way to check how much memory the simulation desires; maybe there is a small Linux tool?
You are right, I am running a 32-bit machine with 2 GB (isn't the max 4 GB?) and was actually 'expecting' something like this sooner or later. Fluent is able to run with the same mesh and the default setup; probably without any averaging (I have to check :-) ).

Anyway, averaging will not be needed until the flow is developed, so taking it out in the beginning will help speed things up. Maybe I can find the part!?

Regards!
Fabian

henry April 29, 2005 03:58

We generally use top to check the memory usage, but ps can do the same. There is also a graphical version called xosview, and if you are feeling adventurous, try ksysguard.
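For reference, a one-off check with ps might look like this (a sketch; the process name `oodles` is taken from this thread):

```shell
# One-off memory check of a running solver (process name "oodles" assumed).
# RSS = resident set size (physical RAM, in kB); VSZ = virtual size (in kB).
ps -C oodles -o pid,rss,vsz,comm
```

top shows the same numbers interactively, in its RES and VIRT columns.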

oodles is not as memory-efficient as it might be; you may be able to avoid the storage of some temporaries to get it to fit into 2 GB. You are probably also running backward differencing in time, which requires the storage of old-old-time fields and adds to the overhead of running LES.

Yes, in principle a 32-bit machine can address 4 GB, but the default kernel/OS cannot due to the way the memory allocator operates. I understand that some people are working on a Linux kernel (and operating system patches) that can address 4 GB, but I have no idea whether it is available yet or whether it works OK.

JBeilke April 29, 2005 04:13

Does Foam use double precision? This might be the reason why I can run much bigger cases with Star on the same machine. Is there a way to use single precision?

henry April 29, 2005 04:19

Yes, OpenFOAM uses double precision because for many problems like LES it's necessary, and memory is cheap these days. I ran an old version of FOAM in single precision many years ago, and it may be possible to run OpenFOAM in single precision as well, but I have never attempted it. If you think it is a good idea for you and you want to try it, edit scalar.H, which includes the code for setting double (with the float version commented out), and then recompile everything.

braennstroem April 29, 2005 06:12

Thanks!
I actually thought about a tool which predicts the memory consumption before starting, just by scanning the needed files, but this was a stupid idea.

eugene April 29, 2005 06:31

IIRC you can get the storage down to ~1.1 kB per cell by removing all averaging and using Euler implicit time stepping in oodles. This might just run on a 32-bit machine, but it will be excruciatingly slow (like really, really, really slow).
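As a rough sanity check of these figures (a sketch only; the ~1.1 kB/cell number is the estimate quoted above, and real usage depends on the solver settings):

```shell
# Back-of-envelope memory estimate for the 3M-cell mesh from this thread,
# at the ~1.1 kB/cell quoted for a stripped-down oodles.
cells=3000000
bytes_per_cell=1100
echo "approx $(( cells * bytes_per_cell / 1024 / 1024 )) MB"
```

The result lands right around the 32-bit addressing limits discussed earlier in the thread, which matches the "might just run" caveat.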

melanie May 9, 2006 09:33

Hello,

I am running transient LES on a small 3D case (~800 000 cells) with periodicity in the width with the solver oodles. The job is parallelized on 2 CPUs.

Everything is working well (results are OK, CFL always < 0.05, residuals OK...), except that the calculation has stopped twice without error messages.

I understood by reading the error messages on the machine itself at the corresponding times that the machine actually ran out of memory each time. After that, I checked my job with top and noticed that the virtual memory consumption increases as the job runs. I guess this is the reason for the crash. The machine is running Linux, OF 1.2, 2 GB RAM (including the system requirements) + 10 GB reserved for swap.

Is there something I can do to avoid such a memory consumption increase during the computation? This is very bad, because every time it crashes I lose days of computation...

Thank you!
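One way to make the growth described here visible is to log the solver's memory periodically (a sketch; the process name `oodles` and the 60-second interval are assumptions):

```shell
# Append a timestamped VSZ/RSS sample (in kB) to a log file once a minute,
# for as long as the solver process is alive.
while pgrep -x oodles > /dev/null; do
    echo "$(date '+%F %T') $(ps -C oodles -o vsz=,rss= | head -n 1)" >> oodles.mem.log
    sleep 60
done
```

Plotting the VSZ column against time afterwards shows whether the virtual size keeps climbing towards the crash.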

eugene May 9, 2006 11:05

Is this a standard or modified solver? What kind of mesh are you using?
The standard oodles solver with all the averaging should use no more than ~1.6 GB for 800k hex cells.

I have run oodles under 1.2 a lot and never experienced a memory leak.

melanie May 9, 2006 11:17

Thanks Eugene. The solver is the standard one, my mesh is non-uniform hex with some boundary layers near the walls.

For example, I re-ran the case from the latestTime this morning; it took ~500 Mb residual memory and ~700 Mb virtual memory. Now, after 5-6 hours and 360 time steps, the residual memory is nearly the same, but virtual memory has increased to more than 1Gb.

eugene May 9, 2006 12:05

Resident not residual memory.

All I can recommend is to install OF 1.3 and see if it makes any difference. If you are familiar with the valgrind tool suite use that to check for memory leaks.

Unfortunately, your problem is not something I have encountered before with a release version of OF, so I'm not sure where to start.
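The valgrind check suggested above could look like the following (a sketch; how the solver is invoked depends on the case setup, and the options shown are standard Memcheck flags):

```shell
# Run the solver under valgrind's memcheck tool and write leak details to a
# log file. Run from inside the case directory; expect a large slowdown.
valgrind --tool=memcheck --leak-check=full --log-file=oodles.valgrind.log oodles
```

valgrind typically slows a run down by an order of magnitude or more, so a few time steps on a small case are enough to spot a steady leak.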

melanie May 10, 2006 04:22

I will do some tests to check if the problem also appears on 1 CPU.

Moreover, I had actually made a little change to the solver oodles: I added the vorticity and vorticity magnitude calculation so they can be recorded via the probes utility. I don't think this could be responsible for memory leaks... I'll also do the test with the original solver.

melanie May 11, 2006 08:17

Hi,

I did the tests, and I could reproduce the memory increase with the original solver, in parallel as well as in serial computations.
I will try to better monitor the case to find out what's happening...

melanie May 11, 2006 11:01

I also tried the valgrind tool, but it says that it cannot handle 32-bit executables... (valgrind --tool=memcheck oodles)
What can I do now?

melanie May 12, 2006 03:20

sorry for the mistake: valgrind can ONLY handle 32-bit executables...

eugene May 12, 2006 05:27

Try OF 1.3

mattijs May 15, 2006 14:25

I cannot reproduce your problem. I ran the pitzDaily tutorial case for about half an hour but see no increase in memory usage. This is using OF 1.3.

Do you still see your problem in OF 1.3?

anne May 16, 2006 04:07

Hello,

I would like to report that I have run into the same kind of trouble as Melanie.

I am not using oodles but a slightly modified solver derived from channelOodles.

I run it without any memory problem on an 80x64x64 grid, but once I run it on a finer grid (100x100x64) over a long time, it fails randomly and I get the following message (see the ending; I have left in the time-step output so you can see that the computation is OK):

--------------------------------------------
Time = 71.168

Mean and max Courant Numbers = 0.0215512 0.0991229
BICCG: Solving for Ux, Initial residual = 0.00192937, Final residual = 2.94006e-09, No Iterations 3
BICCG: Solving for Uy, Initial residual = 0.0105643, Final residual = 1.66613e-08, No Iterations 3
BICCG: Solving for Uz, Initial residual = 0.00457225, Final residual = 7.12019e-09, No Iterations 3
AMG: Solving for p, Initial residual = 0.0303494, Final residual = 7.69615e-08, No Iterations 29
time step continuity errors : sum local = 2.67926e-12, global = -2.26713e-20, cumulative = -7.9221e-17
AMG: Solving for p, Initial residual = 0.00121754, Final residual = 7.72753e-08, No Iterations 17
time step continuity errors : sum local = 2.69067e-12, global = -1.82503e-20, cumulative = -7.92392e-17
Uncorrected Ubar = 1 pressure gradient = 0.00345395
new cannot satisfy memory request.
This does not necessarily mean you have run out of virtual memory.
It could be due to a stack violation caused by e.g. bad use of pointers or an out of date shared library

--------------------------------------------
Anne

mattijs May 16, 2006 04:58

It looks like a memory problem indeed. It would be very helpful if you could narrow this down to:
- OpenFOAM 1.3
- an unmodified solver
- a tutorial test case, or another case which uses blockMesh to generate the mesh.

