March 10, 2008, 05:48
Huge meshes on clusters
#1
Guest
Posts: n/a
Hello.
Generally speaking: if we want to run a huge case on a cluster (i.e., a distributed-memory system), do we need ONE machine with a huge amount of RAM to read in the mesh and partition it before running the job?

We have 10 powerful machines, each with 16 GB of RAM. Say we want to run a job that consumes almost all the RAM on each machine. We are able to generate a very big mesh in ICEM using only 16-20 GB on that particular machine, but when we read it into Fluent, a 16 GB cluster machine is nowhere near enough.

It seems odd that you need all that RAM both on one host machine AND on all the separate client machines; that is, one machine with 64 GB plus our 10 machines with 16 GB each. Is this really necessary? We start Fluent in parallel mode, but it seems to want the entire mesh to reside on the host before partitioning it. This probably makes sense, but I wonder whether there is some workaround we could try, other than buying lots more RAM for one of the machines.

Thanks!
/Mads
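The asymmetry described above can be sketched numerically. The figure of 1 GB per million cells below is an assumed rule of thumb (actual Fluent memory use depends on solver, precision, and models), and the function names are hypothetical, not part of any Fluent API:

```python
# Sketch of why a serial read-and-partition step needs a big host:
# the host must hold the WHOLE mesh, while each compute node only
# holds its own partition after distribution.

GB_PER_MILLION_CELLS = 1.0  # assumed rule of thumb, not a measured value

def host_ram_gb(million_cells, gb_per_mcell=GB_PER_MILLION_CELLS):
    """RAM the host needs if it reads the full mesh before partitioning."""
    return million_cells * gb_per_mcell

def per_node_ram_gb(million_cells, n_nodes, gb_per_mcell=GB_PER_MILLION_CELLS):
    """RAM each node needs once the mesh is split across n_nodes."""
    return million_cells * gb_per_mcell / n_nodes

# A ~160-million-cell case on 10 nodes:
print(host_ram_gb(160))          # 160.0 -> far beyond a 16 GB machine
print(per_node_ram_gb(160, 10))  # 16.0  -> fits each 16 GB node
```

Under this assumption, the run itself fits the cluster comfortably, and only the serial read/partition step drives the requirement for one large-memory host, which is exactly the imbalance the question describes.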