CFD Online Forums > Hardware

How much RAM for a cluster @ big output-files?

December 6, 2011, 05:28   #1
Eike (New Member, Join Date: Sep 2011, Posts: 18)
Hi,

I'm using a Dell T7500 workstation with two 3.6 GHz Intel quad-core CPUs and 48 GB of RAM (soon to be upgraded to 96 GB), running STAR-CCM+ 6.06.011. The cluster is a 16-node Dell system with two Intel six-core CPUs and 24 GB of RAM per node.

Today's simulation has 12M cells and uses 30 GB of RAM after initialisation on my workstation. The output file at a solution time of 0.1 s (time step 0.001 s) is about 50 GB, and it takes several hours to save after the batch run. Even much smaller output files take a long time to save. Is 24 GB of RAM enough for the head node, or would an upgrade reduce the time for saving output files?

Best wishes
Eike

December 7, 2011, 11:31   #2
kyle (Senior Member, Austin, TX, Join Date: Mar 2009, Posts: 134)
The amount of RAM that you have has very little to do with how long it takes to save. Network speed and hard disk speed are the bottlenecks for saving parallel runs.

Are you saving the entire flow field for every single timestep to a single *.CCM file? If you are, then you might want to try saving to a different *.CCM file every time. This way you don't have to read in and save the entire flowfield history every time you want to output.

Additionally, frequent saves of the entire flowfield don't fit well with the Star-CCM+ "philosophy". Your post-processing should be done during the simulation. Set up your scenes to output the pictures during the run and set up monitors to log the time history of whatever variables you want to track.
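As a rough illustration of why saving the full field every step adds up, here is a back-of-envelope check based on the numbers in post #1 (the "roughly 100 saved timesteps" is an assumption from 0.1 s at a 0.001 s step, not a quoted figure):

```python
# Back-of-envelope: 12M cells, a 50 GB file accumulated over 0.1 s of
# solution time at a 0.001 s step, i.e. roughly 100 saved timesteps
# (assumed, not quoted from the post).
cells = 12_000_000
total_bytes = 50 * 1024**3
saves = 100

per_save = total_bytes / saves        # data appended per saved timestep
bytes_per_cell = per_save / cells     # implied storage per cell per save
print(f"{per_save / 1024**3:.2f} GB per save, ~{bytes_per_cell:.0f} bytes/cell")
```

That works out to about half a gigabyte per save, only a handful of stored doubles per cell — so a file written once per save at least avoids rewriting the whole accumulated history on every output.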

December 7, 2011, 18:23   #3
RobertB (Member, Join Date: Jun 2010, Posts: 86)
The philosophy of knowing up front what figures you will require is, at best, flawed.

I appreciate there are areas where this is the case but in design optimization of complicated devices this is typically not the case.

Would it be acceptable for a steady-state run to have only a few pictures/values, set ahead of the analysis, that you could not change without fully rerunning the case once you realized how the physics of the new design actually behaved?

I appreciate that with transients it is hard due to the data involved, one reason why STAR allows you to save transient result files with partial data. In this case could you export a subset of data to minimize file size?

Good Luck.

December 7, 2011, 19:25   #4
kyle (Senior Member, Austin, TX, Join Date: Mar 2009, Posts: 134)
Quote (Originally posted by RobertB):
The philosophy of knowing up front what figures you will require is, at best, flawed.
I'm right there with you, buddy. I cannot stand the design of Star-CCM+.

December 8, 2011, 02:25   #5
Eike (New Member, Join Date: Sep 2011, Posts: 18)
Quote (Originally posted by kyle):
Set up your scenes to output the pictures during the run and set up monitors to log the time history of whatever variables you want to track.
That is what I've done ...

There is no autosave during the simulation. Let's say my file is named test.sim and runs 1000 iterations. The output file is named test@01000.sim and is saved in the same directory once all the calculations have finished.

My workstation is used for pre- and post-processing. After the setup I copy the sim file to the cluster. The cluster has an InfiniBand network and 20 TB of RAID disk space. Copying a file from disk to disk on the cluster, the disk speed is just a little below 1 GB/s. So I don't think the disks are the bottleneck behind the 7 hours it took to save 107 GB (yesterday's file).
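For what it's worth, the numbers in this post make the mismatch easy to quantify (a sketch; the 1024 MB/s figure is simply the ~1 GB/s copy rate mentioned above):

```python
# Effective write rate implied by the save described here (107 GB in
# 7 hours), versus the ~1 GB/s measured for a plain disk-to-disk copy.
file_mb = 107 * 1024
seconds = 7 * 3600

effective_mb_s = file_mb / seconds    # what the save actually achieved
disk_mb_s = 1024                      # measured copy rate, ~1 GB/s
print(f"save: {effective_mb_s:.1f} MB/s vs disk: {disk_mb_s} MB/s "
      f"(~{disk_mb_s / effective_mb_s:.0f}x slower)")
```

A save running two orders of magnitude below the disk's copy rate suggests the time is going into how the solver collects and serialises the data (e.g. funnelling everything through one process), not into raw disk or network bandwidth — which would also mean more head-node RAM is unlikely to help.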

December 8, 2011, 08:46   #6
RobertB (Member, Join Date: Jun 2010, Posts: 86)
I have to admit that saves of big files (for me that is ~25-30 GB) have seemed very slow. I assumed it was something to do with Linux cache sizes/our disk setup/... but it still seemed slow given how fast disks should be.

I wonder whether it is copying the files locally and then reading and writing from the same disk, or whether there is a somewhat brain-dead sorting algorithm in the code that only really dies at very large data sizes.
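One way to separate the filesystem from the solver is to benchmark raw sequential writes on the cluster's scratch disk; a minimal sketch (the size and temp-file location are placeholders, not measurements from this thread):

```python
# Minimal sequential-write check with an fsync at the end, so the
# reported rate reflects the disk rather than the page cache.
import os
import tempfile
import time

size_mb = 256
chunk = b"\0" * (1024 * 1024)   # 1 MB of zeros

fd, path = tempfile.mkstemp()
start = time.time()
with os.fdopen(fd, "wb") as f:
    for _ in range(size_mb):
        f.write(chunk)
    f.flush()
    os.fsync(f.fileno())        # force data to disk before stopping the clock
elapsed = time.time() - start
os.remove(path)
print(f"{size_mb / elapsed:.0f} MB/s sequential write")
```

If this comes out near the disk's rated speed while the solver's saves crawl, the slowdown is in the application's write path, not the storage.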

