CFD Online Discussion Forums > OpenFOAM Installation
[OpenFOAM.org] OpenFOAM Cluster Setup for Beginners
(https://www.cfd-online.com/Forums/openfoam-installation/141392-openfoam-cluster-setup-beginners.html)

Ruli September 7, 2014 05:21

OpenFOAM Cluster Setup for Beginners
 
Dear Foamers,

I have a new question for you. I would like to set up a small cluster for OpenFOAM simulations. I know the Ubuntu basics and how to install and run OpenFOAM on a single PC, but I have no experience with cluster setup, so I am hoping for some help. If this thread gets a lot of general feedback, I could imagine distilling it into some kind of step-by-step "Beginner's guide to server setup for OpenFOAM simulations", which I unfortunately could not find on the internet.

Some information about the hardware:
- 8 Cluster blades
- 1 Xeon Quad Core per blade
- 12 GB RAM per blade
- Gigabit Ethernet Switch, 1 Gb/s

I would like to install:
- Ubuntu 12.04 LTS
- OpenFOAM 2.3.0 (Deb pack)

At this point, I have a lot of questions (in the following marked with numbers).
1.) What is the general procedure for setting up an Ubuntu cluster?
I imagine the following steps:
A) Set up one blade normally (install Ubuntu + OpenFOAM)
B) Clone the installed blade on all the other blades
2.) Do I really have to install OF on all blades or only on the master node?
C) Physically connect the cluster blades
D) Set up the general network in Ubuntu
3.) How does network setup in Ubuntu work? Do I have to use certain software/settings?
E) Install job-control software on the master node (I have read about TORQUE, for example, and I have already used qsub)
4.) Which job-control software is recommended for parallel computing?
F) Change OF settings for parallel computing
5.) What exactly do I have to do here? (My rough guess is sketched after this list.)
G) Voilà, it works, and I use the OF cluster happily ever after...
6.) Did I forget anything?
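
To make question 5.) a bit more concrete, here is my rough guess of what step F) would involve, pieced together from what I have read about running OpenFOAM in parallel. The hostnames, paths and core counts are only placeholders, so please correct me:

Code:

# Guess for step F) -- placeholders only, please correct me.
# 1. Tell OpenFOAM how to split the case: system/decomposeParDict
cat > system/decomposeParDict <<'EOF'
FoamFile { version 2.0; format ascii; class dictionary; object decomposeParDict; }

numberOfSubdomains 32;      // 8 blades x 4 cores
method             scotch;  // needs no extra coefficients
EOF

# 2. Split the case into 32 processor* directories
decomposePar

# 3. List the blades MPI may use (4 slots per blade)
cat > machines <<'EOF'
node01 slots=4
node02 slots=4
# ... and so on, one line per blade
EOF

# 4. Run across the nodes; the case directory has to be visible
#    on every blade (e.g. via an NFS share)
mpirun -np 32 --hostfile machines simpleFoam -parallel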
Thank you all a thousand times for any help!

Best regards
Julian

linnemann September 7, 2014 07:53

Hi

I strongly urge you to use this instead of rolling your own setup.

http://www.rocksclusters.org/wordpress/

It has all the tools you need to get up and running and to manage all the machines, etc.

Once the hardware is installed, I can have it up and running in 4-6 hours using Rocks.

Also, it's based on CentOS, which is derived from Red Hat, so stability is impeccable.

Their documentation is great, and there is a large community to help.

Ruli September 8, 2014 09:04

Dear Niels,

thanks for the reply. I looked through the manual and it seems manageable.

Anyhow, I also found a detailed guide for a Beowulf cluster. So I have two new questions:
2.1) Is there a relevant difference between a Rocks cluster and a Beowulf cluster?
2.2) If yes, which one is preferable?
Best regards
Julian

linnemann September 8, 2014 09:12

Short answer

No, http://en.wikipedia.org/wiki/Beowulf_cluster

Quote:

No particular piece of software defines a cluster as a Beowulf
So "Beowulf cluster" just describes what Rocks Cluster sets up for you automatically.

Ruli September 8, 2014 10:25

Dear Niels,

thanks again!

I read through the Rocks manual and identified the following basic steps:

A) Set up master node (Rocks CDs for OS + addons,...)
B) Set up slave nodes one by one (Rocks CDs for OS + addons,...)
C) Install OpenFOAM on master
D) Modify OpenFOAM

Does this sound about right?
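
For C) and D) in particular, my current guess is something like the following. Since the Ubuntu Deb pack will not work on CentOS, I assume a source build, and /share/apps is only what I understood to be NFS-shared with the compute nodes in Rocks; the FOAM_INST_DIR part is from the comments in etc/bashrc, so again please correct me:

Code:

# Guess for steps C) and D) -- paths are placeholders.
# Install into a directory the compute nodes can see via NFS:
cd /share/apps
tar xzf OpenFOAM-2.3.0.tgz                  # source tarball fetched beforehand

# Point the OpenFOAM scripts at the non-default location and compile once:
export FOAM_INST_DIR=/share/apps
source $FOAM_INST_DIR/OpenFOAM-2.3.0/etc/bashrc
cd $FOAM_INST_DIR/OpenFOAM-2.3.0 && ./Allwmake

# Make every login (the home directories are shared as well) pick it up:
echo "export FOAM_INST_DIR=/share/apps" >> ~/.bashrc
echo "source /share/apps/OpenFOAM-2.3.0/etc/bashrc" >> ~/.bashrc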

Best regards
Julian

linnemann September 8, 2014 13:36

Hi

Yes, but in B) use PXE boot instead of CDs; it's much easier.
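
Roughly like this, from memory, so double-check against the Rocks users guide:

Code:

# on the frontend: register new compute nodes
insert-ethers              # choose "Compute" as the appliance type
# then power on each blade with network/PXE boot first in the BIOS;
# they install themselves and show up as compute-0-0, compute-0-1, ...

# afterwards, check that all nodes are registered
rocks list host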

maem93 July 21, 2016 21:25

Hi Ruli and Niels,

I recently installed OpenFOAM on a Rocks Cluster 6.2 system, but I cannot run a parallel simulation using all the nodes. Do you know what kind of modifications I need to make in OpenFOAM to run a parallel simulation across all the nodes?

Thank you.

derekm July 22, 2016 04:14

See the discussion in this thread; it contains an example.
http://www.cfd-online.com/Forums/ope...dless-ssh.html

Note that with current Open MPI you need to set up passwordless ssh from every node to every node, rather than just head node to slave and slave to head node.
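
Roughly like this (hostnames are just examples); with a shared NFS home directory you only have to do it once:

Code:

# generate a key for your user and authorise it for that same user
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# then ssh once from each node to every other node (node01, node02, ...)
# to accept the host keys, or relax StrictHostKeyChecking in ~/.ssh/config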

With 32 cores on 8 nodes you should probably go InfiniBand + GbE rather than just GbE.

Derek

