Best practice for parallel executions

July 14, 2008, 12:38   #1
Martin Aunskjaer (Member, Denmark)
Hi,
I have 3 days of OpenFOAM experience under my belt and have worked up the courage to attempt a parallel run.

Having found hints in the forum for replacing OpenMPI with other MPI implementations, I now have OpenFOAM running in parallel using MPICH2. My question is: exactly what information needs to be present on every node participating in the run? This, I believe, is a valid question if one's OpenFOAM installation is not shared via NFS.

My brute-force approach was to build what I needed on one host and scp the whole installation to the other nodes, i.e. everything under /home/me/OpenFOAM/. This means that every time I rebuild, say, a shared library on one node, I need to remember to copy it to the remaining nodes of the cluster. It also means the directory structure will be the same on all nodes, which might not be what I want.

So what is the best way to ensure that:
1) you have the correct data on every node;
2) data is easily updated on all nodes if it changes on one?

/Martin

July 14, 2008, 13:31   #2
Mark Olesen (Senior Member, https://olesenm.github.io/)
Unfortunately, the best practice would be to have an NFS-mount for the OpenFOAM installation (and gcc libs). You can use "ldd -v" to determine which libraries are really needed by your application, but simply synchronizing all the files will give you fewer problems in the long run.

Since you don't have NFS, you can at least use 'rsync' instead of 'scp' to reduce the amount of unnecessary copying. As for the calculation case itself, it is possible to have different roots for each host, but I haven't done this myself.

BTW: you might also find 'pdsh' useful for your environment.

July 14, 2008, 13:50   #3
Eugene de Villiers (Senior Member)
For different roots, add the following to your decomposeParDict:

distributed yes;

roots
(
-list of case roots for each machine-
);

July 15, 2008, 01:36   #4
Martin Aunskjaer (Member, Denmark)
Thanks for your answers. This is very good stuff for me.

September 5, 2008, 07:11   #5
Michael Schroeter (New Member, Germany)
Hi Eugene,

can you show me an example that works correctly? I tried this (OF 1.4.1):

...
distributed yes;

roots
2
(
"/home/me_on_disk1/Root_One"
"/home/me_in_disk2/Root_Two"
);

But after calling "blockMesh", "decomposePar" and "mysolver", it ignores my settings: "mysolver" tries to fetch data from the "rootdir" given in the solver call, "mysolver <root> case".

Michael

September 8, 2008, 09:12   #6
Eugene de Villiers (Senior Member)
I might add that in 1.4.1 the first process will always use the command-line root. The second, third and subsequent processes will use the roots specified in decomposeParDict, starting from the first (so processor1 will use Root_One). This is a known bug, and you still need to specify the same number of roots as there are processors, otherwise the run will fail; the last root will simply not be used at all.

Other than this, what you posted should work.
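Concretely, for a hypothetical 3-processor run under OF 1.4.1 the entries would be assigned as sketched below (paths are made up; the comments reflect the off-by-one behaviour described above):

```
distributed yes;

roots
3
(
    "/disk1/caseRoot"   // used by processor1 (processor0 takes the command-line root)
    "/disk2/caseRoot"   // used by processor2
    "/disk3/caseRoot"   // never actually used, but must still be listed
);
```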

September 9, 2008, 05:10   #7
Michael Schroeter (New Member, Germany)
Great! It works :-)

Thanx a lot.
