|
July 14, 2008, 12:38 |
|
#1 |
Member
Martin Aunskjaer
Join Date: Mar 2009
Location: Denmark
Posts: 53
Rep Power: 17 |
Hi,
I have 3 days of OpenFOAM experience under my belt and have worked up the courage to attempt a parallel run. Having found hints in the forum for replacing OpenMPI with other MPI implementations, I now have OpenFOAM running in parallel using MPICH2.

My question is: exactly what information needs to be present on every node participating in the run? This, I believe, is a valid question if one's OpenFOAM installation is not shared via NFS. My brute-force approach was to build what I needed on one host and scp the whole installation to the other nodes, i.e. everything under /home/me/OpenFOAM/. This means that every time I rebuild e.g. a shared lib on one node, I have to remember to copy it to the remaining nodes of the cluster. Also, this procedure means the directory structure will be the same on all nodes, which might not be what I want.

So what is the best way to ensure that:
1) you have the correct data on every node
2) data is easily updated on all nodes if changed on one.
/Martin |
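P.S. For reference, the brute-force copy I do after each rebuild is essentially just the following (node2/node3 are placeholder host names for the other machines in my cluster):

    # repeat for every node after every rebuild
    scp -r /home/me/OpenFOAM node2:/home/me/
    scp -r /home/me/OpenFOAM node3:/home/me/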
|
July 14, 2008, 13:31 |
|
#2 |
Senior Member
Mark Olesen
Join Date: Mar 2009
Location: https://olesenm.github.io/
Posts: 1,679
Rep Power: 40 |
Unfortunately, the best practice would be to have an NFS-mount for the OpenFOAM installation (and gcc libs). You can use "ldd -v" to determine which libraries are really needed by your application, but simply synchronizing all the files will give you fewer problems in the long run.
Since you don't have NFS, you can at least use 'rsync' instead of 'scp' to reduce the amount of unnecessary copying. As for the calculation case itself, it is possible to have different roots for each host, but I haven't done this myself. BTW: you might also find 'pdsh' useful for your environment. |
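For example, a rough sketch of both suggestions (the solver name, host name and paths are only placeholders for your own setup):

    # show which shared libraries a given solver really links against
    ldd -v $(which icoFoam)

    # mirror the local OpenFOAM tree to another node, transferring only changed files
    rsync -av /home/me/OpenFOAM/ node2:/home/me/OpenFOAM/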
|
July 14, 2008, 13:50 |
|
#3 |
Senior Member
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21 |
For different roots, add the following to your decomposeParDict:
distributed yes; roots ( -list of case roots for each machine- ); |
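A filled-in sketch of that block, with made-up paths standing in for the per-machine case roots, could look like this:

    distributed yes;

    roots
    (
        "/disk1/me/run"
        "/disk2/me/run"
    );

How many roots are needed, and which processor picks up which one, is clarified a couple of posts further down.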
|
July 15, 2008, 01:36 |
|
#4 |
Member
Martin Aunskjaer
Join Date: Mar 2009
Location: Denmark
Posts: 53
Rep Power: 17 |
Thanks for your answers. This is very good stuff for me.
|
|
September 5, 2008, 07:11 |
|
#5 |
New Member
Michael Schroeter
Join Date: Mar 2009
Location: Germany
Posts: 4
Rep Power: 17 |
Hi Eugene,
can you show me an example that works correctly? I tried this (OF 1.4.1):

    ...
    distributed yes;

    roots
    2
    (
        "/home/me_on_disk1/Root_One"
        "/home/me_in_disk2/Root_Two"
    );

But after calling "blockMesh", "decomposePar" and "mysolver", it ignores my settings: "mysolver" tries to fetch data from the root directory given in the solver call, "mysolver <root> case".
Michael |
|
September 8, 2008, 09:12 |
|
#6 |
Senior Member
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21 |
I might add that in 1.4.1 the first process will always use the command-line root. The second, third and subsequent processes will use the roots specified in decomposeParDict, starting from the first (so processor1 will use Root_One). This is a known bug; you still need to specify the same number of roots as there are processors, otherwise the run will fail, but the last root will simply not be used at all.
Other than this, what you posted should work. |
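To make that mapping concrete, a 3-processor run under 1.4.1 with roots ( Root_One Root_Two Root_Three ) in decomposeParDict would, as described above, resolve to:

    processor0 -> the <root> given on the command line
    processor1 -> Root_One
    processor2 -> Root_Two
    (Root_Three is only there to satisfy the count check and is never used)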
|
September 9, 2008, 05:10 |
|
#7 |
New Member
Michael Schroeter
Join Date: Mar 2009
Location: Germany
Posts: 4
Rep Power: 17 |
Great! It works :-)
Thanx a lot. |
|