CFD Online Discussion Forums (http://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (http://www.cfd-online.com/Forums/openfoam-solving/)
-   -   Best practice for parallel executions (http://www.cfd-online.com/Forums/openfoam-solving/58570-best-practice-parallel-executions.html)

aunola July 14, 2008 11:38

Hi,
I have 3 days of OpenFOAM experience under my belt and have worked up the courage to attempt a parallel run.

Having found hints in the forum for replacing OpenMPI with other MPI implementations, I now have OpenFOAM running in parallel using MPICH2. My question is: exactly what information needs to be present on every node participating in the run? This, I believe, is a valid question if one's OpenFOAM installation is not shared via NFS.

My brute-force approach was to build what I needed on one host and scp the whole installation to the other nodes, i.e. everything under /home/me/OpenFOAM/. This means that every time I rebuild, say, a shared lib on one node, I need to remember to copy it to the remaining nodes of the cluster. This procedure also means the directory structure will be the same on all nodes, which might not be what I want.
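
In case it helps, the copy step was just a loop along these lines (the hostnames here are made up):

for node in node1 node2 node3; do
    # push the complete tree to each node; wasteful, but simple
    scp -r /home/me/OpenFOAM/ ${node}:/home/me/
done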

So what is the best way to ensure that:
1) you have the correct data on every node, and
2) data is easily updated on all nodes when it changes on one?

/Martin

olesen July 14, 2008 12:31

Unfortunately, the best practice would be to have an NFS mount for the OpenFOAM installation (and the gcc libs). You can use "ldd -v" to determine which libraries are really needed by your application, but simply synchronizing all the files will give you fewer problems in the long run.
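
For example, with icoFoam standing in for whatever solver you run:

# list the shared libraries the binary actually links against
ldd -v $(which icoFoam)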

Since you don't have NFS, you can at least use 'rsync' instead of 'scp' to reduce the amount of unnecessary copying. As for the calculation case itself, it is possible to have different roots for each host, but I haven't done this myself.
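
A sketch of the rsync variant, with a made-up hostname:

# only changed files are transferred; --delete also removes files
# on the node that no longer exist locally
rsync -az --delete /home/me/OpenFOAM/ node1:/home/me/OpenFOAM/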

BTW: you might also find 'pdsh' useful for your environment.
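
For instance, to check all nodes in one go (the host list is just an example):

pdsh -w node[1-3] 'ls /home/me/OpenFOAM'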

eugene July 14, 2008 12:50

For different roots, add the following to your decomposeParDict:

distributed yes;

roots
(
    // list of case roots, one per machine
);
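
Then decompose and launch as usual; with the 1.4-style root/case arguments that would look something like this (solver name, process count and paths are only placeholders):

decomposePar /home/me/run myCase
mpirun -np 4 mysolver /home/me/run myCase -parallel

Each remote process then looks for its processorN directory under the root listed for its machine.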

aunola July 15, 2008 00:36

Thanks for your answers. This is very good stuff for me.

misch September 5, 2008 06:11

Hi Eugene,

can you show me an example that works correctly? I tried this (OF 1.4.1):

...
distributed yes;

roots
2
(
    "/home/me_on_disk1/Root_One"
    "/home/me_in_disk2/Root_Two"
);

But after calling "blockMesh", "decomposePar" and "mysolver", it ignores my settings: "mysolver" tries to fetch data from the root dir given in the solver call, "mysolver <root> case".

Michael

eugene September 8, 2008 08:12

I might add that in 1.4.1 the first process will always use the command-line root. The second, third and subsequent processes will use the roots specified in the decomposeParDict, starting from the first (so processor1 will use Root_One). This is a known bug, and you still need to specify the same number of roots as there are processors, otherwise the run will fail; the last root will simply not be used at all.
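
In other words, for a four-processor run the assignment ends up like this:

processor0 -> root given on the command line
processor1 -> first entry in roots
processor2 -> second entry in roots
processor3 -> third entry in roots
(the fourth entry must be present, but is never read)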

Other than this, what you posted should work.

misch September 9, 2008 04:10

Great! It works :-)

Thanks a lot.

