Running OpenFOAM in parallel with different locations for each process

Posted December 15, 2011 at 17:56 by wyldckat

The other day a fellow forum user asked me for help with a problem about the specific configuration of a case for running on a cluster. The configuration was simple: each slave machine of the cluster has its own independent storage location under a similar path.
Below is an edited version of the reply I sent him.


OK, first detail: in the following file (part of the OpenFOAM source tree) you have a (sort of) well documented "decomposeParDict":
Code:
applications/utilities/parallelProcessing/decomposePar/decomposeParDict
The part you are looking for says this:
Code:
//// Is the case distributed
//distributed     yes;
//// Per slave (so nProcs-1 entries) the directory above the case.
//roots
//(
//    "/tmp"
//    "/tmp"
//);
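For context, with these two entries uncommented, a minimal "decomposeParDict" could look something like the sketch below (FoamFile header omitted; the decomposition method and the "/tmp" paths are only illustrative, as in the comments above):
Code:
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n               (2 2 1);
    delta           0.001;
}

// the case is distributed over several disks
distributed     yes;

// per slave process (so nProcs-1 entries): the directory above the case
roots
(
    "/tmp"
    "/tmp"
    "/tmp"
);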
Now, I had a hard time understanding how "/data" is really structured on each node, namely:
  • Is it "/data/node1"?
  • Or "/data/user" on all of them, but on different disks?
Either way, let's assume "/data/node*" and that the case is the good old "damBreak" tutorial. Given the indication above, you'll need to place a copy of the whole tutorial case on each "/data/node*/". The steps should be something like:
  1. On the workstation or master node, set up the case and decompose it - based on the "tutorials/multiphase/interFoam/laminar/damBreak" case:
    Code:
    blockMesh
    cp 0/alpha1.org 0/alpha1
    setFields
    decomposePar
  2. Edit the "system/decomposeParDict" and set the roots like this (4 processes, 3 slaves):
    Code:
    distributed     yes;
    roots           (
    "/data/node1"
    "/data/node2"
    "/data/node3"
    );
    Note that the master node (node0) is not mentioned.
  3. Copy/clone the whole "damBreak" folder to each node, so that you end up with this structure (one way of doing the copy is sketched after this list):
    Code:
    /data/node0/damBreak
    /data/node1/damBreak
    /data/node2/damBreak
    /data/node3/damBreak
    Therefore, each node will have a copy of the whole case.
    If you wish to save space, you'll need to do some trial and error to figure out what's actually necessary on each node. So for now, let's stick to what works.
  4. Run the solver in parallel, starting from node0 (directly or indirectly). Since I was testing multiple roots on a single machine, I simply ran this from my "/data/node0/damBreak" (a plain mpirun alternative is sketched after this list):
    Code:
    foam -s -p  interFoam
  5. When it is done, you'll have 4 out-of-sync copies of the case. On the master node, only "/data/node0/damBreak/processor0" will be filled in. On node1: "/data/node1/damBreak/processor1". And so on. To sync them back, you can do something like this, again running from "damBreak" on node0:
    Code:
    for a in 1 2 3; do rsync -a node$a:/data/node$a/damBreak/ ./; done
    That should sync them back into the master node0.
  6. Then run:
    Code:
    reconstructPar
    And you're done.
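As mentioned in steps 3 and 4, here is a rough sketch of how the copy to the slave nodes and a plain mpirun launch could look, when run from "/data/node0/damBreak" on node0. The host names "node1" to "node3", the hostfile name "machines" and the use of Open MPI are assumptions that depend on your own cluster and MPI installation; if you don't have a helper command like the "foam -s -p" one used in step 4, the mpirun line is the standard way of launching the solver:
Code:
# step 3: clone the decomposed case from node0 to the slaves
for a in 1 2 3; do
    rsync -a ./ node$a:/data/node$a/damBreak/
done

# step 4: launch the solver on 4 processes (Open MPI syntax), with a
# hostfile named "machines" listing the machines, one per rank, in order
mpirun -np 4 -hostfile machines interFoam -parallel > log.interFoam 2>&1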

Comments

  1. Hi Bruno,

    For a parallel OpenFOAM run with distributed storage, do the roots need to be absolute paths?
    If the local temporary scratch disk for each node is set in an environment variable "$TMPDIR" (same name but different location on each node), can it be used in the decomposeParDict as follows:

    distributed yes;
    roots (
    "$TMPDIR"
    "$TMPDIR"
    "$TMPDIR"
    );

    I suspect OpenFOAM will replace $TMPDIR in each of the roots with the path from the master node only (as the application starts on the master node). Is there a way to check if OpenFOAM is using the correct distributed storage?
    Posted February 1, 2015 at 17:46 by katakgoreng
  2. Hi katakgoreng,

    Quote:
    Originally Posted by katakgoreng
    I suspect OpenFOAM will replace $TMPDIR in each of the roots with the path from the master node only (as the application starts on the master node). Is there a way to check if OpenFOAM is using the correct distributed storage?
    The application decomposePar is only run on the master node, therefore it will only use the master node's local temporary folder location.
    In order to use the temporary folder of a specific node (machine), you'll have to first ask that node (machine) which path it is using for its temporary folder.
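    For example, something along these lines could be used to check what each slave node reports (a rough sketch; the host names "node1" to "node3" are just placeholders for your cluster's machine names):
    Code:
    # ask each slave which path its $TMPDIR points to
    # (single quotes so the variable expands on the remote side)
    for node in node1 node2 node3; do
        echo -n "$node: "
        ssh "$node" 'echo $TMPDIR'
    done
    Those are the paths that would then have to be written into the roots list.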

    Best regards,
    Bruno
    Posted February 1, 2015 at 18:06 by wyldckat
 
