OpenFOAM with PBS job manager
Is there any official tutorial or example for running OpenFOAM under the PBS job scheduler (the qsub command)? I saw some pages on the web, but they seem to focus on how to solve a problem first.
I actually don't care about that, since my job is to install OpenFOAM (already done) and check that users can submit their jobs to the cluster. So, I need an example which shows how to submit an already-set-up problem (meshing, ...) to the job scheduler. Basically, I write a PBS script with an "ls" command in it and submit it to the scheduler to see if all nodes are working and communicating. So, I am searching for a similarly simple command for OpenFOAM. Any idea for that? |
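A minimal sketch of such a sanity-check script might look like the following. The directive values (job name, node counts) are assumptions and should be adjusted to the actual queue configuration:

```shell
#!/bin/bash
#PBS -N pbs-sanity            # job name (arbitrary)
#PBS -l nodes=2:ppn=1         # assumed resources; adjust to your cluster
#PBS -j oe                    # merge stdout and stderr into one file

# PBS starts the job in $HOME; $PBS_O_WORKDIR holds the directory qsub
# was run from. The fallback lets the script also run outside PBS.
cd "${PBS_O_WORKDIR:-.}"
hostname
ls
```

Submit it with `qsub sanity.pbs`; the resulting `.o<jobid>` output file should contain the executing node's hostname and the submission directory listing.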
Hi mahmoodn,
You can use blockMesh and snappyHexMesh, provided by OpenFOAM, for meshing. For testing the PBS job scheduler, you can simply pick a tutorial case from the $FOAM_TUTORIALS directory and test with that. $FOAM_TUTORIALS/incompressible/icoFoam/cavity is a good start. You need to copy it to your working directory and run blockMesh inside the case directory. After meshing is done, run icoFoam to begin your simulation. P.S.: don't forget to source the OpenFOAM bashrc first :D |
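The suggested steps can be sketched as a short shell session. The install path below is an assumption taken from later posts in this thread; the guard keeps the snippet harmless on machines without OpenFOAM:

```shell
#!/bin/bash
# Assumed install location (site-specific); adjust as needed.
OF_BASHRC=/export/apps/mechanics/OpenFOAM/OpenFOAM-2.3.0/etc/bashrc

if [ -f "$OF_BASHRC" ]; then
    source "$OF_BASHRC"               # sets $FOAM_TUTORIALS and friends
    mkdir -p "$HOME/of-test" && cd "$HOME/of-test"
    cp -r "$FOAM_TUTORIALS/incompressible/icoFoam/cavity" .
    cd cavity
    blockMesh                         # mesh the lid-driven cavity case
    icoFoam                           # run the solver serially
else
    echo "OpenFOAM bashrc not found at $OF_BASHRC; edit the path above"
fi
```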
Do you have any idea about this:
Code:
mahmood@cluster:of-test$ source /export/apps/mechanics/OpenFOAM/OpenFOAM-2.3.0/etc/bashrc |
Hi mahmoodn,
Have you modified the bashrc from OpenFOAM? What's the value of your $FOAM_INST_DIR and $WM_PROJECT_INST_DIR? |
Yes, I modified it during the installation. The lines associated with that are
Code:
export WM_PROJECT=OpenFOAM
Code:
$ ls /export/apps/mechanics/OpenFOAM/OpenFOAM-2.3.0/
Code:
$ echo $FOAM_INST_DIR |
Hmm, it seems the shell variables aren't exported correctly.
What's the output of export | grep -E '(FOAM_|WM_)'? And what are the values of your $PATH and $LD_LIBRARY_PATH? |
Check it
Code:
$ export | grep -E '(FOAM_|WM_)' |
It seems your applications aren't installed to $FOAM_APPBIN.
Did you export WM_COMPILER=Gcc48 before building OpenFOAM? This may be the issue. If so, just rebuild OpenFOAM without exporting it. |
I ran it again. Please see the variables first
Code:
root@cluster:ThirdParty-2.3.0# module load openmpi-x86_64 || export PATH=$PATH:/usr/lib64/openmpi/bin
This time, when I run blockMesh, I see its usage :)
Code:
root@cluster:OpenFOAM-2.3.0# blockMesh
Question: What commands should I put in ~/.bashrc or /etc/profile (for all users)? These two?
Code:
root@cluster:ThirdParty-2.3.0# module load openmpi-x86_64 || export PATH=$PATH:/usr/lib64/openmpi/bin |
Hi mahmoodn,
Good to see it working :D Quote:
Just keep the user variables the same as those used when you built OpenFOAM :p. Regards, Weiwen |
It seems that the following line has no effect in /etc/profile and ~/.bashrc
Code:
alias of230='module load openmpi-x86_64; source /export/apps/mechanics/OpenFOAM/OpenFOAM-2.3.0/etc/bashrc WM_NCOMPPROCS=4 foamCompiler=ThirdParty WM_COMPILER=Gcc48 WM_MPLIB=SYSTEMOPENMPI'
Code:
source /export/apps/mechanics/OpenFOAM/OpenFOAM-2.3.0/etc/bashrc WM_NCOMPPROCS=4 foamCompiler=ThirdParty WM_COMPILER=Gcc48 WM_MPLIB=SYSTEMOPENMPI |
Quote:
Code:
alias of230='module load openmpi-x86_64; source /export/apps/mechanics/OpenFOAM/OpenFOAM-2.3.0/etc/bashrc WM_NCOMPPROCS=4 foamCompiler=ThirdParty WM_COMPILER=Gcc48 WM_MPLIB=SYSTEMOPENMPI'
The aliases are useful when there are multiple OpenFOAM versions, as they won't mess up the environment variables ;). Regards, Weiwen |
OK I got it... So the correct thing is to first run of230 and then blockMesh.
Thanks for that. I will move on to the tutorial for PBS. |
Hi again,
I ran blockMesh and then icoFoam in $FOAM_TUTORIALS/incompressible/icoFoam/cavity. I saw some output without any errors, so it seems to be working. Is there any parallel example (needing more than one core) that I can test with PBS? |
Quote:
The following steps may be helpful:
1. Copy the propeller case to your working directory.
2. Run ./Allrun.pre to generate the mesh and set initial conditions.
3. Run decomposePar to decompose the domain into 4 parts.
4. Prepare the PBS submit script. A simple example may look like this:
Code:
#!/bin/bash
Regards, Weiwen |
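The submit script in the post above was truncated to its first line in this thread; a sketch of what a 4-way parallel script might look like is below. The resource line is an assumption, the environment-setup lines are copied from earlier posts, and the solver name assumes the propeller case from the pimpleDyMFoam tutorials; all should be checked against your site:

```shell
#!/bin/bash
#PBS -N propeller
#PBS -l nodes=1:ppn=4         # 4 cores, matching decomposePar's 4 subdomains
#PBS -j oe

# Recreate the build-time environment (site-specific, from earlier posts)
module load openmpi-x86_64
source /export/apps/mechanics/OpenFOAM/OpenFOAM-2.3.0/etc/bashrc WM_NCOMPPROCS=4 foamCompiler=ThirdParty WM_COMPILER=Gcc48 WM_MPLIB=SYSTEMOPENMPI

# PBS starts jobs in $HOME; change to the submission directory first
cd "$PBS_O_WORKDIR"

# Run the decomposed case in parallel (solver name is an assumption)
mpirun -np 4 pimpleDyMFoam -parallel
```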
Some good news and some bad news...
Good: I am able to run the mpi command on the master node as well as the compute node. In other words, the following commands work as expected.
Code:
of230
However, when I submit the job, the output file contains the following error:
Code:
mahmood@cluster:propeller$ qsub submit.tor |
Regarding the error
Code:
Cannot read "/home/mahmood/system/decomposeParDict"
Code:
mahmood@cluster:propeller$ pwd
Any idea for that? |
OK, I think I found the problem. According to Basics: Set-up for different desktop machines (post #7 and after), I found that my home directory has not been exported over NFS! (I should say that I created my account on the server only.)
So, let me test something and I will be back. |
Hi mahmoodn,
You need to setup NIS server and client for sharing account info. Moreover, the home of compute nodes should be mounted via NFS. Regards, Weiwen |
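As a sketch, the NFS server-side pieces might look like this; the hostnames, paths, and mount options are all assumptions for illustration, and the NIS setup is distribution-specific and omitted here:

```shell
# On the head node, export /home to the compute nodes (/etc/exports):
#   /home   compute-*(rw,sync,no_root_squash)
# then reload the export table:
exportfs -ra

# On each compute node, mount it (e.g. a line in /etc/fstab):
#   cluster:/home   /home   nfs   defaults   0 0
mount /home
```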
OK, I found the problem :)
In the PBS script, we have to add cd $PBS_O_WORKDIR, and this is important. Without it, by adding pwd before the mpi command in the PBS script, I saw that the working directory was /home/mahmood, which is actually $HOME; for that reason it appended system/decomposeParDict to that path. With cd $PBS_O_WORKDIR, the working directory is correctly set to the location where the propeller case exists. So, everything is OK now and I am able to run an OpenFOAM job. I have to tell the guys about that. Thanks. :) |
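The fix described here boils down to a few lines; the fallback expansion is an addition that keeps the snippet runnable outside a PBS job:

```shell
#!/bin/bash
# PBS launches jobs in $HOME, so relative paths like
# system/decomposeParDict resolve against $HOME and fail.
# $PBS_O_WORKDIR is set by PBS to the directory where qsub was
# invoked; changing there first fixes the lookup.
cd "${PBS_O_WORKDIR:-$PWD}"   # fallback keeps this runnable interactively
pwd                           # should print the case directory, not $HOME
```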