CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   OpenFOAM Running, Solving & CFD (https://www.cfd-online.com/Forums/openfoam-solving/)
-   -   Cluster test with openfoam (https://www.cfd-online.com/Forums/openfoam-solving/57919-cluster-test-openfoam.html)

clo April 11, 2006 09:35

Hi everybody, I set up a cluster (only a server and one node) and I wanted to see if it works correctly, so I tried to run an OpenFOAM case. I took the OpenFOAM User Guide (version 1.3) and found an example using LAM/MPI (page U-83).
I launched LAM and all was OK:

n-1<12588> ssi:boot:base:linear: booting n0 (clo)
n-1<12588> ssi:boot:base:linear: booting n1 (oscarnode1)
n-1<12588> ssi:boot:base:linear: finished
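
The boot itself was done with lamboot and a hostfile (called machines here, just an example name) that simply lists the two machines, roughly:

lamboot -v machines

with machines containing:

clo
oscarnode1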

Then I tried to generate the mesh on the slave node (n1 in my case):

mpirun n1 -np 1 blockMesh $FOAM_RUN/tutorials/interFoam damBreak -parallel </dev/null>& log &

The output was:
[1] 16717
and nothing more...

OpenFOAM itself is working fine, because I tried it on the server and the calculations run without problems.

It seems like the slave node isn't doing anything at all... What is the [1] 16717 number?
Has anyone already run this kind of job?
Thanks, ciao

fra76 April 11, 2006 09:42

The " 16717" number is : first
 
The "[1] 16717" number is [1]: first process in background; 16717: PID (Process id) of the process.

It's normal, and it is a consequence of the ambersand at the end of the command line.
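For example, any command sent to the background prints the same kind of line (job number and PID will of course differ):

sleep 60 &
[1] 12345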
If you want to see what really happens, try to run mpirun without redirecting the output and without sending it to the background, i.e.:
mpirun n1 -np 1 blockMesh $FOAM_RUN/tutorials/interFoam damBreak -parallel </dev/null

I usually run OpenFOAM in parallel on a cluster, and it works quite well.
Francesco

clo April 11, 2006 09:51

Thanks for your help! I tried it, but it seems like nothing happens; the output:

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  1.3                                   |
|   \\  /    A nd           | Web:      http://www.openfoam.org               |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/

Exec : blockMesh /home/ufftecn1/OpenFOAM/ufftecn1-1.3/run/tutorials/interFoam damBreak -parallel

Probably it's something in my cluster... Maybe it's a silly question, but can you give me a hint on how to be sure that something is actually going on?

fra76 April 11, 2006 10:06

I really don't know if you can run "blockMesh" in parallel...
If you look at the manual, you can find the standard procedure to run a parallel case.
First, you have to run decomposePar, in order to decompose the computational mesh. Then you can run the solver on the decomposed case, with the same number of processes.
So, you can try to run a tutorial in parallel (e.g. the damBreak case with interFoam), using more than one process.
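For the damBreak tutorial, the whole sequence looks roughly like this (only a sketch, assuming decomposeParDict is set up for 4 subdomains and using the root/case arguments of version 1.3; adjust paths and -np to your setup):

blockMesh $FOAM_RUN/tutorials/interFoam damBreak
setFields $FOAM_RUN/tutorials/interFoam damBreak
decomposePar $FOAM_RUN/tutorials/interFoam damBreak
mpirun -np 4 interFoam $FOAM_RUN/tutorials/interFoam damBreak -parallel < /dev/null >& log &
reconstructPar $FOAM_RUN/tutorials/interFoam damBreak

Note that decomposePar and reconstructPar run serially; only the solver goes through mpirun.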
I hope this can help you.
Francesco

clo April 11, 2006 10:11

I will try to do as you said... Thanks, ciao

parallelauvi April 12, 2006 01:28

I have a problem with decomposePar for the damBreakFine tutorial case.
When I run decomposePar on the damBreakFine case, it exits with a fatal error like this:

--> FOAM FATAL I/O ERROR
Cannot find 'value' entry which is required to set the values of the default patch field.
Please add the 'value' entry to the write function of the user defined boundary condition.

....
A file name is mentioned in the error:
damBreakFine/0/pd::atmosphere line 51 to 52

in the "pd" file surroundng the lins 51 to 52 is here:

atmosphere
{
    type            totalPressure;
    p0              uniform;
}

=============================================
Before running decomposePar I edited the decomposeParDict file according to the tutorial (roughly as sketched below) and ran these:
1) blockMesh on damBreakFine
2) setFields on damBreakFine
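
The decomposeParDict settings follow the User Guide example, roughly like this (only a sketch: 4 subdomains with the simple decomposition method):

numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n               (2 2 1);
    delta           0.001;
}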

Dear clo and Francesco Del Citto, or anyone: please help.

Auvi

clo April 12, 2006 02:40

Hi auvi, for the moment I have only run the damBreak case (not the Fine one), but given that the boundary conditions are the same, in my 0/p file at line 50 I read:

atmosphere
{
    type            totalPressure;
    p0              uniform 0;
    value           uniform 0;
}


Maybe it can help you...

vkrishna February 22, 2009 23:44

I have a problem running OF in parallel over two Linux machines. The solver is sonicTurbFoam.
I am attaching the log for reference:

Exec : sonicTurbFoam -parallel
Date : Feb 19 2009
Time : 12:31:39
Host : soorya
PID : 5104
Case : /home/openfoam15/OpenFOAM/vijay-1.5/run/vayumach2clus
nProcs : 4
Slaves :
3
(
soorya.5105
kidambiHP219.4565
kidambiHP219.4566
)

Pstream initialized with:
floatTransfer : 1
nProcsSimpleSum : 0
commsType : nonBlocking

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Reading thermophysical properties

Selecting thermodynamics package hThermo<pureMixture<constTransport<specieThermo<hConstThermo<perfectGas>>>>>
1 additional process aborted (not shown)




Also, these are the error messages I get:
Thu Feb 19 12:31:36 IST 2009
nohup: appending output to `nohup.out'
[soorya:05105] *** An error occurred in MPI_Waitall
[soorya:05105] *** on communicator MPI_COMM_WORLD
[soorya:05105] *** MPI_ERR_TRUNCATE: message truncated
[soorya:05105] *** MPI_ERRORS_ARE_FATAL (goodbye)
[soorya:05104] *** An error occurred in MPI_Waitall
[soorya:05104] *** on communicator MPI_COMM_WORLD
[soorya:05104] *** MPI_ERR_TRUNCATE: message truncated
[soorya:05104] *** MPI_ERRORS_ARE_FATAL (goodbye)
mpirun noticed that job rank 2 with PID 4565 on node kidambiHP219 exited on signal 15 (Terminated).
Command exited with non-zero status 1
0.02user 0.01system 0:06.14elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
4752inputs+16outputs (27major+2408minor)pagefaults 0swaps

