Running OpenFOAM in parallel
Hi,
I am pretty new to OpenFOAM, as you can see from the number of posts I have made in the last few days. I had a lot of problems running an LES of my case, and I finally got it running today. Our high-performance cluster has 16 processors per node, and I was able to run the case on 1 node with 16 processors. However, I could not run it on 2 nodes with 16 processors per node. Here are the details of my decomposeParDict:

[decomposeParDict contents truncated in the original post]

my tcsh job script:

[tcsh job script truncated in the original post]

and the error, beginning with an "[0]"-prefixed MPI message:

[error message truncated in the original post]

When I ran decomposePar, it did produce 32 processor directories, so I am not sure why I get this error. Any help would be greatly appreciated. Thanks a tonne in advance.

Vishwa
Vishwa,
You need to specify the metis coefficients in the decomposeParDict file, so there need to be 32 coefficients. See below:

/*--------------------------------*- C++ -*----------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  1.6                                   |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    location    "system";
    object      decomposeParDict;
}
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

numberOfSubdomains 32;

method          metis;

simpleCoeffs
{
    n           ( 2 1 1 );
    delta       0.001;
}

hierarchicalCoeffs
{
    n           ( 2 2 1 );
    delta       0.001;
    order       xyz;
}

metisCoeffs
{
    processorWeights
    (
        1 1 1 1 1 1 1 1
        1 1 1 1 1 1 1 1
        1 1 1 1 1 1 1 1
        1 1 1 1 1 1 1 1
    );
}

manualCoeffs
{
    dataFile    "";
}

distributed     no;

roots           ();
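As a quick sanity check after decomposePar, the number of processorN directories must equal numberOfSubdomains. Below is a minimal sketch of that check; the case directory and subdomain count are illustrative, and a temporary directory stands in for a real OpenFOAM case here.

```shell
# Sketch: verify decomposePar's output count matches numberOfSubdomains.
# A temporary directory stands in for the case root; in a real case you
# would run the check from the case directory after decomposePar.
caseDir=$(mktemp -d)
nSub=4                                  # numberOfSubdomains from decomposeParDict (illustrative)
for i in 0 1 2 3; do
    mkdir "$caseDir/processor$i"        # stand-in for decomposePar's output
done
nDirs=$(ls -d "$caseDir"/processor* | wc -l | tr -d ' ')
if [ "$nDirs" -eq "$nSub" ]; then
    echo "decomposition OK: $nDirs processor directories"
else
    echo "mismatch: expected $nSub, found $nDirs"
fi
rm -r "$caseDir"
```

Since Vishwa reports that 32 directories were created, the decomposition itself likely succeeded, and the failure is more probably in the MPI launch across the two nodes.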
vishwa and feijooos,
I think you both have some experience setting up OpenFOAM on a cluster. Please, really please, give me some advice. I am trying to set up OpenFOAM on our cluster, but sadly I do not know how to do it. The cluster has gcc 4.4.1 and OpenMPI 1.3.3, and I have to use the compilers already installed there: I am not allowed to use any compiler from the ThirdParty directory when the code runs on multiple nodes. So I need to compile OpenFOAM with the cluster's own compiler.

First question: are gcc 4.4.1 and OpenMPI 1.3.3 enough to compile OpenFOAM 1.5?

Second question: if they are, how do I do it? I have changed the compiler option and the MPI setup as follows:

# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# WM_COMPILER_INST = OpenFOAM | System
# WM_COMPILER_INST=OpenFOAM                                  (original)
WM_COMPILER_INST=System                                      # (changed)

case "$WM_MPLIB" in
OPENMPI)
    mpi_version=openmpi-1.2.6
    # export MPI_HOME=$WM_THIRD_PARTY_DIR/$mpi_version       (original)
    # export MPI_ARCH_PATH=$MPI_HOME/platforms/$WM_OPTIONS   (original)
    export MPI_ARCH_PATH=/opt/openmpi/1.3.3/gcc-4.4.1        # (changed)

It compiles, but there are error messages like the following:

Note: ignore spurious warnings about missing mpicxx.h headers
+ wmake libso mpi
Making dependency list for source file OPwrite.C
could not open file ompi/mpi/cxx/pmpicxx.h for source file OPwrite.C
could not open file ompi/mpi/cxx/constants.h for source file OPwrite.C
could not open file ompi/mpi/cxx/functions.h for source file OPwrite.C
could not open file ompi/mpi/cxx/datatype.h for source file OPwrite.C
could not open file ompi/mpi/cxx/exception.h for source file OPwrite.C

Who can give me some idea about this?
Sorry, I can't help you. I don't know much about installing OF.
Dear flying,
It seems you haven't set the path to the OpenMPI includes on your cluster correctly; the dependency builder cannot find the header files. MPI_ARCH_PATH should point to the directory that contains OpenMPI's include directory. And don't forget to set WM_MPLIB correctly in etc/bashrc within the OpenFOAM directory.

Regards,
Matthias
Thank you for your reply!
I have changed the path and set WM_MPLIB in etc/bashrc, but it still doesn't work.

Best wishes!
Dear flying,
This is what my settings.sh looks like for OpenMPI:

OPENMPI)
    mpi_version=1.3.3
    export MPI_HOME=/software/all/openmpi/$mpi_version
    export MPI_ARCH_PATH=$MPI_HOME

    # Tell OpenMPI where to find its install directory
    export OPAL_PREFIX=$MPI_ARCH_PATH

    _foamAddPath $MPI_ARCH_PATH/bin
    _foamAddLib  $MPI_ARCH_PATH/lib

    export FOAM_MPI_LIBBIN=$FOAM_LIBBIN/$mpi_version
    unset mpi_version
    ;;

Within the /software/all/openmpi/1.3.3 directory is OpenMPI's include folder, /software/all/openmpi/1.3.3/include/openmpi. Your dependency builder is looking for the folder ompi inside this directory. For example, one line in your post reads:

could not open file ompi/mpi/cxx/pmpicxx.h for source file OPwrite.C

So it seems you still haven't set the path correctly. Check that these directories are inside the path you specified for MPI_ARCH_PATH. According to your settings, the file /opt/openmpi/1.3.3/gcc-4.4.1/include/openmpi/ompi/mpi/cxx/pmpicxx.h should exist. I slightly wonder about the gcc-4.4.1 folder in your path; maybe try MPI_ARCH_PATH=/opt/openmpi/1.3.3.

Regards,
Matthias
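To see whether MPI_ARCH_PATH points at a usable OpenMPI tree, one can check for the very header the dependency builder complains about. A minimal sketch, using the path from flying's post (this path is flying's, not a known-good one; adjust MPI_ARCH_PATH to your own cluster):

```shell
# Check that the C++ binding header wmake's dependency builder needs
# exists under MPI_ARCH_PATH before running Allwmake.
MPI_ARCH_PATH=/opt/openmpi/1.3.3
header=$MPI_ARCH_PATH/include/openmpi/ompi/mpi/cxx/pmpicxx.h
if [ -f "$header" ]; then
    echo "OK: $header"
else
    echo "MISSING: $header -- fix MPI_ARCH_PATH before running Allwmake"
fi
```

If the file is missing, either the install prefix is different from what you assumed or OpenMPI was built without its C++ bindings.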
Parallel Running
Hi there,
I am a beginner user of OpenFOAM, and I really need to learn how to run it with parallel processing. I read all the available guidelines on this site (www.OpenFoam.com), but I still have some basic problems:

1. I cannot find "etc/hosts" in the OpenFOAM directories, where I am supposed to define the host names of the machines. Since I am not familiar with that kind of file, I would be grateful if anybody could provide an example.

2. I read somewhere that I have to create a "file that contains the host names of the machines. The file can be given any name and located at any path." I could not understand what that means, or what kind of file I have to create. Can anybody give a hint or an example?

3. In decomposeParDict, the roots entry is a list of root paths, <root0>, <root1>, ..., one for each node:

roots <nRoots> ( "<root0>" "<root1>" ... );

I need some help with this too.

As you can see, I do not know anything about configuring a parallel run, so I would be grateful for any help.

Thanks a lot,
Farhangi
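On points 1 and 2 above: /etc/hosts is a system file (not inside the OpenFOAM tree) mapping hostnames to IP addresses, while the file of machine names is something you create yourself and pass to mpirun. A minimal sketch of the latter, with placeholder hostnames (node01/node02 are assumptions) and the solver launch shown commented out rather than executed:

```shell
# The "machines" file is plain text: one hostname per line, no header.
# node01/node02 are placeholder hostnames; with Open MPI, slots=N caps
# the number of ranks launched on that host.
cat > machines <<'EOF'
node01 slots=2
node02 slots=2
EOF
cat machines
# Then launch from the case directory (not executed here):
# mpirun --hostfile machines -np 4 icoFoam -parallel
```

The file can indeed have any name and live at any path; you just pass that path to mpirun's --hostfile option.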
Same problem
Any help will be appreciated.
OpenFOAM on a cluster
I had a similar problem, and solved it simply by running the Allwmake script in the ThirdParty folder.
I have used two machines (a laptop and a desktop) connected via a cross cable.
I added the following to /etc/hosts on both machines:

192.168.1.1 maysam-desktop
192.168.1.2 maysam-laptop

I also set the IP addresses of both machines to match the lines above. After trying to run, an error appears (quoted in the original post).
I don't know if this comes too late, but here's a sample of my decomposeParDict:
numberOfSubdomains 6;        // number of CPUs you'll use

method scotch;               // method of decomposition

simpleCoeffs
{
    n           ( 2 2 2 );
    delta       0.001;
}

hierarchicalCoeffs
{
    n           ( 1 1 1 );
    delta       0.001;
    order       xyz;
}

// in my case I use scotch decomposition, so I specify the processor weight for each CPU
scotchCoeffs
{
    processorWeights ( 1 1 1 1 1 1 );
}

manualCoeffs
{
    dataFile    "";
}

distributed yes;             // distribute the data over different hard drives

roots 5
(
    "/home/davidal"
    "/home/davidal"
    "/home/davidal"
    "/home/davidal"
    "/home/davidal"
);

Since there are 6 CPUs, there is one master and 5 slaves. The master is processor 0, so there is no need to specify a root for it; the other 5 CPUs are slaves. Note that one machine can have more than one CPU. In my case, the case files are located under /home/davidal on all my machines. See http://www.cfd-online.com/Forums/ope...am-solved.html for more info.

There will be 6 folders of data in total, named processor0, processor1, processor2, processor3, processor4 and processor5. Since I have 2 CPUs per machine, each machine saves data in only 2 of these folders:

machine1 saves data in processor0 and processor1
machine2 saves data in processor2 and processor3
machine3 saves data in processor4 and processor5

Hope this helps. I'm quite new to OF and struggled with this for a bit, but now it's working fine :)
Thanks for your post. My problem has been solved; you can see the solution at: http://www.cfd-online.com/Forums/ope...tml#post297100

Regards,
Maysam
Thanks, David, it is of great help to me.
Hi All,
I would like to run a parallel case on a server. I read that I should give this command:

mpirun -np 4 -hostfile FILE buoyantPimpleFoam -parallel

where FILE is a text file containing the name of the machine I am running my case on. Does this file need any header, or is a single line with the name of the machine enough?

Thanks,
Samuele
Hi Samuele,
I'm going to quote (minor changes to make it more legible): Quote:
Best regards,
Bruno
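For anyone landing here, the short version of the answer as I understand it: the file given to mpirun's --hostfile option needs no header, just one machine name per line; an optional slots=N sets how many ranks may run on that host. A minimal sketch, where "myserver" is a placeholder hostname and the solver launch is shown commented out rather than executed:

```shell
# A one-machine hostfile: a single line, no header required.
# "myserver" is a placeholder hostname; slots=4 allows 4 ranks on it.
printf 'myserver slots=4\n' > hostfile
cat hostfile
# mpirun -np 4 --hostfile hostfile buoyantPimpleFoam -parallel   # not executed here
```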
Thanks a lot, Bruno.
I solved it. I opened another thread (maybe you can check it, or delete it and continue here) about how to put my process in the background when running on the server.

Thanks a lot,
Samuele
Hi Samuele,
OK... I guess the thread you speak of is this: http://www.cfd-online.com/Forums/ope...el-server.html

I'll answer it there...

Best regards,
Bruno
Perfect.
Thanks a lot!
Need help with parallel run
I am trying to do an LES of a round jet using parallel computing. I used the decomposePar utility to decompose the domain. My workstation is equipped with 2 quad-core processors with multithreading (2 threads/core). Does decomposing into more than 8 subdomains make any difference?