Running in parallel

October 10, 2007, 23:51   #21
Jonathas Assunção de Castro
Hello,

I'm a new user and I'm trying to run my case in parallel.

I've already decomposed my case, but I ran into problems when I tried to start LAM.

Please, can someone help me? What is the command to start LAM?

October 11, 2007, 02:38   #22
Cedric DUPRAT
Hi Jonathas

OpenFoam User Guide
3.4 Running applications in parallel
3.4.2 Running a decomposed case
3.4.2.1 Starting a LAM multicomputer (U-82)

regards,
cedric
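In short, that section boils down to something like the following (the machines file name and hostnames below are just examples; check the guide for your own setup):

```shell
# Create a machines file listing the hosts of the LAM multicomputer,
# one hostname per line (cpu=N for multi-processor nodes):
cat > machines <<EOF
hostA
hostB cpu=2
EOF

# Boot LAM across those machines:
lamboot -v machines

# ... run your decomposed case with mpirun ... then shut LAM down:
lamhalt
```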

November 5, 2008, 06:27   #23
Gijsbert Wierink
Hi everyone,

I want to run my case in parallel, first with just two nodes on my laptop. When I try to decompose the case with decomposePar I get the error

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  1.4.1                                 |
|   \\  /    A nd           | Web:      http://www.openfoam.org               |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/

Exec : decomposePar . bubbleCellpar
Date : Nov 05 2008
Time : 12:10:41
Host :
PID : 15597
Root : /home/gijsbert/OpenFOAM/run
Case : bubbleCellpar
Nprocs : 1
Create time

Time = 0
Create mesh


Calculating distribution of cells
Selecting decompositionMethod simple


--> FOAM FATAL ERROR : Wrong number of processor divisions in geomDecomp:
Number of domains : 2
Wanted decomposition : (2 2 1)

From function geomDecomp::geomDecomp(const dictionary& decompositionDict)
in file geomDecomp/geomDecomp.C at line 53.

FOAM exiting


My decomposeParDict looks like this:

numberOfSubdomains 2;

method          simple;

simpleCoeffs
{
    n           (1 2 1);
    delta       0.001;
}

hierarchicalCoeffs
{
    n           (1 1 1);
    delta       0.001;
    order       xyz;
}

metisCoeffs
{
    processorWeights
    (
        1
        1
    );
}

manualCoeffs
{
    dataFile    "";
}

distributed     no;

roots
(
);


From the error output it looks like only 1 processor is used while I try to divide the case over two. Can anyone help me with this problem?

Thank you in advance
Regards, Gijs

November 5, 2008, 06:42   #24
sivakumar selvaraju
Hi,
can you tell me how many processors you are going to use?

November 5, 2008, 06:55   #25
sivakumar selvaraju
Hey,
go to the decomposeParDict in your system directory, and in it set

simpleCoeffs
{
    n       (2 1 1);
    delta   0.001;
}
...

then

metisCoeffs
{
    processorWeights
    (
        1
        1
    );
}

Just make these changes, then try to decompose. I hope it will work.

Bye,
siva

November 5, 2008, 10:51   #26
Dragos
Hello Gijsbert,
It looks to me that you've chosen to have 2 partitions, and the splitting algorithm is simple. However, you specified that the algorithm should split into 2 along x, 2 along y, and 1 along z, which makes 4 partitions.
Although you show the correct setting:
Quote:
simpleCoeffs
{
    n       (1 2 1);
    delta   0.001;
}
decomposePar sees something different:
Quote:
Wanted decomposition : (2 2 1)
so you probably modified the wrong file...
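For completeness: the (2 2 1) split that decomposePar reports would only be valid with four subdomains, since the product of the n entries must equal numberOfSubdomains, i.e.:

```
numberOfSubdomains 4;

method          simple;

simpleCoeffs
{
    n           (2 2 1);    // 2 x 2 x 1 = 4 = numberOfSubdomains
    delta       0.001;
}
```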

I hope this is helpful,
Dragos

November 6, 2008, 07:56   #27
Gijsbert Wierink
Hi guys,

Many thanks for your replies.

@ siva:
I am trying to decompose my case to use the two cores of my dual-core laptop, just to see if it runs. Soon I will get a quad-core, so I want to be able to do parallel runs. For that I copied system/decomposeParDict from the interFoam/damBreak tutorial into my case directory and modified it to have 2 processors and a decomposition of (2 2 1). But somehow the actual decomposition does not work.

@ Dragos:
As described above, I have modified the system/decomposeParDict file in the case directory and, as you write, during the decomposition that file is apparently not read, or is read from elsewhere.

Any ideas? Does decomposePar perhaps look for decomposeParDict in a different place than I think?

Rgds, Gijsbert
Regards, Gijs

November 6, 2008, 08:38   #28
sivakumar selvaraju
Hi,
you have 2 processors, but you are splitting your geometry into 4, so the command will not work.
Instead,

simpleCoeffs
{
    n       (1 2 1);
    delta   0.001;
}

is the right way.
If you have tried this and are still getting the problem, attach it here (I mean what the computer says) and maybe we can have a look.

Bye,
siva

November 6, 2008, 10:50   #29
Gijsbert Wierink
Hi siva,

Thank you for your reply. I did edit decomposeParDict as you suggested, but apparently it was not saved, although I did press Ctrl+S. Perhaps it is most foolproof to actually close decomposeParDict before decomposing the case, so that it is saved for sure and nothing strange happens. When I tried again today, everything worked fine! So I have just run my first parallel case successfully. Thanks for your quick replies.

Cheers, Gijs
Regards, Gijs

March 11, 2009, 09:38   #30
Ana Eduarda Sa Silva
Hello all,

I have been running some cases with simpleFoam in a cylinder using parallel implementations on a quad-core processor. I would like to better understand when the parallel communication occurs between the 4 subdomains. Are the governing equations solved in subdomain 1 and then sent to subdomain 2? How significant is it to use an implicit coupling method rather than an explicit one?

Thanks in advance,
Eduarda

March 11, 2009, 09:59   #31
Prapanch Nair
Eduarda,

The communication occurs at the end of each time step. All 4 processors have a copy of the program (binary). The binary can identify the processor number on which it is running. Using this, each processor knows what part of the domain it has to solve. It uses the flow field values along the boundaries of the adjacent subdomains (which are on adjacent processors) as its own boundary conditions. So after each time step, the flow field values along the edges of the subdomains are exchanged. At the end of the simulation, the subdomains are composed back together to form the full domain.

I have described just one way of doing it. For better efficiency, cells may also be distributed among processors in a round-robin fashion. But I hope you get a gist of what happens.

Refer to this book: Parallel Programming, by Barry Wilkinson and Michael Allen.
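As a toy illustration of that exchange (plain Python, not OpenFOAM code, with made-up names): a 1-D explicit diffusion solve split into two subdomains that swap their edge values once per time step reproduces the serial result exactly:

```python
def step(u, left, right, alpha=0.1):
    """One explicit diffusion step on a list of cell values,
    using `left`/`right` as ghost (boundary) values."""
    v = [left] + u + [right]
    return [u[i] + alpha * (v[i] - 2.0 * v[i + 1] + v[i + 2])
            for i in range(len(u))]

def serial_run(u, nsteps):
    # Zero-gradient outer boundaries: ghost value = edge value.
    for _ in range(nsteps):
        u = step(u, u[0], u[-1])
    return u

def parallel_run(u, nsteps):
    mid = len(u) // 2
    a, b = u[:mid], u[mid:]            # two "processors"
    for _ in range(nsteps):
        # Halo exchange: each subdomain gets its neighbour's edge value
        # from the previous time step, like a processor boundary patch.
        a_ghost, b_ghost = b[0], a[-1]
        a = step(a, a[0], a_ghost)
        b = step(b, b_ghost, b[-1])
    return a + b                        # "reconstruct" the full domain
```

Both runs apply identical arithmetic, so the decomposed solve matches the serial one; in a real MPI run the exchange is a message between processes rather than a list read.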

March 12, 2009, 09:19   #32
Rishi
Hello All,

Does somebody have an example of a decomposeParDict file using the "distributed yes;" and "roots" options?

I would like to use two nodes of a cluster to run OpenFOAM-1.5.x in parallel. I would like to use /tmp or /scratch of local cluster disks, instead of using the mounted ~/ to store the data.
I have enabled passwordless ssh.
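My current guess, based on the User Guide, is something like the following, but I am not sure about the roots entries (the paths below are placeholders; I believe the list needs one root path per slave process, the master using its own case root):

```
distributed     yes;

roots
(
    "/scratch/rishi"    // root path for processor 1 (slave node)
);
```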

Thanks in advance,
Rishi

April 13, 2009, 15:39   #33
Tomislav Maric
I'm trying to run damBreak tutorial in parallel on a HP 6820s dual core laptop. I've found this thread:

Dual Core CPU

that states it's worth the trouble. I change the number of sub domains in "decomposeParDict" to 2. I'm using simple decomposition and set the coefficient "n" to (2 1 1). The "decomposePar" runs fine, telling me the number of processors is 1 (nProc: 1, with dual core?).

As a result I have two new directories: "processor0" and "processor1" with "0" and "constant" as their subdirectories. checkMesh tells me I have 2268 cells split in two on each "processor". The problem happens when I run "paraFoam -case processor0" (as read from page 64 in OF U-guide). Paraview starts fine, but when I try to import mesh data and click the Apply button, it shuts down and I get this error in console:

*** An error occurred in MPI_Bsend
*** before MPI was initialized
*** MPI_ERRORS_ARE_FATAL (goodbye)
[icarus:12344] Abort before MPI_INIT completed successfully; not able to guarantee that all other processes were killed!


I have NO clue about parallel runs (or CPU architecture and its inner workings) and I'm running most of my cases on this laptop, so I wanted to speed things up at least a bit. What does this error mean? Also, if I try

"mpirun -np 1 interFoam -parallel > log &"

it won't work, but

"mpirun -np 2 interFoam -parallel > log &"

runs fine (the results are written in directories "processor0" and "processor1"). Now, my question is: why does decomposePar tell me that I have nProc: 1 (number of processors 1) and creates processor0 and processor1 directories, while mpirun works only with the argument -np 2? Am I doing something wrong?

April 14, 2009, 17:23   #34
Mattijs Janssens
1) The MPI_Bsend message sounds like a bug: probably some boundary condition that does an extraneous parallel communication even when not running in parallel. Try reconstructPar + post-processing on the undecomposed case instead.

2) The "Nprocs : 1" message I assume comes from the header. decomposePar itself runs on one processor only; it can decompose for any number of processors, as given in the decomposeParDict.
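For reference, the full cycle with the 1.4-era root/case argument style used elsewhere in this thread looks roughly like this (the case name is a placeholder):

```shell
decomposePar . myCase                        # split the case into processor* dirs
mpirun -np 2 interFoam . myCase -parallel    # run on 2 processes
reconstructPar . myCase                      # merge processor* back into myCase
paraFoam . myCase                            # post-process the undecomposed case
```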

April 14, 2009, 18:09   #35
Tomislav Maric
Quote:
Originally Posted by mattijs View Post
The MPI_Bsend message sounds like a bug: probably some boundary condition that does an extraneous parallel communication even when not running in parallel. Try reconstructPar + post-processing on the undecomposed case instead.
Thank you, I've tried it already and I've seen that it works fine on the damBreak case. I was worried because I have a case that's pretty expensive, and I wanted to try a parallel run on my laptop first.

Quote:
Originally Posted by mattijs View Post
2) The "Nprocs : 1" message I assume comes from the header. decomposePar itself runs on one processor only; it can decompose for any number of processors, as given in the decomposeParDict.
I guess it creates "processor0" and "processor1" for two mesh sub-domains assigned to one physical processor with two cores? Again, I don't know enough details of computer architecture to understand this yet; the important thing is that it seems to be working fine, for two days now. I'm running interFoam on a pretty heavy case without complaints, so far.

Thank You,

Tomislav

March 31, 2011, 18:21   #36
Problem running parallel job
Ankit
Hi Foamers,

I have been trying to run the biconic25-55Run35 tutorial on two processors. I used the simple decomposition scheme in the decomposePar utility to decompose the mesh. When I run the decomposed case, it runs for a few time steps (5-10) and then crashes. Can somebody help me debug this problem?

thanks ...
