#1 |
Member
Markus Weinmann
Join Date: Mar 2009
Location: Stuttgart, Germany
Posts: 77
Rep Power: 17
Hi all,
Running my case on a single core works fine (OF-1.5.x, simpleFoam, second order, default relaxation). When I try to run the same case in parallel, the simulation keeps blowing up after some iterations. Up to now I haven't found a way to avoid divergence in parallel; reducing the under-relaxation and switching to first order does not help. I am using my own turbulence model library, which is loaded via the controlDict file. When I decompose the case, the following warning occurs:

--> FOAM Warning :
    From function dlLibraryTable::open(const fileName& functionLibName)
    in file db/dlLibraryTable/dlLibraryTable.C at line 79
    could not load /rhome/mw405/OpenFOAM/mw405-1.5.x/lib/linux64GccDPOpt/libmyincompressibleRASModels.so:
    undefined symbol: _ZN4Foam14incompressible8RASModel30dictionaryConstructorTablePtr_E

I don't know what this message is trying to tell me. How does it relate to the problem with running jobs in parallel? Does anyone know what is going wrong here?

Regards,
Markus
#2 |
Assistant Moderator
Bernhard Gschaider
Join Date: Mar 2009
Posts: 4,225
Rep Power: 51
Hi Markus!
That kind of symbol is usually a C++ symbol that has been mangled into a plain linker symbol (http://en.wikipedia.org/wiki/Name_mangling). Using the c++filt command you can recover the C++ name, which in your case is Foam::incompressible::RASModel::dictionaryConstructorTablePtr_. This means that decomposePar does not know about turbulence models (why should it?).

The best thing you can do is comment out the libs entry in the controlDict that you use to load your turbulence model, decompose, uncomment the entry, and run.

Bernhard

PS: of course the --remove-libs option of the PyFoam utilities does the commenting/uncommenting for you. But that is advertisement
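To make that concrete, a small sketch of the two steps (the library name is taken from the warning in post #1; the exact form of the entry in your controlDict may differ):

Demangling the symbol:

    c++filt _ZN4Foam14incompressible8RASModel30dictionaryConstructorTablePtr_E
    Foam::incompressible::RASModel::dictionaryConstructorTablePtr_

system/controlDict while running decomposePar (entry commented out):

    // libs ("libmyincompressibleRASModels.so");

system/controlDict for the actual solver run (entry active again):

    libs ("libmyincompressibleRASModels.so");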
__________________
Note: I don't use the "Friend" feature on this forum, out of principle. Ah, and by the way: I'm not on Facebook either. So don't be offended if I don't accept your invitation/friend request.
#3 |
Member
Markus Weinmann
Join Date: Mar 2009
Location: Stuttgart, Germany
Posts: 77
Rep Power: 17
Thanks Bernhard,
Commenting out the libs entry before decomposing the case eliminates the FOAM warning :-)

However, the main problem of divergence when running in parallel still persists. So far I couldn't find a way to stabilise the parallel runs enough to avoid divergence (I tried first order and reduced relaxation). Again, the same setup works without problems for serial runs (second order, default relaxation).

Do you have any ideas on this issue?

Regards,
Markus
#4 |
Assistant Moderator
Bernhard Gschaider
Join Date: Mar 2009
Posts: 4,225
Rep Power: 51
Hi!
Sorry, I thought the libs were the problem. No idea what the cause could be. Just one vague idea: if you are manipulating single cells of a field, a correctBoundaryConditions call might distribute those changes to the boundary patches on other processors.

Can you check whether the blow-up happens on the processor boundaries?

Bernhard
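A minimal sketch of that pattern, purely for illustration (nut_, someCondition and someValue are placeholders, not taken from Markus' model):

//---------------------------------------
// change individual cell values of the internal field
forAll(nut_, cellI)
{
    if (someCondition[cellI])        // hypothetical per-cell test
    {
        nut_[cellI] = someValue;     // hypothetical replacement value
    }
}

// push the updated internal values out to the boundary patches,
// including the processor patches of a decomposed case
nut_.correctBoundaryConditions();
//---------------------------------------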
__________________
Note: I don't use the "Friend" feature on this forum, out of principle. Ah, and by the way: I'm not on Facebook either. So don't be offended if I don't accept your invitation/friend request.
#5 |
Senior Member
Wolfgang Heydlauff
Join Date: Mar 2009
Location: Germany
Posts: 136
Rep Power: 21
In the global controlDict, change "floatTransfer" to 0 (before: 1); see the other thread on this.
Also, try running the case in serial for some time steps and use the result to start the parallel run.
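For reference, that switch sits in the OptimisationSwitches section of the global controlDict (typically $WM_PROJECT_DIR/etc/controlDict); only the relevant entry is shown here:

OptimisationSwitches
{
    // 1 = exchange data between processors as single-precision floats,
    // 0 = exchange full double precision (more robust for parallel runs)
    floatTransfer   0;
}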
#6 |
Member
Markus Weinmann
Join Date: Mar 2009
Location: Stuttgart, Germany
Posts: 77
Rep Power: 17
Hi
From the beginning I have been running the parallel jobs with "floatTransfer" set to 0. Also, restarting from the converged result of a single-processor simulation does not work. I have a strong feeling that my turbulence model does not like running in parallel; I just need to figure out why.

Thanks for now.
Markus
#7 |
Senior Member
Eugene de Villiers
Join Date: Mar 2009
Posts: 725
Rep Power: 21
If you are running anything that has been customised, switch it off and/or use an existing component.

If not, try looking at the partially diverged flow field to see where the problem originates: run the case in serial, decompose, run in parallel while dumping every time step (or at least often enough to see the onset of the instability), then reconstruct and look at the results.
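A rough outline of that workflow with the standard utilities (the processor count is only an example; the write interval is set in system/controlDict):

    # run in serial up to a point shortly before the trouble starts, then:
    decomposePar                          # split the case for parallel running
    mpirun -np 4 simpleFoam -parallel     # run in parallel, writing frequently
    reconstructPar                        # merge the processor results
    # inspect the reconstructed fields, e.g. in paraFoam, to locate the instability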
#8 |
Member
Markus Weinmann
Join Date: Mar 2009
Location: Stuttgart, Germany
Posts: 77
Rep Power: 17
I have now confirmed that my turbulence model does not run properly in parallel.
In order to determine the solution of a nonlinear equation, I need to loop over all elements of a volScalarField (see the code below). I am sure that this looping causes my trouble in parallel. Unfortunately, I don't understand why it fails in parallel but works in serial. As far as I know, looping over the elements of a volScalarField does not include the boundary patches. Do I have to take extra care of the boundary/processor patches, or am I missing something more fundamental? Any ideas are appreciated.

Regards
Markus

//---------------------------------------
volScalarField P1 =
    (A3_*A3_/27.0 + (A1_*A4_/6.0 - 2.0/9.0*A2_*A2_)*I2S - 2.0/3.0*I2O)*A3_;

volScalarField P2 =
    P1*P1
  - pow((A3_*A3_/9.0 + (A1_*A4_/3.0 + 2.0/9.0*A2_*A2_)*I2S + 2.0/3.0*I2O), 3.0);

volScalarField Nsol = P2/P2;

forAll(Nsol, i)
{
    if (P2[i] >= 0.0)
    {
        Nsol[i] = ...
    }
    else
    {
        Nsol[i] = ...
    }
}

// compute nut_ using Nsol
...
//---------------------------------------
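One thing that might be worth checking, along the lines of Bernhard's remark in post #4 (this is only a sketch; computeNsol() is a hypothetical stand-in for the elided branch expressions above, and whether this cures the divergence is not established here): the forAll loop above only touches the internal field, so the boundary values of Nsol, including those on processor patches, keep the values from the P2/P2 initialisation. The same branch logic could be applied to the patch faces as well:

//---------------------------------------
forAll(Nsol.boundaryField(), patchI)
{
    fvPatchScalarField& Nsolp = Nsol.boundaryField()[patchI];
    const fvPatchScalarField& P2p = P2.boundaryField()[patchI];

    forAll(Nsolp, faceI)
    {
        // computeNsol(): hypothetical helper holding the same
        // expressions as the two branches of the cell loop
        Nsolp[faceI] = computeNsol(P2p[faceI]);
    }
}
//---------------------------------------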
#9 |
Member
Markus Weinmann
Join Date: Mar 2009
Location: Stuttgart, Germany
Posts: 77
Rep Power: 17
On a different thread I found that divergence may be caused by a linear solver bottoming out.
Does anyone know what happens in such a case? I am asking because the residuals of the omega equation look a bit suspicious: when things start to go wrong, the residual for omega drops to 1e-26, whereas all the other residuals are roughly 20 orders of magnitude higher. Does such a behaviour make sense?

Markus
#10 |
New Member
Jens Wunderlich-Pfeiffer
Join Date: Mar 2009
Location: Berlin
Posts: 12
Rep Power: 17
Hi everybody!
I have problems with parallel runs, too (OF 1.4.1-dev, simpleFoam). My case consists of about one million cells (all hexahedra) but has a rather complicated geometry. After decomposing it into two pieces there are at least 200 processor faces (with metis). Does anybody know whether it is normal that the decomposed case runs more slowly than the serial one in such a situation?

The other question: I would like to decompose the case by hand, with the "manual" method, in order to minimise the number of processor faces, but I think it's not so simple. Can anybody help?

Jens

PS: I had divergence problems, too, but I solved them by running the case in serial for a few time steps and then restarting in parallel.
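For reference, the manual method reads the decomposition from a file that assigns a processor label to every cell; a sketch of the system/decomposeParDict entries (the data file name is only an example, and the file itself still has to be generated, e.g. by a small utility or script):

    numberOfSubdomains  2;

    method              manual;

    manualCoeffs
    {
        // list with one processor label per cell, in cell order
        dataFile        "cellDecomposition";
    }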