CFD Online Discussion Forums


cosimobianchini January 8, 2007 13:24

Hi all,
I'm trying to run in parallel, on n (say 2) processors, a case with two meshes (say region1 and region2).
In the solver I read the two meshes like this:

fvMesh mesh1
(
    IOobject("region1", runTime.timeName(), runTime, IOobject::MUST_READ)
);

fvMesh mesh2
(
    IOobject("region2", runTime.timeName(), runTime, IOobject::MUST_READ)
);

I'm able to successfully run this case only if I decompose both meshes (parts A & B) across the two processors:
processor0 has region1A & region2A
processor1 has region1B & region2B

What I'm looking for is how to run in parallel a case with two meshes decomposing the meshes on different processors:
processor0 has region1
processor1 has region2

This would be very helpful for heavier runs with a higher number of processors:
processor0 has region1A
processor1 has region1B
processorn has region2

The problem seems to be just an initial check that reports a fatal error if it doesn't find region2 on one of the processors.
This is the output if I type: mpirun -np 2 conjugatesolver . <case> -parallel

[1] --> FOAM Warning :
[1] From function objectRegistry::checkIn(regIOobject&)
[1] in file db/objectRegistry/objectRegistry.C at line 51
[1] Registering object 'region2' with objectRegistry 'time'
[1] This is only appropriate for registering the regions of a multi-region computation
or in other special circumstances.
Otherwise please register this object with it's region (mesh)
[1] --> FOAM FATAL ERROR : Cannot find file "points" in directory "constant/region2/polyMesh"
[1] From function Time::findInstance(const word& dir, const word& name)
[1] in file db/Time/findInstance.C at line 133.
FOAM parallel run exiting

Is there a way to disable this check?
Thank you very much in advance for any help.

gschaider January 9, 2007 11:21

Well, cosimo, if I understand your intention correctly, your problem is even more complicated: the conjugateSolver works like this (if I remember correctly):

1. calculate eq1 in region1
2. write boundary values from region1 to region2
3. calculate eq2 in region2
4. write boundary values from region2 to region1
5. start again

In your case that would mean that the work is distributed like this
Step 1: processors 0 to n-1 work. n is idle
Step 2: communication
Step 3: processor n works. All others are idle
Step 4: communication

Even if Step 3 takes very little time, Amdahl's law is bound to strike here without mercy (in other words, the parallelization would not be very efficient).

The alternative would be to either
- fork your solver into two threads: one for region1 and one for region2. These calculate independently and communicate their results at the end/start of each timestep. The question is: has anybody ever tried to write a multithreaded OF solver? And even then your problem with the mesh would persist.
- write two solvers (one for each region) that communicate over some kind of OS mechanism (sockets, pipes). But this too would be ...uh... interesting.

Both approaches have in common that it will be difficult to distribute the workload evenly.

cosimobianchini January 11, 2007 07:33

Thanks a lot, Bernhard (I'm still indebted to you for the suggestions on the postprocessing of this same case, too).
You are right about the conjugateSolver, and I agree with you when you say "Amdahl's law is bound to strike".
In fact, from these first rough tests, solving the solid + passing boundary conditions (the sequential fraction F) takes 30% of single-processor time.
This means a maximum theoretical speedup of 3.3 (lim N->inf 1/(F+(1-F)/N)).
In more complex cases, solving the solid could be cheaper in terms of CPU time, but I believe passing boundary conditions will dramatically increase its relative weight, making such a parallelization even less effective.
That's why I won't pursue the two suggestions you gave me, but will instead try parallelizing the problem the other way around (with processor boundaries normal to the solid-fluid interface).
Thanks a lot again
