
basic rules for partition domain for parallel run


September 14, 2006, 17:06   #1
phsieh2005
Hi,

What will be the general rule of thumb for partitioning the computational domain for a parallel run, specifically for flows with free surfaces (VOF method)? Thanks!

phsieh2005

September 14, 2006, 21:14   #2
Renato.
Well... I would say that the general rule of thumb for unstructured grids is to use graph partitioning software like Metis ( http://glaros.dtc.umn.edu/gkhome/views/metis ) or others such as Jostle, Scotch, Chaco, Party, etc., but in my opinion Metis is the easiest to learn and use.

For Cartesian grids you don't need graph partitioning software, since you know exactly how the grid is formed; you can simply define point and cell ranges according to the desired number of processors.
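To make the Cartesian case concrete, here is a minimal sketch (my own illustration, not taken from any particular code) that splits the x index range of an nx-by-ny-by-nz grid into one contiguous slab of cells per processor:

Code:
# Minimal sketch: slab-decompose a Cartesian grid (nx x ny x nz cells)
# along the x-axis into one contiguous index range per processor.
# All names here are illustrative, not from any specific CFD package.

def slab_ranges(nx, nprocs):
    """Return half-open (start, end) cell-index ranges along x, one per processor."""
    base, extra = divmod(nx, nprocs)
    ranges, start = [], 0
    for p in range(nprocs):
        count = base + (1 if p < extra else 0)   # spread the remainder evenly
        ranges.append((start, start + count))
        start += count
    return ranges

if __name__ == "__main__":
    nx, ny, nz, nprocs = 100, 40, 40, 4
    for p, (i0, i1) in enumerate(slab_ranges(nx, nprocs)):
        print(f"proc {p}: i in [{i0}, {i1}), {(i1 - i0) * ny * nz} cells")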

Regards

Renato.


September 15, 2006, 02:01   #3
Charles
Just a word of warning here: some VOF methods won't like it if a partition boundary is too close to the free surface. This happens more easily than you might think, because one would typically cluster the mesh quite close to the free surface. So, if gravity is in the z-direction, partitioning by principal axis is OK, provided you suppress any split perpendicular to the z-axis. The problem with Metis is that you may be unlucky and a split may coincide with the free surface. So I just tend to split perpendicular to the x-axis.
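As a rough illustration of that idea (a sketch only, with made-up names, not tied to any particular VOF code): if gravity acts along z, the cells can be binned by their x centroid so that every partition boundary is a plane normal to x and never runs parallel to a roughly horizontal free surface.

Code:
import numpy as np

# Sketch: assign unstructured cells to partitions by x-centroid only, so
# all partition boundaries are normal to x and cannot lie parallel to a
# (roughly horizontal) free surface. Illustrative only.

def partition_by_x(cell_centroids, nparts):
    """cell_centroids: (ncells, 3) array; returns a partition id per cell."""
    x = cell_centroids[:, 0]
    # interior cut positions chosen so each slab gets about the same cell count
    cuts = np.quantile(x, np.linspace(0.0, 1.0, nparts + 1)[1:-1])
    return np.searchsorted(cuts, x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centroids = rng.random((10000, 3))     # dummy cell centroids
    part = partition_by_x(centroids, 4)
    print(np.bincount(part))               # cells per partition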

September 15, 2006, 08:09   #4
rt
Adding to the previous comments:

How you parallelize a free surface code depends on the type of modeling:

1. In the first kind, the two-phase approach, the Navier-Stokes equations are solved over the entire domain with a variable-density formulation. Parallelization is then as simple as for other methods and only needs static load balancing.

2. In the second approach, the single-phase Navier-Stokes equations are solved only in the liquid; the gas is either treated as an ideal gas or not considered at all. In this case the spatial (liquid) domain changes continuously, so you need dynamic load balancing, which is more difficult and tricky if you want good performance.

3. If you use adaptive refinement with either approach, dynamic load balancing is essential.

For any of these, a good load balancing tool is the METIS library; it is freely available and easy to use (a simple check for when rebalancing is needed is sketched below).
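For illustration, a minimal sketch of such a load-balance check (the threshold and function names are my own assumptions, not METIS functionality):

Code:
# Sketch: decide when dynamic load balancing (repartitioning) is needed.
# Illustrative only; the tolerance is an assumption.

def imbalance(cells_per_proc):
    """Maximum load divided by average load (1.0 = perfectly balanced)."""
    avg = sum(cells_per_proc) / len(cells_per_proc)
    return max(cells_per_proc) / avg

def needs_rebalance(cells_per_proc, tol=1.10):
    """Repartition if the heaviest rank exceeds the average by more than 10%."""
    return imbalance(cells_per_proc) > tol

if __name__ == "__main__":
    # e.g. liquid cells per rank after the free surface has moved
    print(needs_rebalance([2500, 2600, 2450, 2550]))   # False
    print(needs_rebalance([4000, 1800, 2100, 2100]))   # True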


September 15, 2006, 08:25   #5
phsieh2005
Hi,

Thanks to all of you for the feedback. I will study METIS next.

I am using OpenFOAM, so it is an unstructured mesh and VOF is used to track the free surface (both liquid and air are solved with one set of N-S equations). No element adaptivity.

One thing I found is that, several times, the solution diverged during a parallel run, but a serial run was able to continue from where the parallel run had diverged. Also, the number of iterations per time step is higher in the parallel run than in serial. I guess that this is reasonable.

In most of my cases, the free surface moves everywhere inside the domain (like the dam break or tank sloshing cases).

phsieh2005

September 15, 2006, 08:48   #6
Renato.
Hi phsieh2005,

It seems there's something weird in your parallel algorithm. In fact, a parallel program must produce the same results as its serial version. It's true that small discrepancies in the number of iterations are admissible, but if you're facing convergence problems only when running in parallel, there is probably something wrong in your algorithm.

In fact, we could consider a serial program as a parallel program running on only one processor ;o)

Cheers

Renato.


September 15, 2006, 09:50   #7
rt
As you are dealing with free surface flow using the two-phase approach, your main convergence problem is related to the solution of the variable-coefficient Poisson equation, especially when the density ratio is large. Since you say the problem only appears in the parallel run, it may have several causes:

1. an error in the algorithm;
2. a problem in the communication;
3. a mistake in your implementation.

Note that a difference in convergence rate depends on the type of linear solver: if the domain decomposition is based on a Schur or Schwarz method, there will undoubtedly be a difference; but with a directly parallelized CG or AMG solver, the only difference in convergence rate should come from floating-point round-off, which must be small (you can test this by switching between single and double precision arithmetic; a small sketch of such a test is given at the end of this post).

I suggest you run the parallel code on a single processor, assigning more than one process to a single CPU. If the problem remains, it is not related to cause 2.

Also change the type of linear solver (CG, BiCGStab, AMG) and study the problem, and do this for various grid resolutions.

The best way is to post your problem on the OpenFOAM forum and ask its users for guidance.
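As an illustration of the precision test mentioned above (a self-contained sketch of mine, not OpenFOAM code): a plain conjugate-gradient solve of a 1-D variable-coefficient Poisson problem with a 1000:1 coefficient jump, run first in single and then in double precision, reporting iteration counts and final residuals.

Code:
import numpy as np

# Sketch (not OpenFOAM code): unpreconditioned CG on a 1-D variable-coefficient
# Poisson problem with a 1000:1 jump in the coefficient (large density ratio),
# run in float32 and float64 to see how round-off affects convergence.

def poisson_matrix(n, rho, dtype):
    beta = (1.0 / rho).astype(dtype)          # face coefficient 1/rho, n+1 faces
    A = np.zeros((n, n), dtype=dtype)
    for i in range(n):
        A[i, i] = beta[i] + beta[i + 1]
        if i > 0:
            A[i, i - 1] = -beta[i]
        if i < n - 1:
            A[i, i + 1] = -beta[i + 1]
    return A                                   # SPD, zero Dirichlet values at both ends

def cg(A, b, tol=1e-6, maxit=2000):
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    k = 0
    while k < maxit and np.sqrt(rs) > tol * bnorm:
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
        k += 1
    return k, float(np.sqrt(rs) / bnorm)

n = 100
rho = np.ones(n + 1)
rho[: n // 2] = 1000.0                         # "liquid" side: density ratio 1000
f = np.ones(n)

for dtype in (np.float32, np.float64):
    iters, res = cg(poisson_matrix(n, rho, dtype), f.astype(dtype))
    print(f"{np.dtype(dtype).name}: {iters} iterations, final rel. residual {res:.1e}")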

September 15, 2006, 11:33   #8
Mani
>In fact, we could consider a serial program as a parallel program running in only one processor ;o)

That's true only in special cases, but certainly not in general (not even with single phase flow). It should be obvious that the computation of flow over a single continuous domain is not equivalent to the computation on separated subdomains. In the former case, you can use a fully implicit approach that strongly connects all flow regions (at least with a structured grid), while the latter method introduces artificial numerical boundaries. Both methods will usually not yield the same convergence rate, because block-to-block information is usually not implemented on the lowest iteration loop (because the amount of communication would kill any benefit of parallelization!). This is less important on point-implicit methods on unstructured grids, but can still have a noticable effect. Furthermore, on any type of grid, multigrid is often applied only within each subdomain, and therefore becomes less effective than over the whole domain.

For these and multiple other reasons it's perfectly possible that you see a decline in convergence (and possibly even stability) as you move from single domain to domain decomposition. That doesn't prove that you code is not working.

September 15, 2006, 13:07   #9
Renato.
So... I should consider myself a lucky guy, since I don't see big differences (4 iterations at most) between the parallel and serial versions of my program (which is exactly the same code). In my case, I solve incompressible fluid flow and transport equations with an edge-based finite element method. Since I have both the flow and the transport solvers, I have already run free surface problems with an interface capturing approach (VOF or LS, depending on the marking function chosen).

> It should be obvious that the computation of flow over a single continuous domain is not equivalent to the computation on separated subdomains <

Why not?! Partitioned or not, the global domain will be the same. The programmer only needs to know where, when and what to synchronize in order to get equivalent results. I say that because I implemented a solver from scratch, testing and comparing each operation in each algorithm of my program. It's true that some algorithms and methods are better suited to message passing parallelism than others (some are practically infeasible), but in general the programmer should look for equivalent results from the serial and parallel versions of the program.

> because block-to-block information is usually not implemented <

I don't know what kind of method you based this statement on, but I would say that it's the most usual way to do distributed parallelism -- at least for those who are learning how an MPI program works. Of course the number of messages will be large, but it's the easiest way to implement (doing communication in the inner loop) and you still get speedups for a modest number of processors. In a more formal approach you replicate the elements lying on the communication interface (ghost elements) in order to avoid communication in hot loops -- in iterative solvers the matrix-vector product loop will probably be the most computationally expensive part of the program. These two approaches should give you equivalent results in your linear solver operations -- matrix-vector products, vector norms, etc. -- at least in my case, it always happens (thank God!)
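Just to illustrate the ghost/halo idea in generic terms, here is a minimal mpi4py sketch of my own (1-D stencil, made-up sizes; not my edge-based FEM code): each rank owns a contiguous block, exchanges one halo value per neighbour before the local product, and assembles global dot products with an allreduce.

Code:
from mpi4py import MPI
import numpy as np

# Sketch (illustrative only): distributed matrix-vector product y = A x for
# the 1-D stencil [-1, 2, -1], with one ghost value exchanged per neighbour.
# Run with e.g.: mpirun -np 4 python halo_matvec.py

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nlocal = 5                                   # cells owned by this rank
x = np.full(nlocal, float(rank + 1))         # dummy local vector

# exchange halo values with the left/right neighbours (domain ends use 0.0)
left_ghost, right_ghost = 0.0, 0.0
if rank > 0:
    left_ghost = comm.sendrecv(x[0], dest=rank - 1, source=rank - 1)
if rank < size - 1:
    right_ghost = comm.sendrecv(x[-1], dest=rank + 1, source=rank + 1)

# local part of y = A x, with ghost values standing in for neighbour cells
xg = np.concatenate(([left_ghost], x, [right_ghost]))
y = 2.0 * xg[1:-1] - xg[:-2] - xg[2:]

# one global reduction, e.g. the dot product needed inside CG/GMRES
dot = comm.allreduce(float(x @ y), op=MPI.SUM)
if rank == 0:
    print("global x.y =", dot)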

This subject doesn't have a closed answer, since there is no single general way to implement distributed parallelism. It will depend on several aspects: method, type of mesh, algorithms, solvers, etc. There will always be a different algorithm doing different things to reach the same result.

Regards

Renato.


September 15, 2006, 14:39   #10
Mani
Let's think of a 1-D example, for simplicity but without loss of generality. Say you have 10 grid nodes and you solve your differential equations with some finite difference method using an implicit approach (e.g. the Thomas algorithm in 1-D with a central scheme). Even with the direct method you'll need to perform iterations if your equations are nonlinear. You could either do all 10 points in one shot per iteration, or you could do nodes 1-5 and nodes 6-10 in parallel. Do you really think you'll get the same convergence? For the parallel computation you'll have to implement an extra step to connect the blocks. How do you do it? If you implement the communication within the Thomas algorithm, you may get very good convergence, but your parallel speed-up will be ridiculously poor! Instead, you would run the Thomas algorithm separately on both partitions and then communicate across the interface.
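For concreteness, here is a toy version of that 1-D experiment (my own sketch; the boundary treatment and tolerance are assumptions): the same 10-node system is solved once with the Thomas algorithm over the whole domain, and once as two 5-node blocks that only exchange interface values between outer iterations.

Code:
import numpy as np

# Toy sketch of the 1-D example: -u'' = 1 on 10 nodes with u = 0 beyond both
# ends, solved (a) in one shot with the Thomas algorithm and (b) as two
# 5-node blocks that only exchange interface values between outer iterations
# (a simple block-iterative coupling). Boundary treatment/tolerance assumed.

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def solve_block(nb, left_val, right_val):
    """Solve -u'' = 1 on nb nodes with given neighbour (Dirichlet) values."""
    a = np.full(nb, -1.0)
    b = np.full(nb, 2.0)
    c = np.full(nb, -1.0)
    d = np.ones(nb)
    d[0] += left_val                 # neighbour value enters the RHS
    d[-1] += right_val
    return thomas(a, b, c, d)

n = 10
u_direct = solve_block(n, 0.0, 0.0)  # (a) whole domain, one Thomas solve

u = np.zeros(n)                      # (b) two blocks, interface exchange only
for it in range(1, 200):
    u_old = u.copy()
    u[:5] = solve_block(5, 0.0, u[5])    # left block sees current u[5]
    u[5:] = solve_block(5, u[4], 0.0)    # right block sees updated u[4]
    if np.max(np.abs(u - u_old)) < 1e-10:
        break

print("outer iterations for the two-block solve:", it)
print("max difference vs. single-domain solve :", np.max(np.abs(u - u_direct)))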

That's what I mean by "communication is not in the smallest loop". Does your code communicate across blocks after every change in every cell in every equation? I don't think so. Thinking about the 1-D example may also help you see how multigrid can be implemented far more efficiently on the whole domain: low frequency errors are reduced more efficiently if you take all 10 points than if you use multigrid on the two separate subdomains. However, implementing multigrid across the interface would again kill parallel performance. Of course, if I do not make use of any such acceleration techniques (or if they work poorly to begin with), domain decomposition will not negatively affect performance. If I were to use point Gauss-Seidel in my 1-D example, the interface would not be a problem, but point Gauss-Seidel would not really be the optimal approach to begin with.

The most efficient methods in CFD take advantage of the (partially) elliptic nature of the equations, for example with implicit solvers that strongly couple the whole flow field, multigrid across the whole domain, implicit residual smoothing, and so on... Unless these methods are implemented poorly or not used at all, domain decomposition will most definitely have a noticeable effect.

September 15, 2006, 15:56   #11
Renato.
Sorry, I don't know the method you cited, but I can comment on your example:

> do you really think you'll get the same convergence? <

Not exactly the same convergence, because in floating point arithmetic even the order of the calculations has some influence on the final result, but the convergence would be almost the same -- assuming, of course, that your computation treats the interface data properly, as it should.

> for the parallel computation you'll have to implement an extra step to connect the blocks. how do you do it? <

Right! In this case it's accomplished by an MPI_ALLREDUCE for the interface node(s). In 1D with 2 partitions, that would be 1 node (one float or double per message per iteration); a minimal sketch follows below.
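For example, a tiny mpi4py illustration of that single-value assembly (my own sketch, not the code being discussed):

Code:
from mpi4py import MPI

# Sketch: both partitions hold a copy of the single interface node and
# accumulate only their local contribution (e.g. to a residual or a
# matrix-vector product); one allreduce per iteration assembles the sum.
# Run with: mpirun -np 2 python interface_allreduce.py

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local_contribution = 0.5 + rank          # dummy partial value from this side
interface_value = comm.allreduce(local_contribution, op=MPI.SUM)

print(f"rank {rank}: assembled interface value = {interface_value}")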

> but your parallel speed-up is ridiculously poor! <

What is ridiculously poor for you?! I have 2 methods implemented. The most naive method does a communication at each matrix-vector product within GMRES, which would kill the parallel performance, as you said. You can check the speedup yourself at http://www.nacad.ufrj.br/~rnelias/papers/CIL0423.pdf. Of course this method is not good for massively parallel computations. For such cases I have ghost elements implemented in order to avoid the interface communication in matrix-vector products, and the algorithm runs quite well (keeping the convergence, results and stability).

> the most efficient methods in CFD take advantage of the (partially) elliptic nature of the equations, for example with implicit solvers that strongly couple the whole flow field ... <

You destroyed the research plans of several guys out there ;o)

If it were so simple, we should stop our studies, go home and read a newspaper looking for a job.

I wouldn't like to narrow this discussion down to one method or another. What I'm trying to say is that discrete methods will generally give us a linear system, and this linear system can be solved with message passing parallelism through some kind of partitioning strategy. In **infinite arithmetic precision**, partitioned or not, the result MUST be exactly the same (again... assuming that the computations are done properly). Unfortunately, our computers only have finite precision, which SHOULD lead only to small differences in our results. Note: I'm not talking about performance, method or algorithm.

Regards

Renato.


September 16, 2006, 03:35   #12
rt
to: phsieh2005

As an alternative check, I propose that you run another problem with the parallel code, e.g. a simple Poisson equation, and compare it with the previous problem.

I am interested to know the reason for this problem; please post the final result in this forum (is the problem related to the OpenFOAM kernel?!).

thanks.

September 16, 2006, 14:45   #13
Mani
Never mind, you just don't get the point... As this wasn't the original question, just let it be... Just don't make assertive diagnoses for others based only on your own code. The fact that phsieh2005 sees different convergence behavior for a single domain versus domain decomposition does not prove, or even indicate, that there is something wrong with his method.

September 16, 2006, 15:44   #14
Renato.
Ok Mani... but one "basic rule" in message passing parallelism (as phsieh2005's subject suggests), for me, is that the parallel solver must at least be as stable as the serial version, giving equivalent results, numbers of iterations and convergence. Otherwise my parallel solver would be good for nothing... (even if it showed wonderful timing performance).

ps: At the very least, the software manual (OpenFoam in this case) should say something about this weird convergence behavior of its parallel solver.


September 16, 2006, 21:41   #15
Mani
It wouldn't be so "weird" if you had understood what I tried to explain. Maybe you'll actually find something about it in the manual; I haven't checked.

Just out of curiosity: what method are you using? Unstructured? Point Gauss-Seidel? Are you applying multigrid? If so, on the whole domain or just on the subdomains? If not, why not?

September 17, 2006, 02:40   #16
rt
Renato and Mani: I have been following these threads, and I think this discussion will not reach a conclusion.

You are both partially right, because you are talking about different parallel algorithms.

Renato means direct parallelism of an iterative solver (or an explicit method), e.g. CG. This is equivalent to a parallel implementation of the sparse matrix operations, so if we neglect the effect of floating point errors, the parallel and serial results are the same (including the number of iterations).

Mani means a parallel implementation based on mathematical domain decomposition, e.g. a Schwarz method: the problem is solved on each subdomain (with or without overlap) at the PDE level, and the subdomains are coupled through Neumann and/or Dirichlet boundary conditions, which leads to an iterative procedure that runs until the interface conditions converge. Note that with only one subdomain no such iteration is needed. In this setting convergence requires that certain criteria be satisfied.

As an excellent reference, read Y. Saad's book; these methods are described there in detail and any doubt is removed.

Interesting: although we expect Schwarz-type parallel methods to need more iterations and to lose performance, sometimes the reverse happens and they converge in a lower number of iterations, which suggests we could even use them in sequential calculations. As a good reference, see the presentation by D. B. Kothe, manager of the Telluride project at Los Alamos: "Computational Manufacturing: Toward Simulating Metal Casting and Welding Processes", pages 76-84 (if you don't find it, give me your contact and I will send it to you).


September 17, 2006, 08:33   #17
Renato.
You're right, rt. Our arguments have been based on completely different things and we'll never come to any conclusion. Anyway, there are methods and methods, and different ways to do the same thing.

Maybe a short answer to the original question, asked by phsieh2005, would be:

> What will be the general rule of thumb for partitioning the computational domain <

For unstructured grids, any graph partitioning software will solve the problem -- Metis, for example.

For the convergence problems: go through the OpenFoam user guide (or theory guide), try to understand what is implemented there, and, please, share your conclusions here.

[]'s

Renato.


September 17, 2006, 13:36   #18
Mani
Eureka! That's exactly my point: what I was saying all along is that you cannot present limited experience with just one method as "general" truth. I was trying to give counterexamples, somewhat unsuccessfully, but I think we finally agree on the statement above.

I have to disagree that the discussion is useless, though. It's a digression from the original question of this thread but still an interesting topic. I did take a look at your paper now, Renato, and it made some things a little clearer. The speed-up results look good, but you didn't go beyond a dozen CPUs on a relatively large problem. Even with the maximum number of CPUs you tried, each subdomain still has a pretty large number of elements. In other words, the time spent on serial processing of each subdomain is still far larger than the communication penalty. This is especially true for a compute-intensive algorithm like FEM using GMRES. Speed-up is primarily determined by the ratio of communication time to solution time. Speed-up can be excellent if the communication is very fast, but ironically you can also get excellent speed-up simply by spending a lot of time on the solution in each subdomain (a toy model of this trade-off is sketched below). More important is the question of how far you can push your domain decomposition before the speed-up drops. Can the algorithm handle subdomains of a few cells, a few dozen cells, or does it break down already with a few hundred cells per subdomain? Every algorithm has its limit, and as you take yours to the limit you may realize that you need to rethink your strategy. Then convergence may become a more important issue. All of what I said in previous posts gains much more significance as the subdomains become smaller. Maybe you just haven't gone there yet.
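To put rough numbers on that trade-off, here is a toy cost model (assumed latency and message counts, not measured data): the per-iteration parallel time is taken as compute_time/N plus a fixed communication term, so the predicted speed-up looks fine while each subdomain is still large and flattens once the communication term dominates.

Code:
# Toy cost model (assumptions, not measurements): time per iteration on N
# processors = compute_time / N + latency * messages_per_proc. Speed-up is
# excellent while subdomains are large, then saturates as communication
# starts to dominate.

def speedup(n_procs, compute_time=1.0, latency=2e-4, msgs_per_proc=6):
    t_parallel = compute_time / n_procs + latency * msgs_per_proc
    return compute_time / t_parallel

for n in (1, 2, 4, 8, 16, 64, 256, 1024):
    print(f"{n:5d} procs: predicted speed-up = {speedup(n):7.1f}")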

September 18, 2006, 03:39   #19
rt
Hi Mani. Note that I don't mean this discussion is not valuable, but I don't think it will lead to a conclusion.

In your previous post you describe something related to scalability (whether the algorithm keeps its performance as the number of processors increases, or, equivalently, as the number of nodes/elements/work units assigned to each processor decreases). This is very general and is a function of the algorithm, the network type, the behavior of the solution, and so on.

Generally performance decreases in this situation, but there is usually a mitigating factor: cache misses are reduced as data locality increases and the size of the local data decreases (some literature reports more than a 5x speedup due to reduced cache misses). This is also hardware dependent, and cache-aware algorithms are a general issue of their own.

Do you think a discussion of such general categories can lead to a conclusion?

Also, I am interested to know the details of your experience in this area.

September 18, 2006, 09:34   #20
Mani
Yes, that's right: you'll get the cache benefit once your subdomains are reduced just far enough to fit into cache, no question about that. That will produce a local spike in your speed-up vs. number of CPUs chart. However, depending on your cache size, you will still have some room for further reduction of the block size, and that's when the speed-up goes down. The complete speed-up vs. number of CPUs chart can actually look quite bumpy, with several ups and downs, depending on your algorithm and architecture.

No, I didn't expect any definite conclusion. Roads are for journeys, not for destinations.
