
Double the # of cells->double the computing time?


#1: JB, March 26, 2002, 10:12

Hi all, I have a simple question.

I am wondering if CFD methods are as simple as "TIME = constant x CELLS". I mean, if the size of the problem is doubled (twice the number of cells), is the computation time also doubled, for both steady and unsteady simulations? Triple the cells, triple the computing time?

Also, with parallel computers, will two CPUs give me half the computing time? Will 100 or 10000 CPUs cut it to 1/100 or 1/10000?

Thank you!

JB

#2: Pete, March 26, 2002, 10:16

I think the trend is generally nonlinear: if you double the size of the grid, the CPU time more than doubles. This may vary from solver to solver.

Your second statement is generally true, provided the parallel implementation was done well. You should then see a linear speedup: 10 processors will converge the solution in 1/10 of the time.
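
That said, perfect linearity assumes a negligible serial fraction; Amdahl's law gives the cap. A minimal sketch (the 5% serial fraction below is an assumed, illustrative value, not a measurement):

[code]
# Amdahl's law: speedup on p processors when a fraction s of the work
# is inherently serial. Ideal speedup (s = 0) is simply p.
def amdahl_speedup(p, s):
    return 1.0 / (s + (1.0 - s) / p)

for p in (2, 10, 100, 10000):
    # s = 0.05 (5% serial work) is an assumed, illustrative value
    print(f"{p:6d} CPUs: ideal {p:7d}x, with 5% serial work "
          f"{amdahl_speedup(p, 0.05):7.1f}x")
[/code]

So 100 or 10000 CPUs only pay off if the serial fraction (and communication overhead) is tiny; with 5% serial work the speedup saturates near 20x no matter how many CPUs you add.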

#3: Alton Reich, March 26, 2002, 10:36

JB,

In my experience, the computation time is fairly close to linear in the number of cells. The code I used on a CRAY in a previous life was almost perfectly linear. On the CRAY (since you were sharing it with 100 of your closest friends), it was beneficial to sneak runs into the "small job" queue so that you didn't have to wait behind many people in the normal queue. The "small job" queue limited the CPU time a job could use, so we would figure out how many iterations we could fit in based on the calculation time per node per iteration. Our run-time estimates were almost perfect.

Of course, I'm sure that some codes scale better than others. Also, if you run out of physical memory and have to use virtual memory, the computation time will go WAY up once you hit that threshold.

Parallel speed-up is a little trickier because more variables are involved. The code we develop here is fairly efficient (I think the speed-up is a factor of 7.8 on 8 machines), but that efficiency is the product of two things: good programming for parallel computation, and good hardware. We test using a dedicated fast network switch for the machines doing the parallel computation. If you use the regular office network, shared by e-mail, web surfing, etc., you may not get the same kind of performance.

Alton
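
P.S. The arithmetic of that queue estimate is trivial; here is a minimal sketch (all numbers are made up for illustration):

[code]
# Back-of-envelope run-time budgeting for a CPU-limited queue.
# All inputs below are illustrative assumptions.
time_per_cell_per_iter = 2.0e-5   # seconds, measured from a short run
n_cells = 500_000
queue_cpu_limit = 3600.0          # seconds of CPU time allowed

time_per_iter = time_per_cell_per_iter * n_cells   # 10 s/iteration here
max_iters = int(queue_cpu_limit / time_per_iter)
print(f"about {max_iters} iterations fit in the queue limit")
[/code]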

#4: sylvain, March 26, 2002, 10:48

For a 2D FEM code, 10 years ago, I observed something between n*log(n) and n^2 for the time versus number-of-cells law.

As for the parallel speed-up, the Cobalt site used to provide a speed-up curve, but it seems to be gone now. If I remember well, the speed-up was quasi-linear up to 12 CPUs, then it decreased drastically.

Sylvain
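
P.S. If anyone wants to check their own code's exponent, a minimal sketch: fit time = C * N^a by regression on log-log axes. The timings below are synthetic, generated from an assumed a = 1.3 purely to demonstrate the fit; substitute measured (cells, seconds) pairs.

[code]
import numpy as np

# Estimate the exponent a in time ~ C * N**a by linear regression on
# log-log axes. Replace the synthetic timings with measured ones.
cells = np.array([1e4, 2e4, 4e4, 8e4, 1.6e5])
times = 3e-4 * cells**1.3          # synthetic "measurements", a = 1.3

slope, _ = np.polyfit(np.log(cells), np.log(times), 1)
print(f"fitted exponent: {slope:.2f}")   # ~1.30 by construction
[/code]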

#5: FRED_PSU, March 26, 2002, 13:37

I think the answer depends on the application. Even with all other parameters kept constant, the CPU time will increase more than linearly with the number of cells. If you double the number of cells, there will be more small cells, which usually slows convergence towards a given solution. Also, for time-accurate runs, the time step in explicit schemes is limited by the smallest cell, so if you have twice as many cells and the smallest cell is half the size of the previous smallest cell, your CPU time to reach the same result is multiplied by 4, not 2!
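
That factor of 4 follows directly from the CFL condition: twice the cells doubles the work per step, and halving the smallest cell halves the stable time step, so twice as many steps are needed. A minimal sketch of the arithmetic (all values are illustrative):

[code]
# Explicit-scheme cost under the CFL condition: dt_max ~ dx_min / u,
# so halving the smallest cell halves the allowable time step.
def explicit_cost(n_cells, dx_min, t_final, u=1.0, cfl=1.0):
    dt = cfl * dx_min / u        # stable time step
    n_steps = t_final / dt
    return n_cells * n_steps     # total work ~ cells * steps

base = explicit_cost(n_cells=1e5, dx_min=1e-3, t_final=1.0)
fine = explicit_cost(n_cells=2e5, dx_min=5e-4, t_final=1.0)
print(f"cost ratio: {fine / base:.1f}x")   # 2 (cells) * 2 (steps) = 4x
[/code]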

#6: Charles Crosby, March 27, 2002, 01:41

In my experience, CPU time per iteration tends to be directly proportional to the number of cells, unless you're using a trivially small number of cells (in which case the CPU's memory cache helps speed things up) or a very large number of cells (when you start disk swapping, which is desperately slow). However, you may or may not need more iterations to reach a converged solution; mostly I think you need more, as others have also indicated. As far as parallel speed-up is concerned, it is strongly dependent on the actual parallel implementation.
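
A rough way to anticipate the swapping threshold, assuming a per-cell memory figure for your solver (both numbers below are assumptions; the real figure depends on the solver, models, and precision):

[code]
# Rough check for the disk-swapping threshold: once the working set
# exceeds physical RAM, time per iteration jumps dramatically.
bytes_per_cell = 1_000         # assumed: stored fields + matrix rows
physical_ram = 1 * 1024**3     # 1 GiB, illustrative

max_cells_in_core = physical_ram // bytes_per_cell
print(f"grids beyond ~{max_cells_in_core:,} cells will likely swap")
[/code]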

#7: sylvain, March 27, 2002, 03:36

I don't think it can be linear, since the core of the code consists of solving an algebraic system whose matrix, of size N^2 (N = number of cells), is almost empty (sparse).

From Numerical Recipes, it costs on the order of N^3 operations to solve a full matrix by LU decomposition or Gaussian elimination.

Since a 3D FVM matrix has more or less 26 non-zero elements per row or column, this power law could drop by one order; it would be surprising if it dropped by two.

Any comment ?

Sylvain
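
P.S. To put rough numbers on that, a minimal sketch comparing a dense direct solve with one sweep of an iterative method over a matrix with a 27-point stencil (26 neighbours plus the diagonal per row); the counts are order-of-magnitude only:

[code]
# Order-of-magnitude operation counts: dense Gaussian elimination
# versus one sparse matrix-vector sweep for a 3D FV matrix with
# ~27 non-zeros per row (26 neighbours plus the diagonal).
N = 1_000_000                  # cells
dense_lu_ops = N**3            # full-matrix LU / Gaussian elimination
sparse_sweep_ops = 27 * N      # one sweep over the non-zeros

print(f"dense LU:  {dense_lu_ops:.1e} ops")
print(f"one sweep: {sparse_sweep_ops:.1e} ops")
print(f"sweeps affordable before matching LU: "
      f"{dense_lu_ops / sparse_sweep_ops:.1e}")
[/code]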

#8: JB, March 27, 2002, 10:42

Thanks for your opinions. So the answer depends: sometimes linear, sometimes not (in practice). As sylvain pointed out, if we need to solve a linear system, we cannot get O(N) easily (multigrid is O(N)?). Other algorithms, such as search or sorting algorithms (if used), may also matter for achieving O(N) computing time. So if a code is indeed linear, it must be a very good code. I had imagined that as the problem gets larger, even a slight deviation from the linear trend could make a tremendous difference in computing time, and problem sizes are getting quite large these days.
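
To illustrate that last point with assumed but representative growth laws, each normalised to a 10^4-cell grid:

[code]
import math

# How a slight super-linear trend compounds as grids grow: relative
# cost of N, N*log(N) and N**1.5, each normalised to 1 at N = 1e4.
N0 = 1e4
for N in (1e4, 1e5, 1e6, 1e7):
    linear = N / N0
    nlogn = (N * math.log(N)) / (N0 * math.log(N0))
    n15 = (N / N0) ** 1.5
    print(f"N={N:8.0e}: linear {linear:7.0f}x, "
          f"NlogN {nlogn:7.0f}x, N^1.5 {n15:7.0f}x")
[/code]

At 10^7 cells, the N^1.5 law costs about 30 times more than the linear one.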

#9: Charles Crosby, March 28, 2002, 03:40

Well, there is not much point in speculating about this too much, as it is reasonably easy to measure. Take a look at http://home.mweb.co.za/cc/ccrosby/cf...enchmarks.html . These results (for what they are worth) indicate a near-linear relationship once you use realistically sized grids. My take is that CFD linear-equation solvers tend to be hybrids of iterative and direct methods (like sweeping with a TDMA-type algorithm), but most of the time is probably spent on the iterative side, hence the near-linear time per cell. However, I suspect that bigger grids need more iterations to converge. The picture is far from simple, though, since you need those iterations in any event to handle the non-linear nature of the problem. And because convergence is asymptotic, it gets quite difficult to define a clear-cut "point of convergence" when benchmarking grids of varying size.
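
One conventional way (an assumption here, not necessarily what those benchmarks used) to make the "point of convergence" comparable across grids is a fixed relative residual drop; a minimal sketch:

[code]
import numpy as np

# A conventional convergence criterion: stop when the residual norm has
# dropped by a fixed factor from its initial value. The drop factor
# (1e-4 here) is an assumed figure; hold it fixed across grid sizes so
# the timings stay comparable.
def is_converged(residual, initial_residual, drop=1e-4):
    return np.linalg.norm(residual) <= drop * np.linalg.norm(initial_residual)

# example: a residual reduced by five orders of magnitude counts as converged
print(is_converged(np.full(3, 1e-5), np.ones(3)))   # True
[/code]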

#10: sylvain, March 28, 2002, 03:51

Nice job. I agree with your comments for the biggest grids.


#11: Neale, March 29, 2002, 02:25

Most traditional iterative solvers are order N log(N) or maybe even order N^2. Coupled algebraic multigrid is order N, i.e., it scales linearly. Unfortunately, you also have to assemble the matrix coefficients to give to the iterative solver, and that step may not scale linearly, although, as you point out, with decent data-structure design it can. With the exception of CFX-TASCflow and CFX-5, most commercial CFD codes use the older order N^2 or N log(N) algorithms.

Neale.
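
P.S. A sketch of why multigrid can be order N: in 3D, each coarser level holds roughly 1/8 of the cells, so the work per cycle is a geometric series bounded by 8/7 of the fine-grid work (the cell and level counts below are illustrative):

[code]
# Multigrid work per V-cycle as a geometric series: in 3D, each
# coarsening step cuts the cell count by ~8, so total work stays O(N).
N = 1_000_000
levels = 7
work = sum(N / 8**k for k in range(levels))
print(f"work per cycle: {work:,.0f} cell-visits "
      f"(bound: 8/7 * N = {8 * N / 7:,.0f})")
[/code]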


#12: Clif Upton, April 1, 2002, 11:44

I am not clear on what solving a linear system of equations in O(N) (or whatever) has to do with solving a non-linear system of coupled equations, which is what a NS solver is. Of course, solving the linear subsystems faster during the non-linear iterative process helps reduce the overall time, but it has nothing to do with converging the solution faster, i.e., needing fewer global iterations.

I must be missing something...

#13: Neale, April 2, 2002, 20:43

In their discretised form (on a discrete mesh), the Navier-Stokes equations become a linear system of equations, which you throw at a linear solver. Every implicit finite volume scheme in the world does this.

Fewer global iterations are achieved by proper coupling, i.e., solving for u, v, w, p at the same time, in the same linear set of equations. This requires careful linearisation of the Navier-Stokes equations. A better linear solver can reduce the number of global iterations as well: simply turn off multigrid and use only a Jacobi or ILU smoother, and it will take longer (i.e., more global iterations).

Neale.


#14: Clif Upton, April 5, 2002, 18:13

Neale - NS is a non-linear, coupled set of equations. You are correct that during the iterative solution process, at each iteration, one solves a linear(ised) and often uncoupled set of equations. But coupling/uncoupling has very little to do with linearity/linearisation. I agree that maintaining a coupling similar to the original equations is good and may lead to fewer global iterations. The second factor there is the non-linearity.

So how a better linear solver would lead to fewer global iterations is not clear to me. It may save time within those linear iterations, but it does not reduce the number of global iterations.
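
For what it's worth, the structure we are debating looks schematically like the sketch below: an outer (global) loop re-linearises the non-linear problem, and an inner linear solver handles each linearised system. The toy problem A(u) u = b is purely illustrative and merely stands in for the discretised equations:

[code]
import numpy as np

# Schematic outer/inner structure of an implicit solver: outer (global)
# Picard iterations re-linearise the non-linear problem; an inner linear
# solve handles each linearised system. Toy problem, illustrative only.
def A_of(u):
    # coefficient matrix depending on the solution: the "non-linearity"
    n = len(u)
    return np.diag(2.0 + u**2) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)

b = np.ones(5)
u = np.zeros(5)
for outer in range(50):                  # global (Picard) iterations
    u_new = np.linalg.solve(A_of(u), b)  # inner linear solve
    if np.linalg.norm(u_new - u) < 1e-10:
        break
    u = u_new
print(f"converged in {outer} global iterations")
[/code]

In this toy, a faster inner solver changes the wall time per global iteration but not the iteration count; in a real segregated or coupled CFD solver the interaction can be subtler, which seems to be the crux of our disagreement.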
