|
June 2, 2000, 10:12 |
Why STARCD slower than Fluent and CFX
|
#1 |
Guest
Posts: n/a
|
STAR-CD is very slow when you have a tetra mesh. Why???
|
|
June 2, 2000, 11:04 |
Re: Why STARCD slower than Fluent and CFX
|
#2 |
Guest
Posts: n/a
|
(1). In doing what? (2). Are you getting no solution, a poor solution, a good solution, or a very accurate solution? (3). If you are getting identical solutions and wish to get them faster, then you can try other options. (4). You can have a very large, very fast code. You can also turn it into a very small, very slow code. (5). In my opinion, a general-purpose code (or group of codes) is by definition very large and very slow. It is possible to speed up the code, but the size of the code will also grow. In 3-D, you can only run a small problem with a very large code. (6). So, when someone says his code is faster, make sure that the size of the code is not a factor of two larger. Well, you can't have everything, can you?
|
|
June 2, 2000, 12:32 |
Re: Why STARCD slower than Fluent and CFX
|
#3 |
Guest
Posts: n/a
|
John makes a couple of good points (make sure you compare similar methods and solutions), but the comments on large vs small codes and the relationship of code size to speed are not very useful.
First, it isn't clear what is meant by "code size", but the most useful metric is run-time memory usage. The general rule is that memory usage can be traded off against compute time, though I have seen many exceptions to that rule (large programs that are nevertheless extremely slow). For the purposes of comparing CFD codes, if both codes fit into RAM on your target platform when running your target model, and all other factors are comparable, then size is not a consideration. |
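The memory-for-time trade-off mentioned above can be illustrated with a toy sketch (not CFD code): caching previously computed results costs extra RAM but removes repeated work.

```python
import functools

def fib_slow(n):
    """Tiny memory footprint, but exponential run time."""
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@functools.lru_cache(maxsize=None)
def fib_fast(n):
    """Spends memory on a cache of results; run time becomes linear."""
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

# Same answer either way; only the memory/time balance differs.
assert fib_slow(25) == fib_fast(25) == 75025
```

The same answer is produced either way; the cached version simply pays in memory what the uncached version pays in compute time.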
|
June 2, 2000, 13:38 |
Re: Why STARCD slower than Fluent and CFX
|
#4 |
Guest
Posts: n/a
|
(1). Thank you.
(2). The code size is the total memory required to solve a problem with a given mesh size.
(3). The instruction part of the code must reside in memory; execution is faster when the instructions are in RAM and slower when the hard disk is used.
(4). The instruction part is the compiled and linked program, so its execution speed depends on how the program is written.
(5). The data-storage part of the code is where variables, temporary parameters, constants, etc. are stored, in RAM or on the hard drive.
(6). So the code size for solving a given problem includes both the instruction part and the data-storage part.
(7). One can speed up execution by storing not only the essential variables but also the derived variables, if the derived variables are used more than once. This eliminates the time spent repeatedly calculating the same quantity.
(8). The particular algorithm used also affects the complexity of the instruction part, as well as the number of essential variables required per cell or grid point. In general, implicit, coupled, and multi-level methods require much more essential data-storage space (RAM or hard drive).
(9). In general, it is a good idea to use an incompressible code for low-speed flow. Using a transient compressible code for low-speed (low Mach number) calculations can be a very slow business.
(10). The execution speed of a code also depends on how the code is used. Writing to the screen often, or saving results to the hard disk often, slows down execution, because I/O to the screen and hard disk is very slow relative to RAM access. Storing a complete set of results to disk can take a long time, and if several ASCII-formatted files must be saved, it takes even longer during program execution. So intermediate I/O also has a great impact on execution speed. Remember that for a 3-D problem, the result file can easily be several hundred megabytes.
(11). One final comment: the time per iteration is not universal, because some methods take much longer per iteration but require far fewer iterations to converge.
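Point (7) above can be sketched in a few lines of toy code (all names here are illustrative, not taken from any real CFD code): a derived quantity needed more than once per iteration is computed once and stored, trading storage for speed.

```python
# Toy sketch: the "derived variable" rho*u (a mass flux) is used in
# several places, so we can either recompute it at every use or store
# it once per iteration. Both give the same answer.

cells = [{"u": 1.0 + 0.1 * i, "rho": 1.2} for i in range(5)]

def kinetic_term_naive(cells):
    # rho*u recomputed inline at every use
    return sum((c["rho"] * c["u"]) * c["u"] for c in cells)

def kinetic_term_cached(cells):
    # derived variable computed once and stored, then reused
    for c in cells:
        c["mass_flux"] = c["rho"] * c["u"]
    return sum(c["mass_flux"] * c["u"] for c in cells)

assert abs(kinetic_term_naive(cells) - kinetic_term_cached(cells)) < 1e-12
```

In a real solver the stored derived variable would be reused across many terms and many cells, which is exactly where the memory cost buys back compute time.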
|
|
June 4, 2000, 21:56 |
Re: Why STARCD slower than Fluent and CFX
|
#5 |
Guest
Posts: n/a
|
I believe the question is "well posed". Hsu is comparing apples with apples: both STAR-CD and FLUENT (and probably CFX, although I do not know much about it) are full featured finite volume based CFD codes, built roughly on similar principles. Hsu is not comparing STAR-CD with some handcrafted special purpose code. So the issues of small/large or general/special purpose codes do not apply.
I believe the answer lies in the various design choices each code makes to extract the best performance. STAR-CD may have been optimised to work efficiently with mostly structured meshes, with an isolated (or limited) number of tetrahedra. So it would outperform other codes, which may have been designed with a general mesh in mind from the beginning, on those types of meshes, and underperform where the tets predominate. This is not an unreasonable or uncommon situation. If you will forgive a frivolous analogy, you cannot expect a Ferrari to beat a Jeep on a rocky road: it was designed for a different application. To push the analogy a little further (look out for the analogy police!), a bicycle will beat a Jeep on a gridlocked city street. This is similar to a small special-purpose code being faster on a specific problem than a large general-purpose code. So what!? So long as you have a wide open street (read: plenty of memory), a general-purpose code is preferable. |
|
June 5, 2000, 00:19 |
Re: Why STARCD slower than Fluent and CFX
|
#6 |
Guest
Posts: n/a
|
Are you talking about the general SIMPLE algorithm? Does "slow" mean too many iterations for convergence, or too much CPU time per iteration? I used to use STAR-CD a lot and have only learned a little about FLUENT, which makes me wonder why one could be very different from its sibling. I used to think this type of solution algorithm is implicit within each iteration, so the element count should not affect the number of iterations needed for convergence; it is the flow detail that drives convergence. Usually, a smaller element dimension takes more iterations to converge because more detail is present in the flow field. I did some full-tetra runs with element sizes from several mm to 1 m; of course I lost most of the detail of the flow, which was not of concern. Around 1.5 million elements would take nearly 200 iterations to reach default convergence for steady incompressible isothermal flow in STAR-CD.
If you are talking about more CPU time per iteration, it is possible that STAR-CD is still using an older method to solve the discretized linear systems for the momentum and pressure equations, or that faster algorithms have been developed in the other commercial codes. |
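The per-iteration-cost vs iteration-count distinction can be illustrated with two classical linear solvers on the same small system. This is only a sketch of the general idea, not how STAR-CD or FLUENT actually solve their equations: Gauss-Seidel does roughly the same arithmetic per sweep as Jacobi, yet typically converges in about half the sweeps on a 1-D diffusion-type matrix.

```python
import numpy as np

def jacobi(A, b, tol=1e-8, max_it=10000):
    """Jacobi iteration; returns (solution, sweeps used)."""
    x = np.zeros_like(b)
    D = np.diag(A)                    # diagonal as a vector
    R = A - np.diagflat(D)            # off-diagonal remainder
    for k in range(1, max_it + 1):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x, np.inf) < tol:
            return x_new, k
        x = x_new
    return x, max_it

def gauss_seidel(A, b, tol=1e-8, max_it=10000):
    """Gauss-Seidel iteration; uses updated values within a sweep."""
    n = len(b)
    x = np.zeros_like(b)
    for k in range(1, max_it + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, k
    return x, max_it

# 1-D Poisson-like tridiagonal matrix, similar in spirit to a
# discretized diffusion operator (illustrative test system only)
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

xj, kj = jacobi(A, b)
xg, kg = gauss_seidel(A, b)
assert kg < kj  # fewer sweeps for Gauss-Seidel on this system
```

So a code reporting a longer time per iteration may still finish first; the honest comparison is total time to a converged solution, as the post says.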
|
June 5, 2000, 08:59 |
Re: Why STARCD slower than Fluent and CFX
|
#7 |
Guest
Posts: n/a
|
I work with STAR-CD ver. 3.100 and CFX 5.3. Both work with unstructured meshes. The solver from CFX is faster, as it is a fully coupled multigrid solver, whereas STAR's isn't (yet). This results in a speedup of a factor of 10 or more in our case. People from STAR are now working on a coupled multigrid solver, so that should help speed up STAR.
Regards, Bart Prast |
|