CFD Online Discussion Forums


chris August 14, 2000 01:43

Hi, a question about the "state of the art": what hardware do you use for CFD at the moment? chris

Steve Amphlett August 14, 2000 05:16

Re: cfd-hardware

I notice you asked "What hardware do you use", not "What is the best hardware to use", so my answer is just that, an answer rather than an opinion.

For 3D CFD (VECTIS), we use a variety of UNIX machines for pre-processing, solving and post-processing. At the present time, the quickest of our lot per CPU are our HP J-class machines (J5600 and J6000) and our Compaq DS20. For parallel jobs, we tend to use an 8 processor SGI box. We do have a beowulf system set up here that appears to be pretty quick too, but nobody trusts it enough yet to use it for real work.

For 1D CFD (WAVE), most of our work is now done on reasonably spec'd Pentium boxes running NT.

Hope this is a useful data point.

- Steve

Rich E August 14, 2000 10:16

Re: cfd-hardware

We have a Beowulf cluster (8 x 800 MHz) that is about to go 'live' on large unsteady CFD problems. I would be interested to know what is causing the uncertainty with your cluster. Have you had 'issues'?


Steve Amphlett August 14, 2000 11:01

Re: cfd-hardware

No, nothing technical. Just inertia really.

John C. Chien August 14, 2000 14:51

Re: cfd-hardware
(1). Currently, I am using a Sun Ultra 60 workstation. (2). Before that, two years ago, it was an HP C200 (?), something like that. (3). There are other computers available, but I tend to use these two types of workstations. No big problems with commercial CFD codes.

Mehdi BEN ELHADJ August 15, 2000 07:09

Re: cfd-hardware
A powerful one is a Silicon Graphics workstation, which I use now. Before that I used a Sun SPARCstation 5.

Jonas Larsson August 15, 2000 07:20

Re: cfd-hardware
We use HP J-class workstations for pre- and post-processing and Linux clusters (high-end PCs running RedHat) for large simulations. Nothing beats the price/performance of the Linux cluster. We used to use HP V-class parallel compute-servers, but they have now largely been replaced by Linux clusters.

Glenn Price August 15, 2000 13:32

Re: cfd-hardware
Jonas, do you ever run into memory limitations with the Linux cluster? I assume you're using 32-bit Intel or AMD processors, which can only address 2^32 bytes ~ 4 GB.

Any plans to switch to 64-bit, or is there a way around this that I don't know about?

The reason I ask is that we are looking at a Linux cluster, but I thought we'd wait until the Sledgehammer and Itanium are out.
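[Editor's note: the 4 GB figure above is just the size of a 32-bit address space; a quick arithmetic sanity check:]

```python
# A 32-bit pointer can distinguish 2**32 byte addresses.
addressable_bytes = 2**32
print(addressable_bytes)            # 4294967296
print(addressable_bytes / 1024**3)  # 4.0 (GiB)
```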

Jonas Larsson August 15, 2000 14:28

Re: cfd-hardware
True, Linux on x86 (Intel PIII etc.) can only address up to a maximum of 4 GB, and that is if you have done everything right; many Linux installations have 2 GB as the limit. This is one of the reasons why we don't do pre- and post-processing on Linux PCs.

When you run simulations, though, you always parallelize the case if it is big (i.e. demands a lot of memory). When you parallelize the case you most often split it up into smaller parts (domain decomposition), which easily fit into the memory of each Linux cluster CPU. In practice we have very seldom had any memory problems on our Linux clusters when running simulations. Parallel scaling vs. problem size often gives an optimum with something like 300 MB parts on each CPU.

The only problem we've had is with an in-house code which is parallelized in a stupid way: you have to have a "mother node" to read in the case, and this node has to be able to hold the entire case in memory. The simple solution is to place the "mother node" on an HP workstation, which can address more than 4 GB, and place all compute nodes on the Linux cluster.
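[Editor's note: a rough back-of-the-envelope sketch of the per-node memory argument above. The function name, cell count, and bytes-per-cell figure are illustrative assumptions, not from any particular solver; real codes also carry halo/ghost-cell overhead that this ignores.]

```python
def partition_memory_mb(total_cells, bytes_per_cell, n_nodes):
    """Rough per-node memory estimate for a domain-decomposed case.

    Assumes a near-even split of cells across nodes and ignores the
    ghost-cell overhead that grows with the surface area of each part.
    """
    cells_per_node = total_cells / n_nodes
    return cells_per_node * bytes_per_cell / 1024**2

# Hypothetical case: 6 million cells at ~1 kB of solver state per cell.
total_cells = 6_000_000
bytes_per_cell = 1024

# See how the per-node footprint shrinks as the cluster grows.
for n in (4, 8, 16, 32):
    mb = partition_memory_mb(total_cells, bytes_per_cell, n)
    print(f"{n:2d} nodes -> {mb:7.1f} MB per node")
```

With these made-up numbers, a 16-way split lands near the ~300 MB per-CPU sweet spot Jonas mentions, comfortably inside even a 2 GB-limited Linux box.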

We are about to benchmark an Itanium box, and if it works out well we hope that Itanium machines will be an alternative to the HP J-class machines for pre- and post-processing.

Mike Clapp August 15, 2000 16:03

Re: cfd-hardware
For running CFDesign I use a dual-processor Intel Pentium 600 MHz with 512 MB of RAM. As I can turn around almost all the examples on our website overnight, I have not seen the need to invest in anything more expensive. I think the majority of our customers are now doing production models on PCs of this type of specification (although most have only one processor).



Glenn Price August 17, 2000 11:39

Re: cfd-hardware
Thanks for the info, Jonas.

clifford bradford August 18, 2000 15:37

Re: cfd-hardware
As CFDers we should have very little inertia ;-). Be not afraid: Beowulf is good for you, like broccoli.

Jim Forsythe August 28, 2000 23:40

Re: cfd-hardware
We couldn't be happier with our Linux cluster (USAF Academy). We have built it up to 64 processors and use the Government code, Cobalt (unstructured). It churns out full aircraft solutions (1.5-6 million cells) in a day to a few days. The cluster came from Paralogic. Don't let inertia hold you back. We have a 10:1 price/performance advantage over SGIs.

