CFD on PC (cluster)
I plan to adopt the Beowulf configuration to build our PC cluster for CFD computations. For the interconnect, our budget limits us to 100 Mbps Fast Ethernet. I am thinking about using the Pentium II. Could somebody please help me find answers to the following questions:
a) How does the clock speed (i.e. 233, 266, 333, and 450 MHz) relate to the computational speed?
b) How does the cache size relate to the computational speed?
c) The price of the Celeron is tempting. Is it wise to use it?
I would of course welcome any suggestions on other issues that have to be considered. Thank you.
Re: CFD on PC (cluster)
Hi, 100 Mbit Fast Ethernet gives reasonable interconnect performance. However, overall application performance is VERY dependent upon how fine-grained your code is. We tested ANGUS, a turbulent DNS combustion code, and the speedup was fairly disappointing. This is because it does a lot of short-message passing at processor domain boundaries. It was originally written for T3D/E architectures, where message-passing performance is very good.
However, I have run my parallel discrete vortex method code on a PC cluster and seen superlinear speedup. The interesting thing was that there was little difference between Myrinet and Fast Ethernet, except that Myrinet is about 10 times more expensive! One reason for the speedup is that I wrote the code knowing it would be run on a cluster. Use a few long messages to amortize the message latency against the bandwidth, and you will get good performance.
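To see why a few long messages beat many short ones, here is a minimal back-of-the-envelope sketch. The latency and bandwidth numbers are illustrative assumptions (typical ballpark figures for Fast Ethernet over TCP), not measurements from the codes above:

```python
# Simple linear cost model for message passing: T = latency + size / bandwidth.
# The constants below are assumed, ballpark values, not measured data.

LATENCY = 100e-6        # assumed ~100 microseconds per message
BANDWIDTH = 100e6 / 8   # 100 Mbit/s Fast Ethernet, in bytes per second

def transfer_time(n_messages, bytes_per_message):
    """Total time to send n_messages of the given size, one after another."""
    return n_messages * (LATENCY + bytes_per_message / BANDWIDTH)

total_bytes = 1_000_000  # 1 MB of boundary data to exchange

# Many short messages: the per-message latency is paid 1000 times.
many_short = transfer_time(1000, total_bytes // 1000)

# One long message: latency is paid once, bandwidth dominates.
one_long = transfer_time(1, total_bytes)

print(f"1000 x 1 kB messages: {many_short * 1e3:.1f} ms")  # 180.0 ms
print(f"1 x 1 MB message:     {one_long * 1e3:.1f} ms")    # 80.1 ms
```

With these assumed numbers, sending the same megabyte as 1000 small messages takes more than twice as long as one large message, which is exactly the effect a fine-grained code like ANGUS suffers from at domain boundaries.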
Anyway, in response to your questions:
(a) Scaling performance according to clock speed is a fair indicator. However, the PII 350/400/450 MHz machines use a 100 MHz bus, which improves memory access times noticeably. Try to go for at least a PII 350 MHz, and certainly not a PII 333 MHz.
(b) That's a tricky one. Depends on how good your compiler is, and how long your 'vectors' are. Basically, the bigger the better.
(c) Only the new 128 KB cache versions. I would err on the side of caution and not overclock, as there have been reports of real OS crashes and floating-point errors. Notably, Linux is susceptible to crashing when Windows isn't, because Windows is slower anyway ;-). There has been a long thread on the Extreme Linux mailing list on this topic.
Perhaps the most important issue is the compiler. Consider this: our tests show that on the NAS Parallel Benchmarks, Class W (single processor), a PII 400 using the Portland Group FORTRAN compiler outperforms a 500 MHz Alpha 21164 using g77/egcs! You only need to buy one or two compiler licenses, but a whole cluster of Alphas is certainly more expensive than PIIs. Of course, that's why we're using the Digital Visual FORTRAN compiler under NT on our cluster of eight Alphas. DVF is a whole lot faster than EGCS, but running under NT opens up a whole new set of issues! That's what we're working on :)
Good luck, hope this is useful. Our website should be up in a few days and will have more info (follow the links on commodity supercomputing).
High Performance Computing Centre,
University of Southampton, England
Re: CFD on PC (cluster)
I can't advise you as a specialist, but I have worked with CFD packages before. My lab PC was a Pentium 200 MHz with 128 MB of RAM. You need at least 1.5 GB of hard disk to install a large simulation package and to save simulation results, grid meshes, and case files.
Generally, a P2-333 is very powerful. The best option is having L2 cache on board; usually you can get 512 KB. A Celeron chip usually has no L2 cache, which means slower speed. That is what I know; correct me if I am wrong. Usually a standalone PC works fine. If you want a server, make sure the server is very fast (a high-end server); otherwise, if a number of users share the same server, it will slow down all the simulations. Furthermore, some packages allow only one user at a time: if one user is running the package, the others will not be allowed to access it. This is the case if the package has a single-user license.
I hope this helps. Good luck.