June 5, 2007, 14:19
Parallel Computing on Multi-Core Processors

#1
Guest
Posts: n/a
Hi Experts,

I see from some recent posts on here that multi-core processors are now being used for CFX calculations. Could someone explain how this works, in terms understandable by someone who knows a lot about CFD but not as much about modern computer architecture and parallel computing? Is this true parallel computing, as would be done on many processors? Are PVM and MPI both still usable?

If you have a cluster with 4 quad-core processors, do you really have 16 processors at work, or just 4 really quick ones? How does this work in terms of RAM allocation? One of the benefits of a 'standard' parallel job is that you can run bigger jobs because each processor has its own RAM; is this advantage lost when using a quad-core? How about memory limitations due to the bus?

Anyway, these are probably really basic questions, and I know that some of you guys on here have all the answers! Don't all answer at once...

June 5, 2007, 15:45
Re: Parallel Computing on Multi-Core Processors

#2
Guest
Posts: n/a
Running CFX on multi-core processors works exactly as if each core were a separate processor. PVM and MPI both still work, though there is a distinction between local parallel (all processes on one computer) and distributed parallel (processes spread across multiple computers). Each core still gets its own partition of the problem and its own share of memory, limited of course by how much RAM is installed in the machine. You keep the same benefit of increasing job size if you stock the multi-core machine with 2-4 GB of RAM per core. As for the actual benefits and limitations due to the memory bus, see the posts from Joe and Stu from the past few days. The selling points for multi-core processing are many, including huge space, power, and cooling savings in cluster architecture.
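To make the "each core looks like a separate processor" point concrete, here is a minimal MPI hello-world in C (plain MPI, nothing CFX-specific): every core runs its own process with its own rank and its own private memory, and the same executable works whether the ranks share one box or span a cluster.

Code:
/* Minimal MPI sketch -- plain MPI, not CFX internals. Each core runs
 * its own process with a private address space; the launcher decides
 * whether those processes share one machine or span several. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of ranks   */
    MPI_Get_processor_name(host, &len);    /* machine this rank is on */

    printf("Rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();
    return 0;
}

Launched with something like mpirun -np 16 across four quad-core boxes, you would see 16 ranks, four per hostname -- 16 "processors" as far as the solver is concerned.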
June 5, 2007, 16:55
Re: Parallel Computing on Multi-Core Processors

#3
Guest
Posts: n/a
Dual-core Intel (Conroe-architecture, desktop and server) and AMD (Opteron) CPUs scale virtually linearly in every combination: single socket (desktop), dual socket (Intel and AMD server), and quad socket (AMD server).
The only caveats:
- Quad-core Intel chips (desktop and server) only let you use 3 of the 4 cores effectively.
- There are no quad-core AMD server chips available now, and there probably won't be until late 2007.
Linking boxes via a simple GigE interconnect gives virtually linear scaling for fewer than 24 cores, in my experience.
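For what it's worth, both the near-linear scaling and the quad-core caveat come down to how much of the work doesn't parallelise. Here is a toy Amdahl's-law calculation in C; the serial fraction s is a made-up placeholder, not a measured CFX number:

Code:
/* Toy Amdahl's-law speedup estimate: a fraction s of the work is
 * assumed serial and never speeds up. s = 0.02 is a placeholder. */
#include <stdio.h>

int main(void)
{
    const double s = 0.02;  /* hypothetical serial fraction */

    for (int n = 1; n <= 32; n *= 2)
        printf("%2d cores -> %.2fx speedup\n",
               n, 1.0 / (s + (1.0 - s) / n));
    return 0;
}

With s that small, the curve stays close to ideal out to a few dozen cores; contention for a socket's shared memory bus effectively adds to s, which is one way to read the 3-of-4-cores caveat.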
June 5, 2007, 16:58
Re: Parallel Computing on Multi-Core Processors

#4
Guest
Posts: n/a
Memory scaling is pretty linear too. If your problem took 4 GB to run on one core, it will take about 1 GB per core if you run it on 4 cores, and so on.
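Put another way: per-partition memory is the single-core total divided by the number of partitions, plus a little overhead for the overlap (halo) cells at partition boundaries. A back-of-envelope sketch in C -- the 5% overhead figure is an assumption for illustration, not a CFX number:

Code:
/* Back-of-envelope memory per partition for a domain-decomposed run.
 * The halo overhead fraction is illustrative, not a CFX figure. */
#include <stdio.h>

int main(void)
{
    const double total_gb = 4.0;   /* whole problem on a single core */
    const double halo     = 0.05;  /* assumed overlap overhead       */

    for (int n = 1; n <= 16; n *= 2)
        printf("%2d partitions -> ~%.2f GB each\n",
               n, total_gb / n * (1.0 + halo));
    return 0;
}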
June 5, 2007, 17:23
Re: Parallel Computing on Multi-Core Processors

#5
Guest
Posts: n/a
Thanks guys ... I knew you would come up with the goods!

June 6, 2007, 18:58
Re: Parallel Computing on Multi-Core Processors

#6
Guest
Posts: n/a
Could you give more information on communication interconnects for HPC?
What do most people use? Ethernet, Gigabit Ethernet, Myrinet, InfiniBand ... something else?

June 7, 2007, 16:54
Re: Parallel Computing on Multi-Core Processors

#7
Guest
Posts: n/a
GigE is perfectly fine for fewer than 32 cores.
And once you account for the outrageous cost of exotic interconnects, GigE still works out cheaper even at 64 cores.
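As a sanity check on the cost argument, the arithmetic is simple; the prices below are placeholders (substitute real quotes), assuming 64 cores built as 16 quad-core boxes:

Code:
/* Interconnect cost comparison with PLACEHOLDER prices -- substitute
 * real quotes. Per-node cost = adapter + switch port. */
#include <stdio.h>

int main(void)
{
    const int    nodes       = 16;      /* 64 cores as quad-core boxes  */
    const double gige_node   = 50.0;    /* hypothetical GigE NIC + port */
    const double exotic_node = 1000.0;  /* hypothetical HCA + port      */

    printf("GigE   total: $%.0f\n", nodes * gige_node);
    printf("Exotic total: $%.0f\n", nodes * exotic_node);
    return 0;
}

At a per-node premium like that, the exotic fabric has to buy a lot of extra scaling to pay for itself at this core count.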