
MPI code on multiple nodes, scalability and best practice


October 7, 2014, 05:07
Senior Member
Tom-Robin Teschner
Join Date: Dec 2011
Location: Cranfield, UK
Posts: 202
Let's assume I have a code that scales well when I run it on a cluster using 32 cores, all in one node.
Now I move to a different cluster that offers only 16 cores per node, so I schedule a job with 16 cores on 2 nodes. Or, if 8 cores per node is the maximum, I select 4 nodes, etc.
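For reference, a multi-node job like this is usually requested through the cluster's batch scheduler. A minimal sketch, assuming SLURM (directive names and the launcher vary by site, and `./my_solver` is a placeholder for the actual executable):

```shell
#!/bin/bash
#SBATCH --nodes=2               # two nodes ...
#SBATCH --ntasks-per-node=16    # ... with one MPI rank per core, 32 ranks total
#SBATCH --time=01:00:00

# srun (or mpirun -np 32, depending on the installation) starts the ranks
# across both nodes; MPI then handles inter-node traffic over the interconnect.
srun ./my_solver
```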

I worked with a code in the past that crashed when I used several nodes, because it assumed that all cores had access to the same memory space; on two different nodes there are two separate memory spaces, hence the crash. The only workaround was to copy the data at every timestep back to the home directory and assemble it so that every processor could read the data it needed, but that slowed the code down dramatically.

So how do I get data at the inter-processor boundaries from the neighbouring processor? What is the best-practice approach in that case? And does the use of multiple nodes affect scalability compared to a single-node job (assuming the cores are equal, as in the example above)?
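To make the boundary question concrete, here is a minimal serial Python sketch of the ghost-cell (halo) idea on a 1D domain split between two "ranks". In a real MPI code each array would live on a different rank, and the two assignments would become matching MPI_Sendrecv (or Isend/Irecv) calls rather than direct memory copies; the sizes and layout here are purely illustrative:

```python
nloc = 8  # interior cells owned by each rank

# Each rank stores: [left ghost] + interior cells + [right ghost].
left  = [0.0] + [float(i) for i in range(nloc)] + [0.0]           # global cells 0..7
right = [0.0] + [float(i) for i in range(nloc, 2 * nloc)] + [0.0] # global cells 8..15

# Halo exchange: each rank sends its boundary interior cell into the
# neighbour's ghost cell. In MPI this pair of copies is one message
# exchange per shared face, not a trip through the filesystem.
right[0] = left[-2]   # left rank's last interior cell -> right rank's left ghost
left[-1] = right[1]   # right rank's first interior cell -> left rank's right ghost

# Each rank can now apply a stencil (e.g. a central difference) across all
# of its interior cells without touching the other rank's memory directly.
print(left[-1], right[0])  # -> 8.0 7.0
```

The key point is that only the thin ghost layer crosses the node boundary each timestep, which is why message passing scales where copying whole fields to disk does not.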


cluster computing, mpi, nodes


