
dimdia March 31, 2021 14:07

HPC for OpenFOAM
 
Hi everyone,

I am doing a PhD on the aerodynamic analysis of hybrid electric aircraft, and I am trying to write a proposal for an HPC facility at my workplace. I will have to set up an HPC server to run OpenFOAM, but the problem is that I am not an expert in this field and I don't know where to start.

The general idea is to have at most 100-150 cores for 1-2 people running simulations.

Any guidelines or useful information?

Thanks in advance.

Roman1 April 1, 2021 07:46

HPC providers such as Sabalcore, Kaleidosim, or Yandex Cloud (rent a virtual machine), etc., or your own cluster.

chegdan April 6, 2021 10:13

Hey Dimdia,

First off, good luck with writing the proposal ... it's a roller coaster. Since nobody has mentioned anything yet, I will jump in.
  1. Look at hardware providers: get a quote from a hardware provider (Penguin Computing, Sabalcore, etc.), tell them your application, and keep in mind the points below about architecture, node communication, and memory.
  2. Chip architecture: Almost all CFD applications are memory-bandwidth limited, so a chip architecture with more memory channels will get you more out of CFD. In that respect, the recent AMD generations are your best choice. Azure and Oracle Cloud have them if you want to test things out (see the sizing sketch after this list for a rough bandwidth estimate).
  3. Memory: Memory is important; for meshing, a good rule of thumb is 2 GB per million cells of your simulation. Think about this from a global perspective first, then about the amount of memory per socket and per core that is running an MPI instance (also covered in the sizing sketch after this list).
  4. Storage: If you have the money to put an SSD in each node, go for it. It is also nice to have a dedicated storage node on your cluster and data-retention policies that prevent data hoarding on your actual compute nodes. Force users to move data to the backup storage and keep the local nodes and home directories as small as possible; if you do this, you can use SSDs very effectively.
  5. Cells per core: Again, this comes down to knowing how big your problem will be, as above. I've seen anywhere from 50k to 250k cells per core do just fine. It really depends on your solver, but overall that is a very loose recommendation (it is folded into the sizing sketch after this list).
  6. Node communication: If you have more than two nodes you will need a high-speed interconnect; it is not simply a router you buy off the shelf. You will compile your own MPI, or use something like Intel MPI, against the local drivers to get maximum speed (a quick way to check what you actually get is the ping-pong sketch after this list).
  7. Lots of other unknowns: there are quite a few more decisions after you have the hardware, and how you set up the environment is key: which Linux OS, which queueing system (PBS, PBS Pro, SLURM, etc.), and more.
  8. Hardware location: These machines are noisy and require cooling for optimal performance. Places like Sabalcore or Penguin Computing will literally keep the hardware for you, and you pay a fee to have it cooled. Some universities will even allow you to do this if you reach out to them.
  9. Crunch the numbers: Look at HPC compute providers and see what their rates are. Compare those numbers against your quotes from hardware providers, the custom build you come up with yourself, and the cloud or bare-metal compute providers (a rough buy-versus-rent sketch follows this list). Think about the lifetime of your cluster and whether you will need to hire or contract someone to maintain it. Think about cooling. We have 3 clusters in our group, and it is great to have complete and quick access and, for now, only have to think about electricity costs. It does sting when you need to replace things.
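
To put some numbers on points 2, 3, and 5, here is a rough back-of-the-envelope sketch in Python. The 2 GB per million cells rule and the cells-per-core range come from the list above; the mesh size, the 8 memory channels, and the DDR4-3200 speed are just assumptions for illustration, so plug in your own values.

```python
# Rough cluster-sizing sketch based on the rules of thumb above.
# Every input is an assumption for illustration -- adjust to your own case.

mesh_cells = 15e6           # assumed target mesh size: 15 million cells
gb_per_million_cells = 2.0  # rule of thumb from point 3
cells_per_core = 100e3      # pick something in the 50k-250k range from point 5

total_memory_gb = (mesh_cells / 1e6) * gb_per_million_cells
cores_needed = mesh_cells / cells_per_core

# Theoretical per-socket memory bandwidth (point 2): channels * transfer rate * 8 bytes.
# Assuming 8 channels of DDR4-3200 per socket.
channels = 8
transfers_per_s = 3200e6
bandwidth_gb_s = channels * transfers_per_s * 8 / 1e9

print(f"Total memory needed            : ~{total_memory_gb:.0f} GB")
print(f"Cores needed                   : ~{cores_needed:.0f}")
print(f"Memory per core                : ~{total_memory_gb / cores_needed:.1f} GB")
print(f"Theoretical bandwidth / socket : ~{bandwidth_gb_s:.0f} GB/s")
```

With these assumed numbers you land at roughly 150 cores and 30 GB of RAM, which happens to match the 100-150 core target you mentioned.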
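For point 6, a quick way to check whether the interconnect (and the MPI build on top of it) is actually delivering is a ping-pong test between two ranks placed on different nodes. Below is a minimal sketch using mpi4py and NumPy; those two packages are my own assumption here, and any standard MPI benchmark (e.g. the OSU micro-benchmarks) would tell you the same thing.

```python
# Minimal MPI ping-pong bandwidth test between rank 0 and rank 1.
# Run with the two ranks on different nodes, e.g.:
#   mpirun -np 2 --map-by node python pingpong.py
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

n_bytes = 64 * 1024 * 1024            # 64 MB message
buf = np.zeros(n_bytes, dtype=np.uint8)
repeats = 20

comm.Barrier()
t0 = time.perf_counter()
for _ in range(repeats):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=1)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=1)
t1 = time.perf_counter()

if rank == 0:
    # Each repeat moves the message there and back again.
    gb_moved = 2 * repeats * n_bytes / 1e9
    print(f"Effective bandwidth: {gb_moved / (t1 - t0):.2f} GB/s")
```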
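Finally, for point 9, the buy-versus-rent comparison is just arithmetic once you have the quotes. A hedged sketch follows; every price in it is a placeholder you should replace with your actual quotes and local electricity rates.

```python
# Buy-vs-rent comparison over the expected lifetime of the cluster.
# Every number below is a placeholder -- substitute your own quotes.

lifetime_years = 4
core_count = 128
utilisation = 0.5                     # fraction of the year the cluster is busy

# Owning: hardware quote plus running costs.
hardware_cost = 60_000                # quote from a hardware provider
power_kw = 4.0                        # assumed average draw of a small cluster
electricity_per_kwh = 0.20
admin_cost_per_year = 5_000           # maintenance / sysadmin time, assumed
own_cost = (hardware_cost
            + power_kw * 24 * 365 * lifetime_years * electricity_per_kwh
            + admin_cost_per_year * lifetime_years)

# Renting: pay per core-hour from a cloud or bare-metal provider.
price_per_core_hour = 0.05            # placeholder rate
core_hours = core_count * 24 * 365 * lifetime_years * utilisation
rent_cost = core_hours * price_per_core_hour

print(f"Own  : ~{own_cost:,.0f} over {lifetime_years} years")
print(f"Rent : ~{rent_cost:,.0f} over {lifetime_years} years")
```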

There are many things to think about, and doing it right is a considerable task, so it's good to at least get a professional quote wherever you are. If you're in the US, there are quite a few providers. Best of luck.

