September 24, 2021, 15:19
Server configuration suggestion (CFD, FEM)
#1
Member
Join Date: Mar 2019
Posts: 31
Rep Power: 7
Hi,
can you suggest a server configuration optimized for finite volume method (OpenFOAM, Fluent) and finite element method (Comsol) simulations? 3-4 people have to work on this server simultaneously. Is it better to maximize physical CPUs or virtual CPUs? We don't need a very good GPU for graphics. The budget is around 35,000-45,000 €.
September 24, 2021, 15:48
#2
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,399
Rep Power: 46
That's not a lot of information to base a hardware recommendation on.

Comsol is the difficult part here. Knowing what type of simulation and problem size you want to run is crucial. They have a write-up here: https://www.comsol.de/support/knowledgebase/866
Also make sure to follow some of the links there for further reading, e.g. https://www.comsol.com/blogs/much-me...-comsol-models
Some of the info in there is no longer relevant, in particular the part about memory population affecting memory frequency. But the rest sounds quite reasonable and should be a good starting point to narrow things down.

You might even end up with a heterogeneous cluster: one node specialized for (large) Comsol runs, and more cost-efficient nodes for the rest.
September 27, 2021, 05:30
#3
Member
Join Date: Mar 2019
Posts: 31
Rep Power: 7
I am not the one using Comsol, but I know they are running compressible flow in porous media simulations. With OpenFOAM I am running steady-state/transient incompressible fluid-flow simulations with the Boussinesq approximation, on a mesh of 10 million elements. Right now we are using a machine with 256 GB RAM, 2 physical processors with 36 (2x18) cores and 72 logical processors/threads, and HDD storage.

My first question: when I decompose an OpenFOAM simulation into X processors, am I using X cores or X logical processors? As you said, it is better to use X cores with 1 thread per core.

As for RAM, is it better to have, for example, 8x64 GB or 16x32 GB?

To help you size the server and understand the computational load: we had no problem running 1 OpenFOAM simulation (16 processors) and 1 Comsol simulation (I think 8-12 processors) at the same time, but when a third person logged in to run another simulation with 12 million elements (I don't know how many processors) the server started slowing down.

I don't know if this information helps; I have no idea how to select the optimal configuration. If you can give me an initial configuration, I can ask other people in my institution how to improve it, and whether anything needs to be modified to fit it into the network.
September 28, 2021, 06:15
#4
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,399
Rep Power: 46
"Should" is the caveat here. That's why, to this day, the recommendation to just disable SMT at the BIOS level is still around.

It's about load balancing. A parallel mesher and solver will attempt to distribute work units of similar weight to each process, and usually all processes have to wait at some point until the last process has finished its task. Putting two work units instead of one on a physical core breaks all assumptions for load balancing: that core will take significantly longer than the cores with only one work unit, which drastically increases overall run time. Even in a best-case scenario (which we don't have for CFD workloads), SMT2 only increases the performance of a core by ~30%, which is never enough to offset twice the amount of work.
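To put this into practice: on Linux you can count physical cores (as opposed to logical CPUs) and launch one MPI rank per physical core. A minimal sketch, assuming a Linux node with the standard `lscpu` and `nproc` tools; the solver name and the Open MPI `--bind-to core` pinning flag in the comment are illustrative:

```shell
# lscpu -p=CORE prints one line per logical CPU with its core ID,
# so the number of unique core IDs is the number of physical cores.
physical=$(lscpu -p=CORE | grep -v '^#' | sort -u | wc -l)
logical=$(nproc)
echo "physical cores: $physical, logical CPUs: $logical"

# Launch with one rank per physical core and pin ranks to cores
# (Open MPI syntax; decomposePar must use the same subdomain count):
# mpirun -np "$physical" --bind-to core simpleFoam -parallel
```

With SMT2 enabled, `logical` will typically be twice `physical`; decomposing into `physical` subdomains avoids the oversubscription problem described above.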
There are a few possible causes for the slowdown:
1) Too many tasks led to oversubscribed cores, see above.
2) The system ran out of physical memory. Swap needs to be avoided at all cost, i.e. make sure you have enough memory on each node. And maybe make nodes exclusive to a single job, or enforce memory limits via a queuing system.
3) Running out of memory bandwidth or other shared CPU resources like L3 cache. Not much you can do about this one, other than not allowing everyone to run a simulation on the same node.
See also: General recommendations for CFD hardware [WIP]
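Enforcing memory limits and node exclusivity via a queuing system could look like the following Slurm batch sketch. The job name, task count, and memory figure are hypothetical placeholders to adapt to your own nodes; `--mem` and `--exclusive` are standard Slurm options:

```shell
#!/bin/bash
#SBATCH --job-name=of_10M   # hypothetical job name
#SBATCH --nodes=1
#SBATCH --ntasks=16         # one rank per physical core in use
#SBATCH --mem=200G          # hard limit: the job is killed before the node swaps
#SBATCH --exclusive         # no other jobs share this node

# The scheduler hands the node to one job at a time, so a third user's
# run waits in the queue instead of oversubscribing cores or memory.
mpirun simpleFoam -parallel
```

This directly addresses points 1 and 2: a queued job can never oversubscribe cores or push the node into swap.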
Depending on what the actual requirements from your FEM colleagues are for Comsol, and which priority it has, you have a few options:
1) Just use the same node for Comsol too.
2) Bump up the memory on one of these nodes if they need more than 256 GB.
3) Possibly in addition to 2: get a RAID0 array of some of the fastest NVMe SSDs for the Comsol node if they need even more "memory" for out-of-core simulations.
4) Get faster CPUs for the Comsol node. Not all of their solver variants can run distributed-parallel (a single simulation running on several compute nodes), so having the fastest CPU available is the only option to speed things up further.
5) (edit: forgot about this one, but really, the possibilities are endless) Use the machine you currently have exclusively for Comsol, maybe with a memory upgrade.
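For the RAID0 scratch idea in option 3, a Linux admin sketch might look like this. The device names and mount point are hypothetical; `mdadm` and `mkfs.xfs` are standard tools, and the Comsol flag in the last comment is from memory, so check it against your Comsol version's documentation:

```shell
# Stripe two NVMe drives into one fast scratch volume (hypothetical devices).
# RAID0 has no redundancy: use it only for recomputable solver scratch data.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
mkfs.xfs /dev/md0
mount /dev/md0 /scratch

# Then point Comsol's temporary directory at the array, e.g.:
# comsol batch -tmpdir /scratch/tmp ...
```

The point of RAID0 here is raw sequential bandwidth: out-of-core solvers stream matrix blocks to and from disk, so striping roughly doubles throughput over a single drive.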