August 31, 2020, 18:35
HPC server for a start-up
#1
New Member
Killian
Join Date: Nov 2017
Posts: 26
Rep Power: 8
Hello everyone,
We need to build a small HPC server for our start-up to perform (mostly) CFD calculations. I will try to be as clear as possible. Our needs:
- CPU: 2x AMD EPYC 7302 16C. Why? Good price/performance, good total memory bandwidth, 128 MB L3 cache, and a reasonable TDP.
- GPU: RTX 2070. Why? 8 GB of VRAM, and prices have dropped with this generation as the new RTX 3xxx cards are coming.
- RAM: 128 GB (64 GB/CPU) at 3200 MHz, but how should we allocate it? Would 2x 8x8 GB be a good choice?
- Storage: an SSD (no QLC) would be better, but how much space should we allow?
- OS: I read CentOS is a good choice for an HPC server.
- Server hardware: rack-mounted, but what model would you recommend? Something like a Lenovo ThinkSystem SR665?
Thank you for your answers!
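On the RAM allocation question: EPYC Rome sockets have 8 memory channels each, so full bandwidth requires one DIMM per channel, i.e. 8x 8 GB per CPU (16x 8 GB total) rather than fewer, larger modules. A back-of-the-envelope check of the theoretical per-socket bandwidth (a sketch; 8 channels and 8 bytes per transfer are the DDR4/Rome figures, and measured numbers will be lower):

```shell
#!/bin/sh
# Theoretical memory bandwidth per EPYC 7302 socket:
# channels * transfer rate (MT/s) * bus width (8 bytes for a 64-bit DDR4 channel)
channels=8      # memory channels per Rome socket
mts=3200        # DDR4-3200 transfer rate in MT/s
bytes=8         # 8 bytes moved per transfer
echo "$(( channels * mts * bytes )) MB/s per socket"   # 204800 MB/s, i.e. ~205 GB/s
```

Populating only 2 or 4 DIMMs per socket would cut that bandwidth proportionally, which matters a lot for memory-bound CFD solvers.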
September 1, 2020, 14:14
#2
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,397
Rep Power: 46
How do you intend to access this PC? Do you need both users working in a graphical environment at the same time? Is this just the first one of many compute nodes to come, hence 4 HPC packs?
September 1, 2020, 16:48
#3
New Member
Killian
Join Date: Nov 2017
Posts: 26
Rep Power: 8
Hello flotus1,
First, thank you for all your answers on this forum, I learned a lot from you!
Thanks!
September 1, 2020, 17:51
#4
Super Moderator
Alex
Join Date: Jun 2012
Location: Germany
Posts: 3,397
Rep Power: 46
That's way outside my area of expertise, so I might be completely wrong here. But you should definitely check whether Nvidia's consumer GPUs allow several sessions at the same time. The keyword here might be SR-IOV. But again, it's a total shot in the dark on my side.
Anyway, back to basics.
September 2, 2020, 01:47
64 Cores workstation
#5
Member
Hi Killian
I would suggest:
- AMD EPYC Rome processors, because of the best value for money in your price range.
- 16x 8 GB 3200 MHz DDR4 ECC, for maximum memory bandwidth. That fills each memory slot with the smallest 3200 MHz DDR4 module. 5M cells is a small model; even distributed over many cores, 128 GB should be sufficient.
- Run your DES/LES in the cloud if possible. STAR-CCM+ scales well down to roughly 10,000 cells per core, and I would hope Fluent is fairly similar, so you could be running your 5M-cell LES sim on 500 cores for a speed-up of 10x compared to your workstation.
- CentOS is a good option for CFD HPC.
- Not sure about sharing GPUs; the last time I looked into it, Nvidia had a fantastic and incredibly expensive solution, and it only worked through a VM/container.
You should be running most of your simulations in batch mode. Using a script is probably OK for 2 users. For more users, a batch job manager is probably a good idea; PBS works very well. You'll most probably have to manage access for interactive CFD work.
Please have a look at our 64-core workstation: https://trampocfd.com/collections/wo...es-workstation
If you like what you see, please contact us through PM or the contact form on our home page, and we'll send you a questionnaire to make sure we address all your needs thoroughly.
Best regards,
Gui
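To make the batch-mode suggestion concrete, a minimal PBS job script could look like the sketch below. The queue name, core count, and the Fluent invocation (journal file `run.jou`) are assumptions and will depend on your installation:

```shell
#!/bin/bash
#PBS -N cfd_case           # job name
#PBS -l select=1:ncpus=32  # one node, 32 cores (2x EPYC 7302)
#PBS -l walltime=24:00:00  # wall-clock limit
#PBS -q workq              # queue name (site-specific assumption)

cd "$PBS_O_WORKDIR"        # run from the directory the job was submitted from

# Hypothetical batch Fluent run: a journal file drives the solver,
# no GUI (-g), parallel across the 32 allocated cores (-t32).
fluent 3ddp -g -t32 -i run.jou > run.log 2>&1
```

Each user then submits with `qsub job.pbs`, and the scheduler serializes access to the machine instead of two users oversubscribing the cores by hand.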
October 8, 2020, 15:48
#6
New Member
Killian
Join Date: Nov 2017
Posts: 26
Rep Power: 8
Hmm, SR-IOV seems to be for virtualization. We won't virtualize our server; it will run bare-metal Linux with multiple users for simulations. So SR-IOV is not necessary, is it?
I'm also really lost about the kind of storage we should put in the server. SSD vs. HDDs? 500 GB, 1 TB?
October 8, 2020, 15:58
#7
New Member
Killian
Join Date: Nov 2017
Posts: 26
Rep Power: 8
Thank you for your answer. Unfortunately, we can't buy your products, as we need an EU warranty; it would be too complicated to buy from Australia.
Do you think we should build the server in a tower case, or is a rack better? I see you only put a 500 GB NVMe drive for the primary partition and a 1 TB SSD for the main partition in your CFD workstation; is that really enough?
I also have zero idea about sharing GPUs. The server will run Linux, which manages multiple users. So if user A runs a CFD-Post session, will user B be able to run a CFD-Post render too?
Best regards
October 8, 2020, 20:13
#8
Member
Hi Killian
1/ Our warranty is the parts manufacturers' international warranty: the manufacturer ships a new part to you after a remote diagnostic shows a defective part. Your location makes no difference.
2/ Rack vs. tower: do you have rack space? Otherwise, tower is the default.
3/ SSD capacity: that totally depends on your usage. How much data will your largest simulation produce? Be careful if you run transient simulations and generate data for making videos; you could be generating TBs of data with a 10M-cell mesh.
4/ I can't help with shared usage on a standard GPU. I looked at Nvidia Grid for our cloud solution but never went ahead with it.
Best regards,
gui@trampoCFD
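To put a rough number on the storage question in point 3, here is a quick transient-output estimate (a sketch; the cell count matches the 10M-cell example above, but the number of saved variables and snapshots are made-up assumptions to illustrate the arithmetic, and file-format overhead is ignored):

```shell
#!/bin/sh
# Rough size of transient CFD output:
# one 8-byte double per cell, per saved variable, per snapshot.
cells=10000000     # 10M-cell mesh
variables=6        # e.g. u, v, w, p, k, omega (assumption)
snapshots=2000     # saved time steps for an animation (assumption)
bytes=$(( cells * 8 * variables * snapshots ))
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"   # ~894 GiB
```

A single animated transient run can therefore fill a 1 TB drive on its own, which argues for a modest NVMe system drive plus separate, larger bulk storage for results.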