
parallel performance on BX900


December 20, 2010, 00:38
parallel performance on BX900
  #1
New Member
 
Ken UZAWA
Join Date: Mar 2010
Location: 4-6-1 KOMABA MEGURO-KU, TOKYO 153-8505, JAPAN
Posts: 2
Dear All,

OpenFOAM v1.6 has been successfully installed on a supercomputer at the Japan Atomic Energy Agency. The supercomputer is a hybrid system consisting of three computational server systems: (I) the Large-scale Parallel Computation Unit, (II) the Application Development Unit for the Next Generation Supercomputer, and (III) the SMP Server. The Large-scale Parallel Computation Unit uses PRIMERGY BX900, Fujitsu's latest blade server, with 2134 nodes (4268 CPUs, 17072 cores) connected by the InfiniBand QDR high-speed interconnect. The details of the Large-scale Parallel Computation Unit are as follows.

CPU: Intel Xeon X5570 (2.93 GHz) × 2 CPUs per node
L1 cache: 256 KB
L2 cache: 1 MB
L3 cache: 8 MB
Cores: 4 per CPU
Node communication performance: 8 GB/s
OS: Red Hat Enterprise Linux 5

Based on the LINPACK benchmark, the supercomputer achieved a performance of 186.1 teraflops, which made it the fastest system in Japan on the TOP500 list released this October.

I would like to report parallel performance up to 256 cores on the Large-scale Parallel Computation Unit. I thought it would be a good idea to share the results with other supercomputer users, and I hope this information is of some help.
Here, a simplified three-dimensional dam-break problem is chosen as the test case, and the two-phase flow is solved with the interFoam solver. The numerical conditions are the same as the experimental settings used by Martin and Moyce [1] and Koshizuka et al. [2]. (A sketch of a typical parallel case setup is given after the references.)
[1] J.C. Martin and W.J. Moyce, "Part IV. An experimental study of the collapse of liquid columns on a rigid horizontal plane", Phil. Trans. R. Soc. Lond. A, 244, 312-324 (1952).
[2] S. Koshizuka, H. Tamako and Y. Oka, "A particle method for incompressible viscous flow with fluid fragmentation", Computational Fluid Mechanics Journal, 113, 134-147 (1995).
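For anyone who wants to set up a similar run, the standard OpenFOAM 1.6 workflow is to split the mesh with decomposePar and launch the solver under MPI. The dictionary below is only an illustrative sketch; the decomposition method and coefficients actually used for the BX900 runs are not stated in this post.

Code:
// system/decomposeParDict -- illustrative sketch, not the actual BX900 settings
numberOfSubdomains 128;        // one subdomain per core

method          simple;        // simple, hierarchical or metis are the usual choices

simpleCoeffs
{
    n           (8 4 4);       // 8 x 4 x 4 = 128 subdomains (assumed split)
    delta       0.001;
}

The case is then decomposed and run in parallel, for example:

Code:
blockMesh                      # build the mesh
decomposePar                   # split the case across processors
mpirun -np 128 interFoam -parallel > log.interFoam.128 2>&1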

It is found that the solver scales well up to 128 cores and still maintains good performance on 256 cores (please see the attached file for details).
Parallel performance up to the full machine (17072 cores) will be reported later.
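As a side note for anyone repeating the measurement: OpenFOAM prints an "ExecutionTime = ... ClockTime = ..." line every time step, so the wall-clock time of each run can be taken from the end of the solver log and the relative speedup computed as S(n) = T(n_ref)/T(n). A minimal sketch, assuming the logs are named as in the run command above:

Code:
# pull the final wall-clock time out of each run's log (log names assumed)
for n in 16 32 64 128 256
do
    t=$(grep ClockTime log.interFoam.$n | tail -1 | awk '{print $7}')
    echo "$n cores: $t s"      # relative speedup: S(n) = t_16/t_n, ideal n/16
done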
Attached Images
File Type: jpg speedup.jpg (46.6 KB, 48 views)

December 20, 2010, 01:48
  #2
Super Moderator
 
Niklas Nordin
Join Date: Mar 2009
Location: Stockholm, Sweden
Posts: 693
If you instead plot the number of cells per core, what would the numbers be?
I usually try to go for approximately 50k cells per core; going lower than that is not worth it.

December 22, 2010, 04:29
  #3
New Member
 
Ken UZAWA
Join Date: Mar 2010
Location: 4-6-1 KOMABA MEGURO-KU, TOKYO 153-8505, JAPAN
Posts: 2
Dear Niklas Nordin,

Thank you very much for your interest in my work. I would be happy to try to answer your question.

Quote:
Originally Posted by niklas
If you instead plot the number of cells per core, what would the numbers be?
I usually try to go for approximately 50k cells per core; going lower than that is not worth it.
In this case, the total number of cells is approximately 8 million, so up to 128 cores this choice meets your guideline (the arithmetic is sketched below). As you suggest, I am planning to perform more simulations, increasing the number of cells from 8 million to tens of millions. Thank you very much for pointing that out.
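For concreteness, the cells-per-core arithmetic behind that statement, using the numbers quoted in the posts above:

Code:
# 8M cells spread over n cores (guideline above: ~50k cells/core)
echo $((8000000/128))    # 62500 cells/core at 128 cores -- above the guideline
echo $((8000000/256))    # 31250 cells/core at 256 cores -- below it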

September 5, 2011, 16:52
  #4
Senior Member
 
Daniel WEI (老魏)
Join Date: Mar 2009
Location: Beijing, China
Posts: 689
Blog Entries: 9
Quote:
Parallel performance up to the full machine (17072 cores) will be reported later.
Any updates?
__________________
~
Daniel WEI
-------------
Boeing Research & Technology - China
Beijing, China
