|
February 11, 2015, 09:26 |
OpenFOAM on an Odroid C1 ARM cluster
|
#1 |
New Member
Thomas Schulz
Join Date: Jul 2014
Posts: 17
Rep Power: 11 |
Hello everyone ...
during my latest tests I reached a point where I needed a lot of computational power (cases ran for up to several weeks). I wanted to investigate further, in particular the power consumption and the performance-per-watt ratio, so I built myself a small Arch Linux based ARM cluster (6 nodes) using the new Odroid C1 boards. The boards are installed and the distribution runs fine; MPI test programs compile and run with both Open MPI and MPICH.

I am now at the point where I would like to run some test cases with OpenFOAM (my latest cases were built and run with version 2.2.2). My laptop (x86 based, running Xubuntu 14.04) serves as head node and data storage. Each node has a 16 GB SD card and all are connected to a gigabit Ethernet switch. Passwordless SSH login for my cluster user ('wolf' in this case) works from the head node to the slave nodes and the other way around. If I issue the command:
Code:
/opt/openmpi/bin/mpirun -mca plm_rsh_no_tree_spawn 1 -np 24 -hostfile hostfile.openmpi /opt/openfoam/OpenFOAM/OpenFOAM-2.2.2/bin/foamExec icoFoam -parallel -case /opt/cases/cavity
the run hangs: no output, no error message (find the output up to that point at the end of this post). Logging into one of the nodes and starting "top" shows the cores still under load, but nothing more appears in the output. Can anyone give me a hint on what might be wrong here? Has anyone had a similar problem?

Best, Thomas

Code:
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2.2.2                                 |
|   \\  /    A nd           | Web:      www.OpenFOAM.org                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build  : 2.2.2-9739c53ec43f
Exec   : icoFoam -parallel -case /opt/cases/cavity
Date   : Feb 11 2015
Time   : 14:16:40
Host   : "cluster02"
PID    : 2599
Case   : /opt/cases/cavity
nProcs : 24
Slaves : 23
(
"cluster02.2600"
"cluster02.2601"
"cluster02.2602"
"cluster03.1912"
"cluster03.1913"
"cluster03.1914"
"cluster03.1916"
"cluster04.1874"
"cluster04.1875"
"cluster04.1876"
"cluster04.1878"
"cluster05.1987"
"cluster05.1988"
"cluster05.1989"
"cluster05.1991"
"cluster06.1847"
"cluster06.1848"
"cluster06.1849"
"cluster06.1851"
"cluster07.1750"
"cluster07.1751"
"cluster07.1752"
"cluster07.1754"
)
Pstream initialized with:
    floatTransfer      : 0
    nProcsSimpleSum    : 0
    commsType          : nonBlocking
    polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Disallowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

Reading transportProperties

Reading field p

Reading field U

Reading/calculating face flux field phi

Starting time loop

Time = 0.005

Courant Number mean: 0 max: 0
DILUPBiCG:  Solving for Ux, Initial residual = 1, Final residual = 5.52301e-06, No Iterations 14
DILUPBiCG:  Solving for Uy, Initial residual = 0, Final residual = 0, No Iterations 0
DICPCG:  Solving for p, Initial residual = 1, Final residual = 6.39255e-07, No Iterations 61
time step continuity errors : sum local = 4.27023e-09, global = -4.23516e-20, cumulative = -4.23516e-20
DICPCG:  Solving for p, Initial residual = 0.523593, Final residual = 5.26347e-07, No Iterations 60
time step continuity errors : sum local = 5.83689e-09, global = 8.14773e-20, cumulative = 3.91256e-20
ExecutionTime = 0.43 s  ClockTime = 1 s

Time = 0.01

Courant Number mean: 0.0976806 max: 0.585722
DILUPBiCG:  Solving for Ux, Initial residual = 0.148584, Final residual = 5.69895e-06, No Iterations 12
DILUPBiCG:  Solving for Uy, Initial residual = 0.256618, Final residual = 9.01967e-06, No Iterations 12
DICPCG:  Solving for p, Initial residual = 0.379084, Final residual = 6.6662e-07, No Iterations 59
time step continuity errors : sum local = 6.22843e-09, global = -1.23912e-19, cumulative = -8.4786e-20
DICPCG:  Solving for p, Initial residual = 0.28687, Final residual = 8.18003e-07, No Iterations 58
time step continuity errors : sum local = 8.31983e-09, global = 8.18578e-19, cumulative = 7.33792e-19
ExecutionTime = 0.79 s  ClockTime = 1 s

Time = 0.015

Courant Number mean: 0.14468 max: 0.758278
DILUPBiCG:  Solving for Ux, Initial residual = 0.0448619, Final residual = 3.99534e-06, No Iterations 11
DILUPBiCG:  Solving for Uy, Initial residual = 0.0782148, Final residual = 5.47164e-06, No Iterations 11
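For completeness, since the hostfile itself is not shown above: an Open MPI hostfile for this six-node layout would look something like the following sketch. The hostnames match the slave names in the log; four slots per node is an assumption based on the C1's quad-core CPU, not a copy of the actual file.
Code:
# hostfile.openmpi -- one entry per node, "slots" = MPI ranks placed on that node
# (hostnames taken from the solver log above; slot counts are assumed)
cluster02 slots=4
cluster03 slots=4
cluster04 slots=4
cluster05 slots=4
cluster06 slots=4
cluster07 slots=4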
|
February 11, 2015, 14:24 |
[Solved] ...
|
#2 |
New Member
Thomas Schulz
Join Date: Jul 2014
Posts: 17
Rep Power: 11 |
Hi everyone ...
this was evidently a version problem with Open MPI. I had used the most recent version from http://www.open-mpi.org/software/ompi/v1.8/ . After reading a bit, I tried Open MPI 1.6.3, which is included in the ThirdParty archive for OpenFOAM 2.2.2, and ... problem solved! :-) Cases are running fine now. Anyway, can anybody tell me why the most recent version 1.8.4 of Open MPI didn't work? What is the difference? I read something about "thread support"?

Best, Thomas
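In case it helps anyone who lands here: switching OpenFOAM 2.2.2 over to the ThirdParty Open MPI is, in outline, a matter of selecting it in etc/bashrc and rebuilding the MPI-dependent layer. A rough sketch, assuming the install paths from my first post and a ThirdParty-2.2.2 directory next to OpenFOAM-2.2.2 (your locations may differ):
Code:
# In OpenFOAM-2.2.2/etc/bashrc, select the bundled Open MPI:
#     export WM_MPLIB=OPENMPI      # resolves to the ThirdParty openmpi-1.6.3
source /opt/openfoam/OpenFOAM/OpenFOAM-2.2.2/etc/bashrc

# Build the bundled Open MPI (assumed ThirdParty location):
cd /opt/openfoam/OpenFOAM/ThirdParty-2.2.2 && ./Allwmake

# Relink the parallel communication layer against it:
cd $WM_PROJECT_DIR/src/Pstream && ./Allwmake

# Sanity check: mpirun should now come from ThirdParty, not /opt/openmpi
which mpirun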
|
February 11, 2015, 15:25 |
|
#3 | |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Rep Power: 128 |
Greetings Thomas,
Bruno |
February 12, 2015, 15:18 |
Cluster tuning/tweaking?
|
#4 |
New Member
Thomas Schulz
Join Date: Jul 2014
Posts: 17
Rep Power: 11 |
Hello Bruno,
now that the cluster is running and doing its work, I wonder whether there are ways to tweak the performance. Is there a way to calculate the ideal number of cells per processor? I think the bottleneck is the communication between the nodes, which slows the whole thing down considerably. I would like to get as much out of those little boards as possible.

Best, Thomas
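To make the question concrete, the first knob I know of is the domain decomposition itself. Here is a sketch of a system/decomposeParDict for this 24-core layout; the method and coefficients are illustrative guesses, not what the cavity case above actually used.
Code:
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

// 6 nodes x 4 cores = 24 subdomains
numberOfSubdomains 24;

// scotch minimises the number of faces shared between subdomains,
// i.e. the amount of data exchanged over the (slow) network
method          scotch;

// Explicit alternative: split the domain into a 6 x 2 x 2 grid of blocks
// method          hierarchical;
// hierarchicalCoeffs
// {
//     n       (6 2 2);
//     delta   0.001;
//     order   xyz;
// }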
|
February 14, 2015, 11:24 |
|
#5 |
Retired Super Moderator
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,975
Blog Entries: 45
Rep Power: 128 |
Quick answer: see my blog post Notes about running OpenFOAM in parallel
|
|
June 21, 2016, 22:20 |
Did it go well?
|
#6 |
New Member
Gabriel
Join Date: Jun 2016
Posts: 5
Rep Power: 9 |
Hi Thomas,
Could you share more information about your Odroid cluster, please? I just bought a pair of Odroid-C2 boards for testing and I am trying to figure out how good a small cluster of Odroids could be in comparison with a workstation of the same price.

Best Regards, Gabriel
|
June 22, 2016, 02:49 |
Project closed ... Odroids sold...
|
#7 |
New Member
Thomas Schulz
Join Date: Jul 2014
Posts: 17
Rep Power: 11 |
Dear Gabriel,
the project I was working on has been finished and closed. I have already sold the Odroids and I am currently not into OpenFOAM at all. I am sorry that I cannot help you here.

Best, Thomas
|
|
|