
Discussion thread on how to install and use RapidCFD


Old   July 26, 2016, 21:30
Default Single thread running is ok, but parallel running still stuck
  #21
New Member
 
Pin Lyu
Join Date: Mar 2016
Location: Harbin
Posts: 6
Well, I abandoned the Quadro 4000 (CC 2.0) and found a Tesla K10 (CC 3.0) to test.
A single-thread GPU run is OK, about 4x faster than a single CPU thread (better performance than the cudaSolver in foam-extend 3.2).
However, when I compile OpenMPI 1.8.4 in the ThirdParty folder and try 'mpirun -np 2 pisoFoam -parallel', this error occurs:
Code:
/*---------------------------------------------------------------------------*\
| RapidCFD by simFlow (sim-flow.com)                                          |
\*---------------------------------------------------------------------------*/
Build  : dev-9fc614f4e816
Exec   : pisoFoam -parallel
Date   : Jul 27 2016
Time   : 08:44:37
Host   : "asus-Z10PA-D8-Series"
PID    : 16766
Case   : /home/asus/RapidCFD/asus-dev/grid_1000x1000/cavity-rcfd-2
nProcs : 2
Slaves : 1("asus-Z10PA-D8-Series.16767")
Pstream initialized with:
    floatTransfer      : 0
    nProcsSimpleSum    : 0
    commsType          : nonBlocking
    polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time

Create mesh for time = 0

terminate called after throwing an instance of 'thrust::system::system_error'
  what():  function_attributes(): after cudaFuncGetAttributes: invalid device function
[asus-Z10PA-D8-Series:16767] *** Process received signal ***
[asus-Z10PA-D8-Series:16767] Signal: Aborted (6)
[asus-Z10PA-D8-Series:16767] Signal code:  (-6)
[asus-Z10PA-D8-Series:16767] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x36cb0)[0x7ffaa3822cb0]
[asus-Z10PA-D8-Series:16767] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0x37)[0x7ffaa3822c37]
[asus-Z10PA-D8-Series:16767] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x148)[0x7ffaa3826028]
[asus-Z10PA-D8-Series:16767] [ 3] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(_ZN9__gnu_cxx27__verbose_terminate_handlerEv+0x155)[0x7ffaa412d535]
[asus-Z10PA-D8-Series:16767] [ 4] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e6d6)[0x7ffaa412b6d6]
[asus-Z10PA-D8-Series:16767] [ 5] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e703)[0x7ffaa412b703]
[asus-Z10PA-D8-Series:16767] [ 6] /usr/lib/x86_64-linux-gnu/libstdc++.so.6(+0x5e922)[0x7ffaa412b922]
[asus-Z10PA-D8-Series:16767] [ 7] pisoFoam(_ZN6thrust6system4cuda6detail5bulk_6detail14throw_on_errorE9cudaErrorPKc+0x55)[0x452995]
[asus-Z10PA-D8-Series:16767] [ 8] /home/asus/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/lib/libincompressibleRASModels.so(_ZZN6thrust6system4cuda6detail10for_each_nINS2_3tagENS_10device_ptrIiEEmNS_6detail23device_generate_functorINS7_12fill_functorIiEEEEEET0_RNS2_16execution_policyIT_EESC_T1_T2_EN10workaround13parallel_pathERNSD_IS4_EES6_mSB_+0x5e)[0x7ffabd3e1a9e]
[asus-Z10PA-D8-Series:16767] [ 9] /home/asus/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/lib/libOpenFOAM.so(_ZN4Foam7gpuListIiEC1Ei+0x92)[0x7ffaa4e50912]
[asus-Z10PA-D8-Series:16767] [10] /home/asus/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/lib/libOpenFOAM.so(_ZN4Foam8polyMeshC1ERKNS_8IOobjectE+0x426)[0x7ffaa5025a46]
[asus-Z10PA-D8-Series:16767] [11] /home/asus/RapidCFD/RapidCFD-dev/platforms/linux64NvccDPOpt/lib/libfiniteVolume.so(_ZN4Foam6fvMeshC2ERKNS_8IOobjectE+0x19)[0x7ffaa9769389]
[asus-Z10PA-D8-Series:16767] [12] pisoFoam[0x447d9d]
[asus-Z10PA-D8-Series:16767] [13] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf5)[0x7ffaa380df45]
[asus-Z10PA-D8-Series:16767] [14] pisoFoam[0x449fb5]
[asus-Z10PA-D8-Series:16767] *** End of error message ***
--------------------------------------------------------------------------
mpirun noticed that process rank 1 with PID 16767 on node asus-Z10PA-D8-Series exited on signal 6 (Aborted).
--------------------------------------------------------------------------
I set np=2 because the K10 has two GK104 GPUs inside. Through Nvidia's performance monitoring tool NVVP, I can see that only one GK104 is running in the single-thread test.
Does anybody know how to use multiple GPUs in one node, or how to use MPI across multiple GPU nodes?
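One pattern worth trying for one-rank-per-GPU runs is a small wrapper script that pins each MPI rank to its own device. This is a generic Open MPI technique, not something RapidCFD documents: the `OMPI_COMM_WORLD_LOCAL_RANK` variable is exported by Open MPI to each launched process, and the script name here is made up:

```shell
#!/bin/bash
# gpu-bind.sh (hypothetical name): map each local MPI rank to one GPU,
# so rank 0 sees GPU 0 and rank 1 sees GPU 1 on the K10.
# Open MPI exports OMPI_COMM_WORLD_LOCAL_RANK for every launched process.
export CUDA_VISIBLE_DEVICES=${OMPI_COMM_WORLD_LOCAL_RANK}
exec "$@"
```

Usage would be `mpirun -np 2 ./gpu-bind.sh pisoFoam -parallel`, so each rank only ever sees "its" GPU as device 0. Whether RapidCFD then behaves correctly on the masked device is a separate question.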


Quote:
Originally Posted by mesh-monkey View Post
I get a similar issue - I believe it is related to the cuda compute capability (CC, CUDA_ARCH or sm_xx). RapidCFD by default is designed to run using CC 3.0 - mostly cards with Kepler chips (see listing here). Your Quadro 4000, like my Quadro 6000, has a Fermi chip and is CC 2.0.

There was a previous pull request to adjust the code to make it work, but it appears to have been abandoned.

The issue comes down to the difference in how CC 3.0 handles texture objects. The file in question is src/OpenFOAM/containers/Lists/gpuList/textures.H

I have tried unsuccessfully to modify the code to make it work, however I'm well over my head programming-wise.
Any cuda-programming geniuses around that could help?

Thanks, Tom
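A related sanity check for the CC mismatch described above is to make sure the nvcc arch flag matches the card. The wmake rules path below is an assumption (grep for `-arch` if your tree differs), and note that lowering the target to sm_20 may still fail on Fermi for the texture-object reasons just discussed:

```shell
# Find where the compute-capability target is hard-coded (path assumed):
grep -rn -- '-arch=sm_' wmake/rules/linux64Nvcc/
# For a Fermi (CC 2.0) card, lowering the target would look like:
sed -i 's/-arch=sm_30/-arch=sm_20/' wmake/rules/linux64Nvcc/c++
# A full wclean + Allwmake is needed afterwards for the flag to take effect.
```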
tripodtwostep is offline   Reply With Quote

Old   August 29, 2016, 12:41
Default Running slow on Titan Black
  #22
New Member
 
Per Jørgensen
Join Date: Mar 2012
Posts: 17
Hi,

I have compiled RapidCFD and am able to run the (modified) tutorials...
But they are 10 times slower on a Titan Black than running single-threaded on the CPU :-(
Could I have accidentally turned on some debugging, am I missing some configuration of the GPU, or did I modify the tutorials wrongly? I tried pimpleFoam/pitzDaily and icoFoam/cavity.

Thanks in advance

Best regards,

Per
perjorgen is offline   Reply With Quote

Old   September 9, 2016, 19:20
Default
  #23
Super Moderator
 
Bruno Santos
Join Date: Mar 2009
Location: Lisbon, Portugal
Posts: 10,125
Blog Entries: 39
Quote:
Originally Posted by perjorgen View Post
I tried pimpleFoam/pitzDaily and icoFoam/cavity
Quick answer: When it comes to testing with many cores/streams/threads, you need to think "big". For example, with traditional CPU based computing, the rule of thumb is roughly a minimum of 50000 cells per core/processor/subdomain.

I don't know how much RAM it equates to on a GPU, but the rule of thumb I use for CPUs is that 1 million cells (hexahedral-dominant mesh) requires 500 MB to 1 GB of RAM, depending on the solver and the number of equations.

Therefore, the default cavity and pitzDaily cases have on the order of 100 or 1000 cells... that would be like using a shotgun (the Titan) to kill an ant (the small tutorial cases).

So, with a Titan GPU, you should aim at something considerably bigger than those small tutorial cases.
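The two rules of thumb above can be turned into a quick back-of-the-envelope check. The numbers are the CPU-side estimates quoted here (0.5-1 GB per million hex-dominant cells, 50k cells per subdomain), not measured GPU figures:

```shell
# Rough sizing check from the rules of thumb above:
cells=2500000   # example mesh size
nsub=2          # example number of subdomains
awk -v c="$cells" -v n="$nsub" 'BEGIN {
    printf "RAM estimate : %.0f-%.0f MB\n", c/1e6*500, c/1e6*1000
    printf "min cells for %d subdomains: %d\n", n, n*50000
}'
```

For the 2.5M-cell example this prints a 1250-2500 MB RAM estimate and a 100000-cell minimum for two subdomains, so a 100- or 1000-cell tutorial case is orders of magnitude too small to show any GPU benefit.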
nnunn likes this.
__________________

Last edited by wyldckat; September 9, 2016 at 19:21. Reason: fixed a typo
wyldckat is offline   Reply With Quote

Old   September 11, 2016, 12:47
Default
  #24
New Member
 
Join Date: Jun 2016
Location: Germany
Posts: 2
Hi,

I also tried to install RapidCFD in several ways as described here, but I always get error messages...

At first I tried to install it as described in the GitHub wiki, but I got error messages like the following (see also the attached file log.make):
Code:
UOPwrite.dep:134: recipe for target 'Make/linux64NvccDPOpt/UOPwrite.o' failed
make: *** [Make/linux64NvccDPOpt/UOPwrite.o] Error 1
or
Code:
could not open file mpi.h for source file UOPwrite.C due to No such file or directory
When I patched the math_functions.hpp file for CUDA 7.5 as described by aee, my desktop was totally messed up (split in half, etc.)...
Quote:
Usage:
math_functions.hpp is found in /usr/local/cuda-7.5/targets/x86_64-linux/include/ (default install) or wherever you installed Cuda 7.5.

If in the default location, you will need root or sudo access for the following steps:
1) Place the patch file in the same directory.
2) Copy the original math_functions.hpp to math_functions.hpp.org and/or move it someplace safe.
3) Execute:
patch < Cuda7.5_math_functions.hpp.patch
I luckily did a system backup before, which I could restore... Afterwards I looked again at the log.make file from the first try and did not find any error message related to cuda math_functions, so patching the file would not help in my case...
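For anyone else attempting that patch, a sequence that keeps a restorable backup of the header (assuming the default CUDA 7.5 include path from the quote above) might look like:

```shell
cd /usr/local/cuda-7.5/targets/x86_64-linux/include
sudo cp math_functions.hpp math_functions.hpp.org      # keep a backup first
sudo patch < Cuda7.5_math_functions.hpp.patch          # apply the patch
# If anything breaks afterwards, restore the original:
#   sudo cp math_functions.hpp.org math_functions.hpp
```

This only protects the patched header, of course; it is no substitute for a full system backup if the CUDA install itself misbehaves.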


Like aee, I tried to set the MPI to SYSTEMOPENMPI (from OPENMPI), as it was set in my OpenFOAM installation (v3.0+), but then I get lots of output like this (see also the attached file log.make_systemopenmpi):
Code:
wmakeLnInclude: linking include files to ./lnInclude
Making dependency list for source file UOPwrite.C
could not open file omp.h for source file UOPwrite.C due to No such file or directory
Can anybody please help me to install RapidCFD?

Some information about my setup:
  • Ubuntu 16.04 Desktop x64;
  • gcc (Ubuntu 5.4.0-6ubuntu1~16.04.2) 5.4.0 20160609
  • CUDA version 7.5
  • Quadro M4000M
  • OpenMPI 1.10.0 in folder /home/openmpi
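The "could not open file mpi.h" message typically means wmake never received the MPI include directory. As a hedged starting point (the /home/openmpi path comes from the setup above, and the variable name follows OpenFOAM's usual convention):

```shell
# Ask the Open MPI compiler wrappers where their headers actually live:
mpicc --showme:incdirs
# Then make sure the environment points at that same installation
# before running Allwmake (example path from the setup above):
export MPI_ARCH_PATH=/home/openmpi
export PATH=$MPI_ARCH_PATH/bin:$PATH
export LD_LIBRARY_PATH=$MPI_ARCH_PATH/lib:$LD_LIBRARY_PATH
```

If `mpicc --showme:incdirs` reports a different directory than the one wmake is using, that mismatch is the first thing to fix.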
Attached Files
File Type: zip log.make.txt.zip (30.6 KB, 0 views)
File Type: zip log.make_sytemopenmpi.zip (8.7 KB, 0 views)
LisaMarie is offline   Reply With Quote

Old   September 12, 2016, 14:03
Default
  #25
New Member
 
Join Date: Jun 2016
Location: Germany
Posts: 2
I was finally able to kind of compile RapidCFD. (foamInstallationTest failed)
Even though I had two error messages remaining, I could call solvers like icoFoam.
Code:
UOPwrite.dep:168: recipe for target 'Make/linux64NvccDPOptSYSTEMOPENMPI/UOPwrite.o' failed
make: *** [Make/linux64NvccDPOptSYSTEMOPENMPI/UOPwrite.o] Error 1
Code:
UIPread.dep:168: recipe for target 'Make/linux64NvccDPOptSYSTEMOPENMPI/UIPread.o' failed
make: *** [Make/linux64NvccDPOptSYSTEMOPENMPI/UIPread.o] Error 1
Trying to run a case with ~6 million cells, the output of simpleFoam stopped after the header, but the solver was still kind of running in the background (judging by the nvidia-smi output).
After a while I stopped the solver, because I was no longer expecting anything to happen... (running in parallel on 8 CPU cores with OF v1606 did not take that long)
Output:
Code:
/*---------------------------------------------------------------------------*\
| RapidCFD by simFlow (sim-flow.com)                                          |
\*---------------------------------------------------------------------------*/
Build  : dev-9fc614f4e816
Exec   : simpleFoam
Date   : Sep 12 2016
Time   : 19:52:30
Host   : "INVLEVPC121"
PID    : 4999
Case   : /home/NAME/RapidCFD/NAME-dev/run/TestCase
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
The solver was also only using very little of the GPU, as in aee's case:
Quote:
Even for this small case, one CPU core is maxed out, while nvidia-smi shows interfoam is using very little of the GPU. I may have to play with the case files to see what actually runs on the gpu.
Does anybody have an idea why this is happening?

If there are any questions about how I "compiled" RC - just ask
LisaMarie is offline   Reply With Quote

Old   September 14, 2016, 12:10
Default
  #26
New Member
 
Per Jørgensen
Join Date: Mar 2012
Posts: 17
Thanks, I will try testing on a bigger problem :-)

Quote:
Originally Posted by wyldckat View Post
Quick answer: When it comes to testing with many cores/streams/threads, you need to think "big". For example, with traditional CPU based computing, the rule of thumb is roughly a minimum of 50000 cells per core/processor/subdomain.

I don't know how much RAM it equates to on a GPU, but the rule of thumb I use for CPUs is that 1 million cells (hexahedral-dominant mesh) requires 500 MB to 1 GB of RAM, depending on the solver and the number of equations.

Therefore, the default cavity and pitzDaily cases have 100 or 1000 cells... that would be like using a shotgun (the Titan) to kill an ant (the small tutorial cases) .

So, with a Titan GPU, you should aim to something fairly bigger than those small tutorial cases.
UPDATE: Now I tried using it on a 2.5M-cell problem, and it appears to run at the same speed as a single CPU thread :-(

Does anybody have a test case that gives a significant speed up?

Thanks,

Per

Last edited by perjorgen; September 17, 2016 at 09:43.
perjorgen is offline   Reply With Quote

Old   November 18, 2016, 13:22
Default
  #27
New Member
 
Martin Simon
Join Date: Feb 2015
Posts: 5
Did you get the issue with the solver not starting resolved?

I'm having the same trouble...

----

How do you start using solvers when RCFD has been installed?
Using sonicFoam produces only headers, but no real action.
I can't use blockMesh under RCFD either... :/
Thanks!

Last edited by wyldckat; November 20, 2016 at 14:10. Reason: merged posts done a few minutes apart
martinsimon is offline   Reply With Quote

Old   December 8, 2016, 06:10
Default swak4Foam for RapidCFD
  #28
Member
 
Johannes Martens
Join Date: Jun 2015
Posts: 47
Hi all,
does anyone have any experience in using swak4Foam in RapidCFD?
Anyone managed to compile this yet?
Thanks for any answers!
Best regards
Johannes
KingKraut is offline   Reply With Quote

Old   December 9, 2016, 05:19
Default Again RapidCFD swak4Foam
  #29
Member
 
Johannes Martens
Join Date: Jun 2015
Posts: 47
Edit: I have opened a new thread for this, since this is not only installation of RapidCFD. Thread

__________________________________________________ ____________________________________

Hi all,

as I said I am currently trying to compile swak4Foam with RapidCFD. The system I am using is Scientific Linux 6.7.

I get the following error, when running ./Allwmake from within the folder swak4Foam:
Quote:
make: *** No rule to make target `FieldValueExpressionParser.dep', needed by `Make/linux64NvccDPOpt/dependencies'. Stop.
Parser library did not compile OK. No sense continuing as everything else depends on it
I attached the full output of the command and the log from ./maintenanceScripts/compileRequirements.sh, too. Apart from that, I explicitly compiled m4 version 1.4.17, as described here: http://openfoamwiki.net/index.php/In...g_dependencies.
So the settings seem to be alright, I think.

I googled the error, but all I could find was the instructions I had already tried...
I am a little stuck here right now, but to me the error seems not to be caused by the combination with RapidCFD, but rather by some options I have not yet set right in swak4Foam.

Can anybody point me into the right direction? Any idea, which way I could go from here?

Thanks a lot for looking into this!!

Best regards
Johannes
Attached Files
File Type: txt log_compileRequirements.txt (72.3 KB, 3 views)
File Type: txt log_Allwmake.txt (4.7 KB, 2 views)
KingKraut is offline   Reply With Quote

Old   July 22, 2017, 12:23
Default
  #30
Senior Member
 
Hisham's Avatar
 
Hisham Elsafti
Join Date: Apr 2011
Location: Braunschweig, Germany
Posts: 253
Blog Entries: 10
Dear all,

I am trying to install RapidCFD but encountered some problems and thought I could find help here!

System: Ubuntu 16.04,
nvcc: NVIDIA (R) Cuda compiler driver
Cuda compilation tools, release 8.0, V8.0.61

I have GeForce GT 430 (just for testing purposes before going to a larger machine) with Nvidia drivers.

System MPI: Open MPI 1.10.2
gcc (Ubuntu 5.4.0-6ubuntu1~16.04.4) 5.4.0 20160609

I have all prerequisites of OF 2.3.1.

When I start the Allwmake script, some libraries get compiled and others don't, complaining about MPI, even though this message can be found in the output:

Code:
Note: ignore spurious warnings about missing mpicxx.h headers
I stopped the compilation process because most applications do not link against the uncompiled libraries. I herewith attach the log file. I would appreciate any help with compiling on Ubuntu 16.04, as well as pointers to any available manual or instructions.

Thanks in advance!

Hisham
Attached Files
File Type: zip log.wmake.zip (38.9 KB, 3 views)
martinsimon likes this.
Hisham is offline   Reply With Quote

Old   August 25, 2017, 03:10
Default
  #31
New Member
 
Ashish Magar
Join Date: Jul 2016
Location: Mumbai, India
Posts: 24
Greetings everyone.

I am encountering the same problems as @Hisham.

Code:
1 error detected in the compilation of "/tmp/tmpxft_00002286_00000000-5_sonicFoam.cpp4.ii".
sonicFoam.dep:3: recipe for target 'Make/linux64NvccDPOpt/sonicFoam.o' failed
make[1]: *** [Make/linux64NvccDPOpt/sonicFoam.o] Error 2
make[1]: Target '/home/magar/OpenFOAM/RapidCFD-dev/platforms/linux64NvccDPOpt/bin/sonicFoam' not remade because of errors.
make[1]: Leaving directory '/home/magar/OpenFOAM/RapidCFD-dev/applications/solvers/compressible/sonicFoam'
Most of the solvers are not compiled.

RapidCFD was tested with CUDA versions 6.5 and 7.5. I don't know if it has something to do with our CUDA version.



@martinsimon seems to have compiled RapidCFD, but errors like these:
Quote:
Using sonicFoam produces only headers, but no real action.
I can't use blockMesh under RCFD either... :/
would leave us helpless, forcing a switch back to basic OF (without CUDA).

@LisaMarie was able to compile it recently.
Quote:
I was finally able to kind of compile RapidCFD. (foamInstallationTest failed)
Even though I had two error messages remaining, I could call solvers like icoFoam.

If there are any questions about how I "compiled" RC - just ask
Yes. It would be very kind if you could help us.


I ask the members and moderators for help, because newer CUDA versions seem to have some disagreement with RapidCFD.


Thanks for any help.
martinsimon likes this.
ashishmagar600 is offline   Reply With Quote

Old   October 5, 2017, 16:53
Default [Solved]
  #32
New Member
 
Ashish Magar
Join Date: Jul 2016
Location: Mumbai, India
Posts: 24
Hello Everyone, glad to be back here.

So I contacted the developer of RC, and we managed to come up with a new commit on my local branch on GitHub.

The errors about solvers not compiling were traced to uncompiled sources for reading and formatting STL files. This was due to the updated version of flex on Linux distros.

Please make the following change in:

$HOME/RapidCFD/RapidCFD-dev/src/triSurface/triSurface/interfaces/STL/readSTLASCII.L#L58
$HOME/RapidCFD/RapidCFD-dev/src/surfMesh/surfaceFormats/stl/STLsurfaceFormatASCII.L#L53

Code:
 
from
#if YY_FLEX_SUBMINOR_VERSION < 34
to
#if YY_FLEX_MINOR_VERSION < 6 && YY_FLEX_SUBMINOR_VERSION < 34
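To see why the original guard misfires: flex 2.6.0 has minor version 6 and subminor 0, so `YY_FLEX_SUBMINOR_VERSION < 34` is true and the legacy pre-2.5.34 code path is wrongly enabled. The logic of the two guards, sketched with shell arithmetic (version numbers assumed for flex 2.6.0):

```shell
MINOR=6; SUBMINOR=0    # e.g. what 'flex --version' reports for flex 2.6.0
# Old guard: only looks at the subminor number.
if [ "$SUBMINOR" -lt 34 ]; then
    echo "old guard: legacy path (wrong for flex 2.6.x)"
fi
# Fixed guard: legacy path only for genuinely old flex (minor < 6 AND subminor < 34).
if [ "$MINOR" -lt 6 ] && [ "$SUBMINOR" -lt 34 ]; then
    echo "fixed guard: legacy path"
else
    echo "fixed guard: modern path (correct for flex 2.6.x)"
fi
```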
Thanks to Daniel Jasiński (daniel-jasinski on GitHub) for all the help.

Cheers.
ashishmagar600 is offline   Reply With Quote

Old   November 29, 2017, 11:40
Default
  #33
New Member
 
Jurado
Join Date: Nov 2017
Posts: 10
Hello everyone,

I am trying to install RapidCFD on Ubuntu 16.04 with CUDA 9.0. However, I am encountering an error that I can't manage to solve.

I have attached the compile log.

If someone could help me it would be really appreciated.

Thanks.
Attached Files
File Type: txt log_Error_Compilation_RapidCFD.txt (80.0 KB, 9 views)
Jurado is offline   Reply With Quote
