Fix pushed to repository
Hi all,
I fixed the issues mentioned in the posts above (Allwmake logic, messed-up header sections), tested, and pushed to the repository (commit: 55a148542d). To get the fix, run "git pull", then clean and recompile. Best regards, Dominik
Quote:
Sounds good, I will give it a try. Thanks again for your hard work. Best regards, Christian
Hi everyone!
I am also trying to use CUDA with foam-extend-3.0. I have a working installation of CUDA (in /opt/cuda), and installed 3.0. However, when I try to run the Allwmake.firstInstall script, I get the error described by Dominik: Quote:
Code:
# System installed CUDA
export CUDA_SYSTEM=1
export CUDA_DIR=/opt/cuda
export CUDA_BIN_DIR=$CUDA_DIR/bin
export CUDA_LIB_DIR=$CUDA_DIR/lib
export CUDA_INCLUDE_DIR=$CUDA_DIR/include
export CUDA_ARCH=sm_20

I used the CUDA_ARCH setting present in etc/prefs.sh-EXAMPLE. The device I currently use is an old Quadro FX 570. Best regards, Pierre
Hi Pierre,
the message pops up only if CUDA_ARCH is not set. However, you say it is specified in prefs.sh. Can you please verify that prefs.sh is actually setting the variable? Steps:
1) Source the bashrc: . etc/bashrc (from the installation root directory, e.g. ~/foam/foam-extend-3.0/)
2) Check whether CUDA_ARCH is set: env | grep CUDA
Please post the output of step 2). Best regards, Dominik
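Dominik's check can be run in one go. This is a sketch only: the exported values below are example assumptions standing in for whatever your prefs.sh sets (on a real install they come from sourcing etc/bashrc, which reads etc/prefs.sh):

```shell
# Sketch: simulate what sourcing etc/bashrc should leave in the
# environment; the values here are example assumptions.
export CUDA_SYSTEM=1
export CUDA_DIR=/opt/cuda
export CUDA_ARCH=sm_20

# Step 2 from the post: list every variable whose name starts with CUDA.
env | grep '^CUDA'
```

If `env | grep '^CUDA'` prints nothing, prefs.sh was not picked up when etc/bashrc was sourced.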
Dear Dominik,
I'm very sorry for the late answer. It appeared that I had an issue with my swak4foam installation. After reinstalling it I managed to run the Allwmake.firstInstall script without getting the previous error. However, I still cannot compile the CUDA solver. When I run the Allwmake I get the following result: Code:
Found nvcc -- enabling CUDA support. Code:
CUDA_BIN_DIR=/opt/cuda/bin
Pierre
Hi everyone,
I am also trying to use the cudaSolvers of foam-extend-3.0. I am quite new to OpenFOAM, so I set WM_COMPILE_OPTION:=Debug in the foam-extend-3.0/etc/bashrc, because I may need to debug to understand the code better, including the cudaSolvers. I got some errors related to the nvcc options during compilation, as follows: Code:
+ cd cudaSolvers

I have in my etc/prefs.sh: Code:
export CUDA_SYSTEM=1
export CUDA_DIR=/home/zzhong/cuda_4.2
export CUDA_BIN_DIR=$CUDA_DIR/bin
export CUDA_LIB_DIR=$CUDA_DIR/lib64
export CUDA_INCLUDE_DIR=$CUDA_DIR/include
export CUDA_ARCH=sm_20

The GPU attached to the node is a Tesla C2050. There are no errors when compiling with WM_COMPILE_OPTION:=Opt, and libcudaSolvers.so is generated into lib/linux64GccDPOpt. Best regards, Zhong
Dear Zhong,
Does the compilation work for you with Opt instead of Debug? Best regards, Dominik
Hi Dominik,
Yes, the compilation with Opt went well, without errors. Best regards, Zhong
Dear Zhong,
now that you have the CUDA libraries working, do you find any advantage in running cases with CUDA? Can you please tell us about your experience? Thank you. F.F.
Hi Federico,
To be honest, I haven't done any serious tests yet, sorry for that. I will let you know once I have some results. Actually, I am currently still experimenting with the cufflink library rather than the cudaSolvers included in foam-extend-3.0, because at the moment I am interested in running the GPU solver in parallel, and I am not sure whether the cudaSolvers in foam-extend-3.0 can run in parallel. I am trying to figure out what domain decomposition method is used and how it is implemented in foam; obviously some information (interfaces?) is exchanged between sub-domains during each iteration of the linear solver. Any ideas? Best regards, Zhong
Quote:
I had installed everything successfully, but when I try to run a case I get a problem. Quote:
Many thanks, best, Mahdi
Hi Mahdi,
This looks like an error you get when the NVIDIA drivers are not installed properly. I have no experience in installing CUDA myself, but check whether you have installed the NVIDIA CUDA drivers properly. http://forums.udacity.com/questions/...ror-in-maincpp Cheers
Quote:
Dear Krishna,
I solved that. The problem was related to the gcc version when compiling the CUDA solvers: I was using gcc 4.6 for installing the CUDA driver, but I didn't switch afterwards to gcc 4.8. However, I have another question. My graphics card is a GeForce 8400 GS. Is there a reason I was not able to use another CUDA arch like sm_20? Only sm_10 worked in my case. And I am wondering whether it is somehow possible to select the number of GPUs, or to run in parallel with GPUs? Since I guess foam-extend is using the cufflink library it should be possible, but I was not able to test even the cufflink library separately, as I guess my Ubuntu and NVIDIA driver are more recent than the ones it was tested with. Best, Mahdi
Hi,
I have not used the multi-GPU option; currently I am using only the CPU. I googled and found this; hope it helps. https://code.google.com/p/cufflink-l...ingStartedPage Cheers
Hi All
After a long time I am now able to run fe31 with CUDA. Now it's really easy: install CUDA, put the cusp folder into the cuda/include folder, and set up the CUDA paths in OF. Done. Should we start a thread "How to use CUDA?" Thanks to all, Christian
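Christian's recipe can be sketched as shell commands. The paths are assumptions based on a typical install (CUDA in /opt/cuda, a cusp checkout in ~/cusplibrary); adjust them to your system:

```shell
# Sketch only: both paths below are assumptions, not fixed locations.
# 1) Put the cusp headers next to the CUDA headers.
cp -r ~/cusplibrary/cusp /opt/cuda/include/

# 2) Point foam-extend at CUDA in etc/prefs.sh, then re-source etc/bashrc:
#    export CUDA_SYSTEM=1
#    export CUDA_DIR=/opt/cuda
#    export CUDA_ARCH=sm_20
```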
Dear colleagues,
did anyone succeed in compiling the cudaSolvers for foam-extend-3.1? During compilation I get an error that there is no rule for 'cudaCG/cgDiag.dep'. Indeed, there is no such file; there is only cgDiag.cu. The CUDA variables in prefs.sh are set properly. Best regards, Aleksey.
I've found out how to compile the cudaSolvers. It turned out that there are no nvcc rules for gcc46; nvcc rules exist only for gcc. So I compiled foam with the system compiler.
1. I downloaded the cusp library and modified the options file as mentioned earlier in this thread.
2. In all source files of cudaSolvers I changed "#include <cusp/blas.h>" to "#include <cusp/blas/blas.h>".
3. I changed the path for the CUDA libs from the lib to the lib64 folder in prefs.sh.
After that, everything compiled OK. Best regards, Aleksey.
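Step 2 above can be done mechanically with sed. The sketch below runs against a throwaway directory, since the real location of the cudaSolvers sources depends on your tree:

```shell
# Demo of the include rewrite on a throwaway file (the real sources
# live under the cudaSolvers directory of the foam-extend tree).
mkdir -p /tmp/cudaSolvers-demo
printf '#include <cusp/blas.h>\n' > /tmp/cudaSolvers-demo/cgDiag.cu

# Rewrite the old cusp include in every .cu file under the directory.
find /tmp/cudaSolvers-demo -name '*.cu' \
  -exec sed -i 's|#include <cusp/blas.h>|#include <cusp/blas/blas.h>|' {} +

cat /tmp/cudaSolvers-demo/cgDiag.cu
# prints: #include <cusp/blas/blas.h>
```

Note that `sed -i` as used here is the GNU sed form; on BSD/macOS sed the flag takes an argument.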
Hello, Aleksey_R,
Can you tell me how to compile foam-extend with the system compiler? I followed your steps 1, 2, 3, but I still cannot figure out what the problem is. When I use "wmake libso" to compile the cudaSolvers, I get some errors:
/usr/local/cuda/include/cusp/system/cuda/detail/multiply/coo_flat_spmv.h:40: error: 'threadIdx' was not declared in this scope
/usr/local/cuda/include/cusp/system/cuda/detail/graph/b40c/graph/bfs/enactor_hybrid.cuh:496: error: expected primary-expression before '>' token
...
Looking forward to your reply. Best regards, zhihongliu.
Hello!
What version of CUDA do you use? I had successful experience only with v. 6.0. |
Hi Everyone,
I successfully installed foam-extend on my Arch Linux distro. I also installed the NVIDIA driver and CUDA (the test samples work). All CUDA variables in the prefs.sh file are correctly set, and the cusp libraries seem to be properly installed in the CUDA include directory. It seems the only remaining step to install the cudaSolvers is running ./Allwmake in the cudaSolvers directory. But here is the error message:
nvcc fatal : redefinition of argument 'machine'
nvcc fatal : redefinition of argument 'machine'
nvcc fatal : redefinition of argument 'machine'
Does anybody get this error message when compiling the cudaSolvers? Regards, jipai