Problems using setWaveField in parallel
Hi all,
I am wondering whether it is possible to run setWaveField in parallel, because I get an error about a patch missing on some processors. This is of course expected, as I cannot control how the domain is divided so that every processor includes part of all patches (in my case the boat). What I do not understand is why all patches should be present on all processors. Thanks for any advice. Regards, Carlos.
Hi Carlos,
I am running a case in parallel where I use the scotch decomposition method, and setWaveField works normally. What is the error message exactly?
Hi Pablo,
Thanks for your reply. Are you running snappy in parallel, or just waveFoam in parallel? The difference is that in the first case (my situation) I want to run snappy in parallel to save time, but then the domain is already decomposed before running setWaveField. That is where the error comes from:

Build   : 2.3.1-262087cdf8db
Exec    : setWaveField -parallel
Date    : Mar 31 2015
Time    : 15:44:20
Host    : "CAE-1204"
PID     : 26807
Case    : /home/carlos/Documents/Last-CFD/CabHlf_231_Lay
nProcs  : 4
Slaves  : 3 ( "CAE-1204.26808" "CAE-1204.26809" "CAE-1204.26810" )
Pstream initialized with:
    floatTransfer      : 0
    nProcsSimpleSum    : 0
    commsType          : nonBlocking
    polling iterations : 0
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

Create time
Create mesh for time = 0

// using new solver syntax: p_rgh
{ solver GAMG; tolerance 1e-07; relTol 0; smoother DIC; nPreSweeps 0; nPostSweeps 2; nFinestSweeps 2; cacheAgglomeration true; nCellsInCoarsestLevel 10; agglomerator faceAreaPair; mergeLevels 1; }
// using new solver syntax: p_rghFinal
{ solver GAMG; tolerance 1e-08; relTol 0; smoother DIC; nPreSweeps 0; nPostSweeps 2; nFinestSweeps 2; cacheAgglomeration true; nCellsInCoarsestLevel 10; agglomerator faceAreaPair; mergeLevels 1; }
// using new solver syntax: U
{ solver PBiCG; preconditioner DILU; tolerance 1e-09; relTol 0; }
// using new solver syntax: UFinal
{ solver PBiCG; preconditioner DILU; tolerance 1e-09; relTol 0; }
// using new solver syntax: gamma
{ solver PBiCG; preconditioner DILU; tolerance 1e-07; relTol 0; }

Reading waveProperties
Reading g
Reading field alpha

[1] --> FOAM FATAL IO ERROR:
[1] Cannot find patchField entry for hull
[1] file: /home/carlos/Documents/Last-CFD/CabHlf_231_Lay/processor1/0/alpha.water.boundaryField from line 26 to line 54.
[1] From function GeometricField<Type, PatchField, GeoMesh>::GeometricBoundaryField::readField(const DimensionedField<Type, GeoMesh>&, const dictionary&)
[1] in file /home/carlos/OpenFOAM/OpenFOAM-2.3.1/src/OpenFOAM/lnInclude/GeometricBoundaryField.C at line 209.
[1] FOAM parallel run exiting

(Processors 0, 2 and 3 report the same "Cannot find patchField entry for hull" error for their alpha.water files: processor2 from line 26 to line 54, processor0 and processor3 from line 26 to line 49.)

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD with errorcode 1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes. You may or may not see output from other processes, depending on exactly when Open MPI kills them.
--------------------------------------------------------------------------
mpirun has exited due to process rank 1 with PID 26808 on node CAE-1204 exiting without calling "finalize". This may have caused other processes in the application to be terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[CAE-1204:26806] 3 more processes have sent help message help-mpi-api.txt / mpi-abort
[CAE-1204:26806] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages

Thanks again for your attention. Carlos.
Hi Carlos,
If you are missing boundaries on one or more processors, I wonder whether you can execute e.g. setFields or interFoam at all. This type of error seems to point to a missing entry in the boundary file, and it should not be solved at the waves2Foam level. Kind regards, Niels
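A quick way to see which of the decomposed field files actually contain a patchField entry for the hull patch is to grep them. A minimal shell sketch, demonstrated on synthetic stand-in files so the commands are self-contained (in a real case you would point the final grep at processor*/0/alpha.water in the case directory, and "hull" is the patch name assumed here):

```shell
# Synthetic stand-ins for two processors' 0/alpha.water files: one with a
# patchField entry for "hull", one without (as after a parallel snappy run).
mkdir -p demo/processor0/0 demo/processor1/0
printf 'boundaryField\n{\n    hull\n    {\n        type zeroGradient;\n    }\n}\n' \
    > demo/processor0/0/alpha.water
printf 'boundaryField\n{\n    outlet\n    {\n        type zeroGradient;\n    }\n}\n' \
    > demo/processor1/0/alpha.water

# List only the field files that contain the hull entry; processors missing
# from this list are the ones that trigger the FATAL IO ERROR.
grep -l 'hull' demo/processor*/0/alpha.water   # prints demo/processor0/0/alpha.water
```

Comparing that list against the processors whose constant/polyMesh/boundary actually holds hull faces would show whether the field files and the decomposed mesh are out of sync.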
Hi Niels, thanks for your reply.
The case runs perfectly in parallel if I decompose it after executing setWaveField. The missing patch is the boat, which is not present on all processors, just as the inlet is only on processor 1 and the outlet only on processor 4. The question is why the boat is expected to be present on all processors. Perhaps my mistake is in the concept of applying setWaveField to a decomposed case. I found a thread where somebody reconstructs the case after snappy, executes setWaveField and decomposes the case again to run in parallel. Also, the DTCHull case in the OF tutorials runs snappy in serial and decomposes after setFields. Does this mean that everybody is running snappy in serial, or is everybody applying the mentioned method of decomposing twice? Carlos.
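For reference, the decompose-twice workflow from that thread would look roughly like this. This is a sketch only, assuming a standard OpenFOAM/waves2Foam environment, four processors and an already prepared case directory; I have not verified it here:

```shell
# Decompose-twice workaround (sketch): mesh in parallel, set the wave
# field on the reconstructed mesh in serial, then decompose again.
mpirun -np 4 snappyHexMesh -parallel -overwrite
reconstructParMesh -constant       # merge the processor meshes
rm -rf processor*                  # discard the first decomposition
setWaveParameters
setWaveField                       # initialise the wave field in serial
decomposePar                       # decompose again for the solver
mpirun -np 4 waveFoam -parallel
```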
What happens with the OF tutorial if you try to decompose it and subsequently execute setFields?
Secondly, could you please upload the boundary files for all processors? To me, it still sounds as if snappy has a bug, and the loading of the fields in set(Wave)Fields fails due to the lack of these boundaries in the fields. Kind regards, Niels
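For the first point, the test I have in mind on a stock tutorial could be sketched as follows (the case path and processor count are assumptions, and this of course requires an OpenFOAM installation):

```shell
# Decompose a stock interFoam tutorial first, then try setFields on the
# decomposed case, to see whether the same missing-patch error appears.
cd $FOAM_RUN/damBreak
blockMesh
decomposePar
mpirun -np 4 setFields -parallel
```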
As for my case, I ran a simple submerged plate that I have used for viscous force measurements, both in serial and in parallel, and attached the files you requested: https://www.dropbox.com/sh/ka4vr959g...VqEyM43Oa?dl=0 I believe it is not possible to run setFields in parallel either, but I could not find a suitable example; I will see if I have time to make a simple test case. Thanks and regards, Carlos.
Ok, some more information.
If I run the damBreak4phase tutorial, it runs perfectly and the fields are produced in parallel (this tutorial does not use snappyHexMesh). The DTCHull tutorial gives a warning and fails to produce the fields (this tutorial does use snappyHexMesh). The files are attached: https://www.dropbox.com/sh/ka4vr959g...VqEyM43Oa?dl=0 So Niels, I guess you are right and the problem is related to snappyHexMesh. Regards, Carlos.
Good morning all,
@Carlos: I have tested the waveFlume tutorial in a decomposed fashion, and I do not have any problems using setWaveField, so I will consider this resolved on the waves2Foam side. I suggest that you report this as a bug through the bug-reporting system. Kind regards, Niels
Hi Niels,
Thank you, I have reported it. Regards, Carlos.
Hello folks
I have an issue with setWaveField not finishing. I do not think it is exactly the same issue as posted here, although I am not sure, as I cannot find the error anywhere. When I run blockMesh, I can easily run both setWaveParameters and setWaveField. But when I run snappyHexMesh to obtain my actual model mesh and then run setWaveParameters and subsequently setWaveField, setWaveField stops at "Setting the wave field ..." and never finishes. I guess the issue is related to the mesh; however, I cannot figure out what it is exactly, as there are no errors in the terminal.

/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2012                                  |
|   \\  /    A nd           | Website:  www.openfoam.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build   : 79e353b8-20201222 OPENFOAM=2012
Arch    : "LSB;label=32;scalar=64"
Exec    : setWaveField
Date    : Sep 13 2021
Time    : 09:56:32
Host    : tony-VirtualBox
PID     : 11968
I/O     : uncollated
Case    : /home/tony/OpenFOAM/tony-v2012/applications/utilities/waves2Foam/tutorials/Badebro/Badebro_Separat_3milOKMESH
nProcs  : 1
trapFpe : Floating point exception trapping enabled (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster (fileModificationSkew 5, maxFileModificationPolls 20)
allowSystemOperations : Allowing user-supplied system call operations

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

Create time
Create mesh for time = 0

// using new solver syntax: pcorr
{ solver GAMG; tolerance 1e-07; relTol 0; smoother DIC; nPreSweeps 0; nPostSweeps 2; nFinestSweeps 2; cacheAgglomeration true; nCellsInCoarsestLevel 10; agglomerator faceAreaPair; mergeLevels 1; }
// using new solver syntax: pcorrFinal
{ solver GAMG; tolerance 1e-07; relTol 0; smoother DIC; nPreSweeps 0; nPostSweeps 2; nFinestSweeps 2; cacheAgglomeration true; nCellsInCoarsestLevel 10; agglomerator faceAreaPair; mergeLevels 1; }
// using new solver syntax: p_rgh
{ solver GAMG; tolerance 1e-07; relTol 0; smoother DIC; nPreSweeps 0; nPostSweeps 2; nFinestSweeps 2; cacheAgglomeration true; nCellsInCoarsestLevel 10; agglomerator faceAreaPair; mergeLevels 1; }
// using new solver syntax: p_rghFinal
{ solver GAMG; tolerance 1e-08; relTol 0; smoother DIC; nPreSweeps 0; nPostSweeps 2; nFinestSweeps 2; cacheAgglomeration true; nCellsInCoarsestLevel 10; agglomerator faceAreaPair; mergeLevels 1; }
// using new solver syntax: U
{ solver PBiCG; preconditioner DILU; tolerance 1e-09; relTol 0; }
// using new solver syntax: UFinal
{ solver PBiCG; preconditioner DILU; tolerance 1e-09; relTol 0; }
// using new solver syntax: gamma
{ solver PBiCG; preconditioner DILU; tolerance 1e-07; relTol 0; }

Reading g
Reading waveProperties
Reading waveProperties
Reading field alpha
Reading field U
Reading field p
Setting the wave field ...
Hi Rasmus,
Have you found out why setWaveField stops running (doing nothing) when using snappyHexMesh? I'm having the same issue!