
November 4, 2019, 05:36 
foam-extend 4.1 release

#1 
Senior Member
Hrvoje Jasak
Join Date: Mar 2009
Location: London, England
Posts: 1,905
Rep Power: 33 
foam-extend 4.1 release

The new version of foam-extend has been released following extensive development and testing and is available for download: https://sourceforge.net/projects/foa...amextend4.1/

The foam-extend project is a fork of the OpenFOAM® open source library for Computational Fluid Dynamics (CFD). It is an open project welcoming and integrating contributions from all users and developers. In total, the release consists of 1450 commits since the last release.

Some major development features:

Block-coupled pressure-velocity solver for steady and transient simulations of incompressible turbulent fluid flow. Fully implicit handling of porosity and MRF in block-coupled solvers.

This is the final release of complete functionality for pressure-based implicit block-coupled solvers for steady and transient incompressible turbulent flows. The pressure and momentum equations are solved in a single linear block with 4x4 coefficients, with full implicit support for multiple frames of reference and porosity. The linear system is solved using the block-coupled AMG solver (see below). The code supports implicit interfaces without transformation, such as GGI. For fully implicit treatment of symmetry plane boundaries, please use the blockSymmPlane boundary condition on velocity. Consistency of p and U boundary conditions is necessary. The code does not support the indeterminate form of pressure matrices, meaning that a zero-gradient boundary condition on the complete periphery of the domain is not allowed: at least a single boundary pressure reference point is required. Consistent treatment of the inletOutlet velocity boundary condition requires the equivalent pressure boundary condition to be specified as outletInlet. The speed-up compared to the segregated solution comes from a significant change in relaxation factors.
A typical relaxation factor on velocity is 0.95; for trivial meshes, academic problems and an appropriate choice of convection discretisation the solver can also operate without relaxation on U, but for industrial cases this is not recommended. Typical relaxation factors on turbulence variables are 0.9 or 0.95, depending on the complexity of the case. Further improvement may be achieved using block-coupled turbulence models (see below).

A significant effect on coupled solver performance is achieved using appropriate linear algebra settings. It is recommended to use the blockAMG solver with the blockSAMG coarsening and ILUC0 smoothers. Expected performance of the coupled solver compared to the segregated solver for the same steady-state case is a factor of 3 in execution time, at the cost of using triple the amount of RAM, due to the storage of the coupled matrix. Parallel scaling of the coupled solver on a large number of processors is significantly better than for the equivalent segregated solver, as the number of MPI messages is reduced, with messages of larger size. The solver is tested to the level of hundreds of millions of cells and thousands of cores.

In transient simulations, the coupled solver gives an advantage over the segregated solver because of its accuracy and because p-U coupling is not dependent on the time-step size or maximum CFL number in the domain. It is recommended for use in large LES-type simulations, where there is a significant difference between the mean and max CFL number. Outer iterations in the transient solver can be enabled but are typically not necessary.
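The solver and relaxation settings described above would typically live in system/fvSolution. The fragment below is only a sketch assembled from this description; the exact keyword names and values are assumptions and should be checked against the pUCoupledFoam tutorial cases shipped with the release. Code:

// Sketch only -- keyword names are illustrative, consult the shipped tutorials.
solvers
{
    Up
    {
        solver          AMG;       // block-coupled AMG ("blockAMG")
        coarseningType  SAMG;      // selective coarsening ("blockSAMG")
        smoother        ILUC0;     // zero fill-in Crout ILU smoother
        tolerance       1e-7;
        relTol          0.01;
    }
}

relaxationFactors
{
    U        0.95;   // typical value quoted above
    k        0.9;    // turbulence variables: 0.9 - 0.95
    epsilon  0.9;
}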
For details of the coupled solver and AMG solver technology we recommend the following references:

Uroić, T., Jasak, H.: Block-selective algebraic multigrid for implicitly coupled pressure-velocity system, Computers & Fluids, 2018
Beckstein, P., Galindo, V., Vukčević, V.: Efficient solution of 3D electromagnetic eddy-current problems within the finite volume framework of OpenFOAM, Journal of Computational Physics, Volume 344, 1 September 2017, Pages 623-646
Uroić, T., Jasak, H., Rusche, H.: Implicitly coupled pressure-velocity solver, OpenFOAM: Proceedings of the 11th Workshop, Springer, 249-267
Fernandes, C., Vukčević, V., Uroić, T., Simoes, R., Carneiro, O.S., Jasak, H., Nobrega, J.M.: A coupled finite volume flow solver for the solution of incompressible viscoelastic flows, Journal of Non-Newtonian Fluid Mechanics, 2019

Immersed Boundary Surface Method. Support for turbulence, dynamic immersed boundary and adaptive polyhedral refinement on immersed boundary meshes.

The new formulation of the Immersed Boundary Method (IBM) is a complete methodology rewrite of the work implemented in foam-extend 3.2 and 4.0. It was shown that the principle of near-body interpolation is not sufficiently powerful for the flexibility and accuracy required for practical engineering simulation. On the suggestion of Dr. Tukovic, the new method performs the actual cutting of the background mesh with the immersed boundary surfaces, modifying internal cells and faces and creating new intersection faces. The Immersed Boundary (IB) faces exist in their own patch and are not present in the face list belonging to the polyMesh. Representation of the IB in the background mesh is achieved by using the intersection faces of the surface mesh and cells of the background mesh. The resolution of the original surface mesh does not influence the accuracy of the IBM: this is influenced only by the background mesh.
For cases of "unclean intersection", such as the surface mesh coinciding with the points or faces of the polyMesh, an error mitigation algorithm is implemented: the Marooney Manoeuvre. This ensures that the cut cell is geometrically closed (the sum of face area vectors for the cell equals the zero vector) under all circumstances. The limiting factor of the IBM is the fact that a single background mesh cell can only be cut once. The limitation is mitigated by the use of adaptive mesh refinement, based on the distance to the IB surface, which is provided as part of the package. The background mesh for the IBM calculation can be of arbitrary type: polyhedral cells are fully supported. The IBM can be combined with other complex mesh operations and interfaces: moving deforming mesh, topological changes and overset mesh.

Post-processing of the immersed patch data is performed separately from the main mesh. Individual VTK files are written for each field in the time directory, due to the limitations of the current VTK export format. The method supports a moving deforming immersed surface, optionally operating on a moving deforming mesh. The IBM implementation operates correctly in parallel on an arbitrary mesh decomposition. Interaction of the IBM and processor boundaries is fully supported.

For static mesh simulations, regular static mesh boundary conditions may be used on IBM patches; however, the surface data for IBM patches will not be exported for post-processing. To achieve this, IBM-specific boundary conditions may be used. The IBM does not carry execution overhead compared to a body-fitted mesh in static mesh cases, beyond the calculation of the original IBM intersection. For dynamic mesh simulations, IBM-specific boundary conditions need to be used in order to handle the interaction of a moving deforming IB and the background mesh, where the number of intersected cells changes during the simulation.
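The geometric-closure requirement above (the outward face area vectors of a cell must sum to the zero vector) can be illustrated with a small self-contained sketch, using a unit cube as a stand-in for a background cell; this is an illustration of the principle only, not the foam-extend cutting code:

```python
import numpy as np

# A closed cell: outward face area vectors of a unit cube sum to zero.
face_areas = np.array([
    [ 1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
    [ 0.0, 1.0, 0.0], [ 0.0,-1.0, 0.0],
    [ 0.0, 0.0, 1.0], [ 0.0, 0.0,-1.0],
])
assert np.allclose(face_areas.sum(axis=0), 0.0)

# Cutting the cell removes (part of) a face; the remaining faces no longer
# close the cell, leaving a residual area vector (an "open cell").
cut_cell = face_areas[:-1]            # drop the face at z = 0
defect = cut_cell.sum(axis=0)         # residual area vector

# The intersection face created by the cut must carry the opposite area
# vector so that the cut cell is geometrically closed again.
closing_face = -defect
closed = np.vstack([cut_cell, closing_face])
assert np.allclose(closed.sum(axis=0), 0.0)
```

The same zero-sum check is what the error mitigation described above has to restore when the intersection is "unclean".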
The best reference for the Immersed Boundary methodology currently publicly available is:

Robert Anderluh: Validation of the Immersed Boundary Surface Method in Computational Fluid Dynamics, Master Thesis, Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, 2019 http://cfd.fsb.hr/wpcontent/uploads...uhMSc_2019.pdf

Further publications are under way.

Overset Mesh Method. New automatic overset mesh fringe calculation algorithms.

Further development of the native implementation of overset mesh includes work on automatic fringe detection and fringe minimisation. The parallel fringe search algorithm and inter-processor fringe communication have been improved.

Polyhedral adaptive mesh refinement and coarsening, working on all cell types, in 2D and 3D.

A new adaptive mesh refinement and coarsening algorithm has been developed and deployed. The algorithm operates on arbitrary polyhedral meshes, offering refinement and coarsening of polyhedral cells. On hexahedral cell types, refinement is equivalent to 2x2x2 splitting of a hexahedron, while on polyhedra the algorithm regularises the mesh towards hex types. Mesh coarsening is executed by reassembling the cells from previously refined clusters. The pointLevel and cellLevel fields are no longer needed as read input and can be recreated from the existing mesh structure. This allows coarsening of initial locally consistent refined meshes as received from external meshing tools. In 2D simulations, the adaptive mesh refinement and coarsening algorithm will correctly recognise the planar/wedge conditions and only operate in live directions.

Dynamic load balancing for parallel topologically changing meshes

A native implementation of dynamic load balancing is provided as a low-level function of a topologically changing mesh.
Load balancing is implemented as a function of the topoChangerFvMesh virtual base class, making it available for all types of topological changes (or as a response to external load imbalance for a static mesh). The implementation uses the tools developed for parallel decomposition/reconstruction, with changes needed for Pstream communication. The balancing action is executed as a global decomposition, assembly of migrated meshes (using decomposition tools), migration via Pstream communication and reassembly at the target processor (using reconstruction tools). Field data follows the same path, migrating with the relevant mesh data. Load balancing is typically used with adaptive mesh refinement and is thoroughly tested for large parallel decompositions. Cases of "zero cells at a processor" are fully supported; this allows the load balancing tool to be used for initial decomposition or reconstruction, which no longer relies on point/face/cellProcAddressing fields.

Linear solver and block linear solver improvements

In the search for significant performance improvements on meshes with coupled interfaces and large-scale HPC, significant work has been done on linear algebra. On preconditioners, Crout-type ILU preconditioners are implemented. For meshes where there is direct contact between face-neighbours of a cell (virtually all mesh structures, apart from fully hexahedral meshes), the diagonal-based ILU preconditioning is incorrect, with consequences on solver performance. To replace it, Crout-type preconditioners and smoothers are implemented both for the segregated and block-coupled solvers. Variable-level fill-in ILUCp and zero fill-in ILUC0 preconditioners are implemented, with several variants of preconditioning across processor boundaries. Performance testing of processor-aware ILU-type preconditioners is likely to continue for some time.
On linear solver methodology, major work has been done to improve the performance of the linear algebra package where a number of matrix rows (cells) is excluded from the simulation, such as immersed boundary and overset meshes. In particular, zero-group handling in AMG coarsening is implemented. New agglomeration algorithms resulting from the work at the University of Zagreb have been implemented, including a smart cell clustering algorithm and a generalisation of the Selective AMG work by Stuben et al. Here, a coarse level of multigrid is created by equation selection (as opposed to agglomeration), based on priority criteria of equation influences. The algorithms have been generalised to non-M, non-symmetric and non-diagonally dominant matrices. Parallel handling of coarse-level selective AMG interfaces, splitting the triple matrix product coarse-level assembly operations onto the relevant processors, has been implemented. The selective AMG (incredibly) shows theoretical convergence properties of one order of magnitude of residual reduction per V-cycle (theoretically, W-cycle) even on industrial-grade meshes.

The block-coupled solver implements both the equation clustering and equation selection operations on a block-matrix system, using the appropriate norm of a block-coupled coefficient. The algorithms mirror the scalar version, and show remarkable convergence characteristics for a block system without diagonal dominance, such as the implicitly coupled U-p block matrix. Again, theoretical convergence behaviour is indicated on industrial-strength meshes. For further information see:

Tessa Uroić: Implicitly Coupled Finite Volume Algorithms, PhD Thesis, Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, 2019

Major performance improvement for parallel overset and GGI interfaces

A performance improvement for GGI and related interfaces (partial overlap, mixing plane) in parallel execution has been implemented by distributing the work over all available processors.
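To give a flavour of the coarsening-by-equation-selection idea described above, here is a small self-contained toy, not the foam-extend implementation: equations are picked as coarse in priority order of how many other equations they strongly influence, and each selection pushes its strongly-influenced neighbours into the fine set.

```python
import numpy as np

def select_coarse(A, theta=0.25):
    """Toy coarse/fine splitting by equation selection (Stuben-style, simplified).

    Equation j strongly influences equation i when
    |A[i, j]| >= theta * max_k |A[i, k]|, k != i.
    """
    n = A.shape[0]
    off = np.abs(A.copy())
    np.fill_diagonal(off, 0.0)
    strong = off >= theta * off.max(axis=1, keepdims=True)
    np.fill_diagonal(strong, False)
    influence = strong.sum(axis=0)      # how many equations each column influences
    state = np.zeros(n, dtype=int)      # 0 undecided, 1 coarse, -1 fine
    for i in np.argsort(-influence, kind="stable"):
        if state[i] == 0:
            state[i] = 1                                # select as coarse
            state[strong[:, i] & (state == 0)] = -1     # its dependents become fine
    # safety pass: a fine equation with no coarse strong influence turns coarse
    for i in range(n):
        if state[i] == -1 and not np.any(strong[i] & (state == 1)):
            state[i] = 1
    return state == 1

# 1D Poisson matrix: selection produces the classic alternating coarse set
n = 8
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
coarse = select_coarse(A)
```

The real algorithm additionally handles parallel interfaces and block coefficients, as described above; this sketch only shows the selection principle.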
Consistent SIMPLE and PISO segregated algorithms, where the solution is independent of time-step size or relaxation parameters

The final and validated version of the consistent handling of relaxation and time stepping within the SIMPLE-PISO family of algorithms has been deployed. The code has been validated and shown to remove relaxation- and time-step-dependent artefacts in steady and transient solutions.

New formulation of the buoyant Boussinesq approximation solver

An alternative formulation of the steady Boussinesq approximation solver for buoyant flows has been released, following community comments on the lack of accuracy and the instability of the original formulation.

Incremental development of the Finite Area Method and liquid film solver

All your contributions are highly welcome: new solvers, utilities and models; bug fixes; documentation. The many ways of contributing and the contribution process are described in detail at: http://sourceforge.net/p/foamextend...wToContribute/

Hrvoje Jasak
__________________
Hrvoje Jasak Providing commercial FOAM/OpenFOAM and CFD Consulting: http://wikki.co.uk 

November 6, 2019, 18:12 

#2 
Senior Member
Kyle Mooney
Join Date: Jul 2009
Location: San Francisco, CA USA
Posts: 323
Rep Power: 17 
Not sure if it's a temporary issue on the side of the host, but the ParMETIS package URL appears to be down.


November 21, 2019, 03:38 

#3 
Member
Torsten Schenkel
Join Date: Jan 2014
Posts: 69
Rep Power: 12 
Brilliant. Thanks for all the effort that went into this, and the quick bug fix to the MPI/GAMG issue.
I am planning to roll this out to students and was wondering if the Ubuntu 18.04 deb package will be updated on a regular basis. If not, I'm happy to compile and package a tgz for the students myself, but the Debian installation would make things a lot easier. So if there is going to be an updated deb soon, I may delay the rollout. Thanks again. T

January 12, 2020, 18:01 
bad cell cut using IBM

#4 
New Member
Lin Xiangfeng
Join Date: Dec 2016
Posts: 11
Rep Power: 9 
Hi,
I am using the latest released OF now and testing its immersed boundary method capability. However, a 'bad cell cut' warning occurs when running the case. Does it matter, and what can I do to avoid it? Thank you all. Regards. Code:
Bad cell cut: volume = (1.03008 + 0.0341328) = 1.06421
Bad cell cut: volume = (1.08702 + 0.016779) = 1.10379
Bad cell cut: volume = (1.04602 + 0.0287138) = 1.07474
Bad cell cut: volume = (1.0217 + 0.0324708) = 1.05417
Bad cell cut: volume = (1.01828 + 0.0369687) = 1.05525
Bad cell cut: volume = (1.01267 + 0.000227666) = 1.01289
Bad cell cut: volume = (1.03638 + 0.0149395) = 1.05132
Immersed boundary blades info: nIbCells: 8325 nDeadCells: 1999 nIbFaces: 15201 nDeadFaces: 10114
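Each of those warnings reports the total cut-cell volume as the sum of two bracketed contributions (what the two contributions represent is not stated in the log), and the numbers are self-consistent, as a quick check on a few of the reported values confirms:

```python
# Sanity check: in each "Bad cell cut" message, the reported total volume
# equals the sum of the two bracketed contributions (values copied from the log).
pairs = [
    (1.03008, 0.0341328, 1.06421),
    (1.08702, 0.016779,  1.10379),
    (1.04602, 0.0287138, 1.07474),
    (1.0217,  0.0324708, 1.05417),
]
for a, b, total in pairs:
    assert abs((a + b) - total) < 1e-4
```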

January 14, 2020, 05:41 

#5 
Senior Member
Hrvoje Jasak
Join Date: Mar 2009
Location: London, England
Posts: 1,905
Rep Power: 33 
Hi,
This is the immersed boundary. The rule says that a cell can only be cut ONCE by an immersed boundary patch. Therefore, either you have 2 IB patches really close to each other, cutting the same cell, or you have a lot of detail on the IB surface that cannot be captured on the background mesh. The IBM algorithm will work correctly, but this indicates a mismatch between the background grid and the IB surface. You can do the following:
- look at the surface to see what causes the error
- use uniform or adaptive refinement to put more cells into the problem region. There are tools like refineImmersedBoundaryMesh to do this
- have a look at your STL to make sure it is reasonably clean.
In conclusion: safe to proceed, but with care. Hrv
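A minimal command sequence for the last two suggestions might look like this; surfaceCheck is assumed to be available in your foam-extend build, and the STL path is hypothetical for a typical case layout. Code:

# inspect the immersed surface for defects (hypothetical path)
surfaceCheck constant/triSurface/blades.stl

# refine background cells near the immersed boundary, then re-run the solver
refineImmersedBoundaryMesh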
__________________
Hrvoje Jasak Providing commercial FOAM/OpenFOAM and CFD Consulting: http://wikki.co.uk 

January 14, 2020, 17:00 

#6  
New Member
Lin Xiangfeng
Join Date: Dec 2016
Posts: 11
Rep Power: 9 
Thank you so much. I am so glad to receive your reply. You have done a great job in the development of OF. I will look into my case following your advice. Thanks again. Regards, Xiangfeng

January 17, 2020, 05:28 

#7 
New Member
Mattia
Join Date: May 2018
Location: Novara  Italy
Posts: 29
Rep Power: 7 
Hello Hrvoje,
is there any documentation about the overset workflow in foam-extend 4.1? I think your overset implementation is different from OpenCFD's, am I correct? Is there any description of the interpolation schemes or documentation of the oversetMeshDict syntax? Bye Mattia

February 28, 2020, 10:53 

#8 
New Member
Join Date: Aug 2019
Posts: 4
Rep Power: 6 
Hello!
I recently made the switch to foam-extend to try out the block-coupled p-U solver. I am now trying to run a case that contains both an MRF zone and a porous zone and was wondering how to go about this in foam-extend. Is there a way to add an MRF or porous source to simpleFoam or pUCoupledFoam via fvOptions, as you can do in OpenFOAM? Also, how would you go about running a transient simulation using the block-coupled p-U solver? Finally, I was wondering if 'cyclicAMI' can be used in foam-extend, or if ggi/cyclicGgi should be used instead, as in the MRFSimpleFoam tutorials? Thanks to all in advance! Kind regards, Sergio

March 1, 2020, 08:44 

#9 
Senior Member
Hrvoje Jasak
Join Date: Mar 2009
Location: London, England
Posts: 1,905
Rep Power: 33 
Hi,
You have the MRFPorousFoam solver that does all that. Hrv
__________________
Hrvoje Jasak Providing commercial FOAM/OpenFOAM and CFD Consulting: http://wikki.co.uk 


March 5, 2020, 10:24 

#11 
New Member
Join Date: Aug 2019
Posts: 4
Rep Power: 6 
Thank you very much for the response!
I had a go at using MRFPorousFoam but ran into issues when using ggi/cyclicGgi interfaces. For example, in the attachments, image 1 shows a screen capture of my runs of the axialTurbine_ggi tutorial case (from $FOAM_TUTORIALS/incompressible/MRFSimpleFoam/) using MRFSimpleFoam and MRFPorousFoam. Image 2 is a snippet of the log output showing the continuity error and increased flux through the interfaces with the coupled solver. The only changes made to the MRFPorousFoam setup were the added Up solver, blockSolver, fieldBounds and reduced under-relaxation in the fvSolution file, the added div(U) term in fvSchemes and the coupled turbulence model. I also tried various settings for boundary conditions/RAS properties/linear solver settings and schemes, all without much effect. My own test case (images 3 and 4), consisting of a propeller blade inside a (non-conformal) MRF zone, shows the difference between my coupled and segregated solver runs a little more clearly (it looks as if the airflow is being reversed..). I was wondering if you are familiar with this behaviour and, if so, whether you have any suggestions on how to adapt the tutorial case to make it run successfully using MRFPorousFoam? The only other post I could find mentioning issues with GGI in this context is from a thread from several years ago: pUCoupledFoam with Multiple Reference Frames (MRF). Thanks again for your help! All the best, Sergio Last edited by sai193; March 6, 2020 at 04:30.

April 20, 2020, 10:59 

#12 
New Member
ZhaoJia
Join Date: Nov 2017
Posts: 8
Rep Power: 8 
Hi Prof. Jasak,
I am using the immersed boundary method in foam-extend 4.1 to calculate the flow field around a NACA0012 foil, but I got some warnings, such as:

1. External flow

Immersed boundary ibNACA info: nIbCells: 358 nDeadCells: 739 nIbFaces: 223 nDeadFaces: 3144
--> FOAM Warning :
From function void Foam::immersedBoundaryPolyPatch::calcCorrectedGeometry() const
in file immersedBoundaryPolyPatch/immersedBoundaryPolyPatch.C at line 1381
Minimum IB face area for patch ibNACA: 0. Possible cutting error. Review immersed boundary tolerances.
Reading field U

2. Calculating divSf

Face areas divergence (min, max, average): (0 2e-05 3.44861e-10)
--> FOAM Warning :
From function writeIbMasks
in file writeIbMasks.C at line 166
Possible problem with immersed boundary face area vectors: 2e-05
Open cell 216149: 1.41421e-05 gamma: 1
Open cell 216854: 1.41421e-05 gamma: 1
Open cell 217551: 2e-05 gamma: 1
Open cell 217552: 1.41421e-05 gamma: 1
Open cell 218249: 1.41421e-05 gamma: 1
Open cell 223199: 6.7868e-07 gamma: 0.932174
Open cell 223208: 6.77995e-07 gamma: 0.932218
Open cell 223216: 6.78226e-07 gamma: 0.932194
Open cell 223225: 6.79081e-07 gamma: 0.932106
Open cell 223230: 6.80099e-07 gamma: 0.932024
Open cell 223234: 6.80846e-07 gamma: 0.931924
Open cell 223253: 6.2886e-07 gamma: 0.967803
Open cell 223255: 6.8835e-07 gamma: 0.931207
Open cell 223263: 6.92602e-07 gamma: 0.930771
Open cell 223277: 6.31157e-07 gamma: 0.967062
Open cell 223295: 6.33853e-07 gamma: 0.965989
Open cell 223300: 7.35851e-07 gamma: 0.926549
Open cell 228749: 1.41421e-05 gamma: 1
Open cell 229451: 2e-05 gamma: 1
Open cell 229452: 1.41421e-05 gamma: 1
Open cell 230154: 1.41421e-05 gamma: 1
Open cell 230849: 1.41421e-05 gamma: 1

I have tried adjusting the background grid and the STL grid, but I cannot remove these warnings. I was wondering if some relation between the background grid and the STL grid is required, and I hope you could give me some suggestions. Regards.

April 26, 2020, 04:30 
NACA4412 overset tutorial in foamextend 4.1 case fails

#13 
New Member
Johannes N Theron
Join Date: Feb 2010
Location: Hamburg
Posts: 25
Rep Power: 16 
I am having trouble running the parallel overset tutorials on foamextend 4.1
I installed it on two systems, both running Ubuntu 18, and get the same MPI error during the runApplication phase (segmentation fault on processor 3). There is an earlier error in the mergeMeshes procedure:

--> FOAM Warning :
From function Foam::forces::forces(const word&, const objectRegistry&, const dictionary&, const bool)
in file forces/forces.C at line 209
No fvMesh available, deactivating

but that does not seem to terminate the run. Has anyone come across this, or been able to run this tutorial successfully? Jan Theron

May 31, 2020, 20:52 

#14 
New Member
Artem
Join Date: Apr 2014
Posts: 29
Rep Power: 12 
Dear Hrvoje,
Do I understand correctly that the AMG linear solver does not work with a specific coupled matrix such as the one present in conjugateHeatFoam? Is there any possibility to use other linear solvers for the coupled matrix in conjugateHeatFoam besides these: Code:
Valid asymmetric matrix solvers are :
3
(
BiCG
BiCGStab
smoothSolver
)
Last edited by Kombinator; June 1, 2020 at 04:55.

September 6, 2020, 07:53 
solver for nonisothermalviscoelasticmodel

#15  
Member
idrees khan
Join Date: Jun 2019
Posts: 36
Rep Power: 6 
Is there a solver for non-isothermal viscoelastic models in this new release?

November 11, 2020, 10:19 
Solver for nonisothermal viscoelastic models

#16 
Member
idrees khan
Join Date: Jun 2019
Posts: 36
Rep Power: 6 

November 20, 2020, 03:11 

#17 
New Member
damu
Join Date: Feb 2017
Posts: 24
Rep Power: 9 
Dear Prof. Jasak
I recently switched to foam-extend 4.1 after unsuccessful attempts with OF 5, 7 and 8 on a problem of flow over a heated cylinder (Re=100, Ri=1). Using OF 7 and 8, I tried buoyantPimpleFoam, but the lift coefficients always fell on the positive side instead of taking negative values. Later on, I downgraded to OF 5 after seeing various discussions on issues with the Boussinesq approximation in OF 7 and OF 8. Here too, the solver returned incorrect lift coefficients until I modified the pressure equation (based on suggestions from another researcher) to

p = p_rgh + rhok*gh - gh (OF 5 using buoyantBoussinesqPimpleFoam) (1)

instead of

p = p_rgh + rhok*gh (default equation in pEqn.H) (2).

To my surprise, the lift coefficients were found to be reasonable. I understand the term gh in (1) corresponds to the pressure due to the weight of the fluid (please correct me if wrong). However, the vortex shedding frequency did not match the available results, and the wake was somewhat disorganised. It was then that I came across the release of foam-extend 4.1, where you have acknowledged the stability issues in the original formulation. I now have buoyantBoussinesqPisoFoam, but the velocity magnitude keeps increasing after each time step. Please see my case file attached (https://drive.google.com/file/d/1t1S...ew?usp=sharing); I would be grateful to receive your valuable suggestions. I also would like to know if any other foam-extend/OF users have encountered such issues. Thank you
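To make the difference between the two formulations concrete, here is a minimal numeric sketch; the sample values are arbitrary, and the names rhok, gh and p_rgh follow the equations quoted above:

```python
# Compare the two pressure reconstructions quoted above using sample values.
rhok = 0.98        # Boussinesq effective density ratio (arbitrary sample)
gh = -9.81 * 0.5   # g.h at a point 0.5 m above the reference plane
p_rgh = 1.0e5      # solved pseudo-pressure (arbitrary sample)

p_default = p_rgh + rhok * gh        # equation (2): stock pEqn.H form
p_modified = p_rgh + rhok * gh - gh  # equation (1): modified form

# The two reconstructions differ exactly by the hydrostatic term gh,
# i.e. the pressure due to the weight of the reference fluid.
assert abs((p_default - p_modified) - gh) < 1e-9
```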

February 1, 2021, 23:16 
Dr.Zhang

#18 
New Member
Benjamin Zhang
Join Date: Aug 2019
Posts: 2
Rep Power: 0 
Dear Hrvoje,
Thanks a lot for the pUCoupledFoam solver in foam-extend 4.1. However, I noticed some interesting behaviour of pUCoupledFoam. I am using the tutorial case "backwardFacingStepLaminar". I noticed that when running it with 1 CPU and with 4 CPUs, the final residuals differ significantly, with the 4-CPU residual several orders of magnitude larger. Do you know why this is happening? Thanks, Xiaoliang

February 9, 2021, 06:25 

#19 
Senior Member

@xiaoliang
The slowdown in convergence you observe is due to the fact that the domain decomposition renders the preconditioners used to solve the linear system less performant. This is well documented in the literature on iterative solution methods (see ddm.org for instance). The slowdown is not particular to the coupled solver, nor to OpenFOAM. It can be "solved" by choosing solver settings such that the relative residual criterion for the linear solver is met, regardless of the maximum number of iterations imposed. This will ensure that the number of SIMPLE iterations remains approximately the same, at the cost of making each iteration more expensive. There is no immediate solution to this. Multilevel decomposition methods do exist, but they are not immediately available within OpenFOAM and have their drawbacks.
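As a sketch of such settings, the coupled Up system in fvSolution can be given a relative tolerance and a generous iteration cap, so that the relative criterion, not the cap, terminates each solve. The keywords below follow the usual foam-extend fvSolution layout and should be verified against your tutorial case. Code:

Up
{
    solver          BiCGStab;
    preconditioner  Cholesky;
    tolerance       1e-9;
    relTol          0.01;    // stop on relative residual reduction...
    maxIter         1000;    // ...not on the iteration cap
}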

July 16, 2021, 05:02 
Help with conjugate of viscoelastic fluid using foam extend 4.1

#20 
New Member
Asanda
Join Date: May 2021
Posts: 7
Rep Power: 4 
Dear All,
May you kindly assist me: I need help with simulating conjugate heat transfer of viscoelastic fluids. Which solver can I use in foam-extend 4.1? Furthermore, will I perhaps need to combine viscoelasticFluidFoam and chtMultiRegionFoam for my simulation, and can these two solvers be merged? Your help will be much appreciated; thanks in advance.
