#include "udf.h"

real T_mean; /* file scope: shared between the two DEFINE macros below */

DEFINE_ADJUST(adjust, domain)
{
    real T_tot = 0.;  /* must be initialized before accumulating */
    real u;
    real counter = 0.;
    face_t f;
    int ID = 20;      /* outlet zone ID shown in the Fluent boundary conditions panel */
    Thread *thread = Lookup_Thread(domain, ID);

    begin_f_loop(f, thread)
    {
        u = F_U(f, thread);  /* x velocity */
        if (u >= 0.)         /* fluid is leaving the domain */
        {
            T_tot += F_T(f, thread);
            counter += 1.;
        }
    }
    end_f_loop(f, thread)

    if (counter > 0.)                 /* guard against division by zero when there is no outflow */
        T_mean = T_tot / counter;     /* arithmetic mean temperature of the outflow */
}

DEFINE_PROFILE(T_backflow, thread, position)
{
    face_t f;

    begin_f_loop(f, thread)
    {
        F_PROFILE(f, thread, position) = T_mean;
    }
    end_f_loop(f, thread)
}

Quote:

This is completely wrong. If it were true, they would have used the simple if-else switch that was used in the previous versions of the SA and k-omega models. You can still observe in the Fluent theory guide (I am talking about the SA model) that they use the integrate-to-the-wall approach for Y+ < 2 and the wall-function approach for Y+ > 30, and strongly recommend making meshes with either Y+ < 2 or Y+ > 30 so that either the integrate-to-wall or the wall-function approach can be used. The actual switch, however, is implemented at the intersection of the two profiles, i.e. Y+ = 11.225 (previously it was at 11.06 in version 6.3).

Now let's discuss the theory behind the enhanced wall treatment for the k-epsilon and k-omega models. The first question that comes to mind is why two approaches are used for the same effect, i.e. implementing a smooth transition between the log law and the viscous sublayer. This is because:

1. k-epsilon models were not designed for near-wall flow, so they require damping functions to simulate near-wall effects.

2. k-omega based models were originally designed for the near-wall region and therefore do not require damping functions; hence the hybrid wall functions (a blending of the near-wall and log-law functions) were implemented directly, and the same is true for the SA model.

You can find the details of the latest work on the k-omega and SA models with hybrid wall functions here:
http://num.math.uni-goettingen.de/ba...ings/knopp.pdf
http://num.math.uni-goettingen.de/ba...ngs/alrutz.pdf

But whether it is the two-layer approach (k-epsilon) or the single-model implementation approach (k-omega or SA), the purpose is the same: to remove the shortcomings of both models. That is, the low Reynolds number formulation is valid for Y+ < 0.2 (low Reynolds number k-epsilon model) or Y+ < 2 (k-omega model; I am not writing "low Reynolds number k-omega" because k-omega is originally a low Reynolds number model, so the prefix is unnecessary), and similarly Y+ > 30 for the high Reynolds number k-omega and k-epsilon models. To be continued...

Now consider this: http://en.wikipedia.org/wiki/Law_of_the_wall

It is clearly written there that U+ = Y+ for Y+ < 5 (you can consider the sublayer to extend up to 11.225, but at Y+ = 12 the error is around 25%). The log law holds for Y+ > 30. The buffer zone is Y+ = 5 to Y+ = 30. This is the problem area where neither model (low Reynolds number or high Reynolds number) works, and it is the reason the hybrid or enhanced wall treatment came into existence.

Here is some material from the Fluent user guide: in other words, with Y+ ~ 1 you are solving the low Reynolds number k-epsilon model of M. Wolfstein, "The Velocity and Temperature Distribution of One-Dimensional Flow with Turbulence Augmentation and Pressure Gradient", Int. J. Heat Mass Transfer, 12:301-318, 1969.

Put in simple words:

1. With Y+ ~ 1, you are solving the low Reynolds number k-epsilon model.
2. In its original form, the Wolfstein model is not applicable for Y+ > 0.2.
3. To overcome this, we have to use hybrid wall functions.
4. Enhanced wall treatment is a method of implementing the hybrid (or enhanced) wall functions for varying Y+ in the CFD model.
5. Enhanced wall treatment is not needed to implement hybrid (enhanced) wall functions in the k-omega model, because that model is already applicable down to the viscous sublayer.

Now the question is how the enhanced (hybrid) wall functions work. They work like this:

U+ = (1 - blending function) * U+ (viscous sublayer) + blending function * U+ (log law)

Blending function = 0 for Y+ < 6
Blending function ~ 1 for Y+ > 30-40

So for Y+ < 6 you have the viscous sublayer and you are using the low Reynolds number model; for Y+ > 30 you are using the log-law implementation (aka wall functions). Between Y+ ~ 6 and 30 one uses the linear sum of both profiles according to their relative weighting. For example, in the reference http://num.math.uni-goettingen.de/ba...ings/knopp.pdf the blending function has the following values (Equation 7 of that reference):

Y+ = 1, BF = 0
Y+ = 10, BF = 0.018
Y+ = 12, BF = 0.038
Y+ = 15, BF = 0.094
Y+ = 20, BF = 0.2922
Y+ = 25, BF = 0.626
Y+ = 27, BF = 0.761
Y+ = 30, BF = 0.909
Y+ = 35, BF = 0.9929
Y+ = 38, BF = 0.9992
Y+ = 40, BF = 0.9998

The blending function is different for different terms. In the example above, BF was calculated for U+ and Y+. But whichever function is used, the basic theory is the same.

PS: I have already mentioned in another thread that the enhanced wall treatment is good for Y+ < 10, because for higher values you have an increasing weighting of the log law, which is not good at predicting separation.

/file/read-case-data /home/maghazlani/Analysis/intake_test_3-1-23000.cas

;/define/operating-conditions/operating-pressure 0

;/define/models/solver/density-based yes

;/define/models/energy yes

;/define/models/viscous/kw yes

;/define/boundary-conditions/modify-zones/zone-type 11 pressure-inlet

;/define/materials/change-create air air yes ideal-gas no no no no no no

;/define/boundary-conditions/pressure-inlet inlet yes no 101325 no 27357 no 300 no yes no no no yes 01 0.05268

;/define/boundary-conditions/modify-zones/zone-type 10 pressure-outlet

;/define/operating-conditions/operating-pressure 0

;/adapt/adapt-to-gradients pressure curvature 0 0.7 0.3 yes 100

;/adapt/set/max-number-cells 2000

;/solve/initialize/compute-defaults/pressure-inlet 11

;/solve/initialize/repair-wall-distance yes

;/solve/initialize/initialize-flow

;/adapt/mark-inout-hex yes no 0.000515079 0.205496 0.0156082 0.0451296 -0.000208354 -0.0265887

;/file/auto-save/data-frequency 20000

;/mesh/polyhedra/convert-domain yes yes

;/solve/set/under-relaxation/k 0.5

;/solve/set/under-relaxation/epsilon 0.5

;/solve/set/under-relaxation/turb-viscosity 0.7

;/solve/set/under-relaxation/solid 0.7

;/solve/set/limits 1 5e10 1 5000 1e-14 1e-20 100000 0.05

/solve/iterate 24000

;/display/set/contours/surfaces 0 ()

;/display/set/picture/color-mode color

;/display/set/picture/driver jpeg

;/display/set/contours/n-contour 99

;/display/set/contours/filled-contours yes

;/display/contour mach-number

;/solve/monitors/surface/set-monitor mass-flow "Mass Flow Rate" 0 () no no yes massf 1000

;/display/views/restore-view left

;/display/views/auto-scale

;/display/views/camera/zoom-camera 2

;/display/save-picture /home/maghazlani/Analysis/screenshot-mach-extended_5-4000.jpeg

/file/write-case-data /home/maghazlani/Analysis/intake_test_3-1-47000.cas

Use the following commands for running a transient simulation:

solve/set/time-step 1e-2 (sets the time-step size)

solve/dual-time-iterate 100 40 (100 is the number of time steps and 40 is the number of iterations per time step)

Quote:

Hex Dominant doesn't really play well with the other methods. You will also have a hard time when you try to merge it with the Hexa method, so you will need to do things in the right order...
Hex Dominant is also really more of an FEA method. It starts from the quad surface mesh and grows into the middle. If you don't have a quad surface mesh, it surface meshes everything first and then starts. It doesn't do the material-point flood fill that the octree tetra mesher does. It could give perfect hexas on this simple case, but I am guessing yours is more complicated and you may not like all the junk pyramids and tetras in the middle.

If you are sure you want to go with this mesher, you should probably go the other way around:

1. Do the Hexa blocking side first, then save that mesh.
2. Delete all the mesh except the faces that touch the portion you want to Hex Dominant mesh (these will be the seeded surfaces).
3. Surface mesh the rest of the region you want to hexa mesh (you can actually select the surfaces to mesh). Make sure to turn on the option to "respect line elements", since this will let your new surface mesh connect properly to your previous surface mesh.
4. Run the Hex Dominant mesher from the existing mesh and save that mesh file.
5. Load the blocked Hexa mesh. It will ask if you want to replace or merge; choose merge to concatenate the files.
6. Since the interface mesh came from the hexa blocking mesh, it will be exactly aligned, but you still need to merge the nodes. Use Edit Mesh => Merge Nodes => Tolerance, set the tolerance to something very small (like 0.0001), and apply.
7. Then you can delete that interface surface mesh, since you won't need it any more.
8. Save the combined mesh and output to your solver.

Have fun with it.

Even if you are copying your mesh around to create an annulus instead of a periodic section, it will be easier if you have nodes matching.

Once you copy the periodic mesh into place, you can merge the sections together by using "Edit Mesh (tab) => Merge Nodes (with a tolerance)". I usually set the tolerance very close to zero and use the "single edges only" option. Don't forget to "ignore projections".

However, you could also merge nodes interactively (especially if you don't have a periodic mesh), though you may also need to use split edge and move nodes... Still, even the manual method without periodicity shouldn't take too long for this example.

With ICEM CFD, there are always other ways... ;)

1) You could try a Hexa (blocking tab) mesh. This would be really easy if you really wanted to do a sphere in a box... What is your real application? Aircraft shapes, wings, etc. are also pretty easy to do with Hexa blocking. It gives a pure hexa mesh very quickly with the best boundary layers possible and it has very low memory requirements (the lowest of any meshing tool I know).

2) You could try hexa core... Personally, I prefer the transitions in the 12-to-1 conversion, but Hexa Core will use less memory (Cartesian algorithm). To use it, start with a Tetra/Prism mesh, but with a large max size in the volume (to reduce the amount of mesh generated in the volume). Then generate prisms... Then go back into Params by Parts, set a max size, and turn on hexcore for the volume parts you want to have a hexa core in... (other hexa core settings are under Global Params => Volume Params => Cartesian => Hexa Core). This will dump your octree tetras (but keep the surface mesh and prisms). It then uses a Cartesian algorithm to generate a hexa core of the right size in the volume, which it steps back a few layers from the pre-existing mesh. Then it uses a Delaunay algorithm to fill the gap between the hexas and prisms with tetras...

3) The ICEM CFD hexa core isn't as good as the TGrid hexa core. TGrid is a bit of a memory pig, but if that is all the same to you, you should try it out. TGrid hexa core is ideal for Fluent in that it supports hanging nodes, even adjacent to tetras (ICEM hexa core only supports hanging nodes within the Cartesian region, but fails if they are at the surface adjacent to tetras). The TGrid hexa core is also able to go all the way to the walls if they are flat.

4) You could try subdividing your geometry into smaller chunks that your memory capacity could handle. Again, the practicality of this would be dependent on the geometry...

Quote:

I'm not sure of a way to do that without having data files saved at previous time steps. I haven't used Fluent's tools to create animations; I've always saved contour plots as .png or .jpg images and then used 3rd-party tools to create the animations. I do create similar animations, though, since I am investigating air-lift driven flow and am interested in the behavior of bubbles in the airlift column. I'd suggest visualizing the interface using contours of phase: plot the contours of air ranging from 0 (water) to 1 (air).
Here is a sample journal file for a 2D simulation I ran in batch mode:

;read case and data
file/read-case single-48cm-2D-batchtest.cas
;
;initialize domain with air
/solve/initialize/initialize-flow
;
;patch water to raceway depth of 3.5 cm
/adapt/mark-inout-rectangle yes no 0 0.62875 -0.48 0.035
/solve/patch water () (0) mp 1
;
;set up image output
/display/set/contours/filled-contours yes
/display/set/picture/driver png
/display/set/picture/landscape yes
/display/set/picture/x-resolution 960
/display/set/picture/y-resolution 720
/display/set/picture/color-mode color
/views/restore-view front
;
;print front view of phases and velocity magnitude at t=0
/display/contour air vof 0 1
/display/save-picture airvof%t.png
/display/contour mixture velocity-magnitude 0 0.5
/display/save-picture velmag%t.png
;
;set up display commands to print front view of phases and velocity magnitude every 20 time steps
/solve/execute-commands/add-edit command-2 20 "time-step" "/display/contour air vof 0 1"
/solve/execute-commands/add-edit command-3 20 "time-step" "/display/save-picture airvof%t.png"
/solve/execute-commands/add-edit command-4 20 "time-step" "/display/contour mixture velocity-magnitude 0 0.5"
/solve/execute-commands/add-edit command-5 20 "time-step" "/display/save-picture velmag%t.png"
;
;set up auto-save
/file/auto-save data-frequency 500
/file/auto-save append-file-name-with time-step 6
;
;iterate over 5000 time steps
solve/set/time-step 0.001
solve/dual-time-iterate 5000 50
file/write-data single-48cm-2D-batchtest.dat

Quote:

I have figured it out. Here is the script; hope it might be of help to others:
#include "udf.h"

DEFINE_PROFILE(inlet_temperature, t, i)
{
    real x[ND_ND];
    real time;
    face_t f;

    begin_f_loop(f, t)
    {
        F_CENTROID(x, f, t);
        time = CURRENT_TIME;
        if (time < 40)
            F_PROFILE(f, t, i) = 300;
        else
            F_PROFILE(f, t, i) = 340;
    }
    end_f_loop(f, t)
}

Then load the old blocking file and update the associations.

If they don't all auto-associate properly, you can use the interactive controls to adjust things...

By default, you should probably leave it as 1, which means that all the other sizes could be taken at face value.

Then if you want to make the model 20% coarser, you can come back and set the scale factor as 1.2. Get it?

Scale Factor times Max Element size gives you the largest element in your model. For Octree, this is the size of the initial subdivision when Octree is first initialized. You can read more about this in the theory sections of the Help.

If the scale factor is larger than 1, it increases the mesh sizes throughout the model. If it is less than 1, it reduces them. Note, it changes how they behave, but it doesn't change the nominal values as you see them in the UI.

The Hex Dominant method starts by paving quads at the walls and then marches inward with isotropic hexas; the advancing fronts can collide somewhat badly in the middle. This is good for FEA structural analysis, where most of the interesting stuff happens near the surface and uniform elements are sufficient to capture it all, but it is not good for CFD.

However, if you tried Multizone (or sweep) with an inflation layer, you could get a nice combination of swept hexas that would produce a good mesh for CFD.

by Simon

I use it here a lot and it works fine.

Cheers.

By BRUNOC

Prism quality looks worse than it is... No worries there either.

As for the poor tetras, run the smoother with the prisms and hexas frozen... see what it can do. It should fix most tetras. The tetras it can't fix are those that are stuck between close layers of prisms...

In that case, you may want to free up the prisms to be smoothed, but greatly reduce the "upto quality" number. So instead of trying to smooth just the tetras with an upto of 0.6, turn on prism smoothing and set the upto down to 0.1...

If the problem is just that your prism layers get too close together and there isn't room for the smoother to get any improvement, you could try regenerating the prisms with auto-reduction (an advanced option) on, or adjust the height or number of layers in that area.

Best regards,

Simon

By Simon

Keeping this in view, the meshing guys have devised some metrics with a factor of safety. The following are the most important quality metrics with their ranges.

Angle: must be greater than 18 deg (but Fluent can sometimes work with angles as low as 9 deg).

Quality: must be greater than 0.3. But a quality of 0.2 or greater can be tried, and if the solver doesn't mind it, then carry on.

Expansion rate: the change of cell volume with respect to neighbouring cells. Should be less than 10 (or 20; check the CFX or Fluent manual).

Skewness: (Fluent) must be less than 0.8, though up to 0.95 is acceptable.

Aspect ratio: must be less than 100 for a single precision solver and 1000 for a double precision solver. But thanks to the implementation of better algorithms, I have tested and found that an aspect ratio of up to 8000 is OK in the boundary layer with no impact on the solution. Moreover, it is characteristic of a boundary layer to have a smaller cell size in the normal direction, where flow gradients are steep, and a bigger cell size in the streamwise direction, where flow gradients are not sharp. This approach is not valid for transition prediction, where you require about ten times more mesh points in the vicinity of the transition.

Important thing to remember:

If you are using Fluent, then make a mesh with an orthogonal quality greater than 0.01. To ensure this, you need to have:

1. Min quality greater than 0.3

2. Angle greater than 18

3. Smooth cell size change

For CFX, you need to ensure this in ICEM:

1. Min quality greater than 0.3

2. Min angle 18

3. Smooth cell size change

By Far