CFD Online Discussion Forums

Main CFD Forum: Pressure drop not correctly predicted (https://www.cfd-online.com/Forums/main/96634-pressure-drop-not-correctly-predicted.html)

rks171 January 27, 2012 13:01

Pressure drop not correctly predicted
 
I'm modeling an experiment with a flow obstruction and predicting the pressure drop that was measured in the test section. However, my predicted pressure drop is about 26% below the experimental value. This struck me as odd: I expected the pressure drop to be predicted more accurately than the velocities, yet the experimental velocities are matched to within 10%. What could I change in my simulation to improve the pressure drop prediction? I tried refining the mesh by doubling the number of cells, but that didn't improve the pressure drop prediction (though it did improve the velocity prediction).

ogloth January 27, 2012 17:19

If velocities and flow topology are predicted correctly, my bet would be on skin friction. Check that the y+ requirements of your turbulence model are fulfilled. Good luck!

rks171 January 27, 2012 18:36

Well, the flow velocity predictions are within about ±10% for roughly 80% of the measurement points. I don't know if this should be considered good or not. I was using the realizable k-epsilon model with the two-layer all-y+ wall treatment. What kind of y+ values should I be targeting? My y+ values range from about 0.1 to 40. I thought both small and large y+ values were acceptable with the all-y+ wall treatment.

Another problem I just picked up on is that my wall temperature calculations are about 30 °C off from the experimental results, which seems like an awful lot to me. The surface-averaged temperature of the flow area matches the experimental values to within 1 °C, though, so it looks to me like the heat transfer coefficient at the wall is being badly predicted in addition to the skin friction coefficient, which again makes me think there's something wrong with the wall treatment. Any suggestions on how to improve that?

Martin Hegedus January 27, 2012 18:47

What's your Mach number and what is the blockage ratio? And is your flow incompressible? If it is incompressible, the average velocity coming in has to equal the average velocity going out, assuming constant area: conservation of mass. And the delta p is related to the total losses. So how well are you predicting the drag on the obstruction and the channel walls? It is not surprising that your velocity is better predicted than the pressure loss. Also, how do you define pressure loss? Is it the average pressure drop between upstream and downstream over the entire cross-sectional area?

Martin Hegedus January 27, 2012 18:57

Quote:

Originally Posted by rks171 (Post 341586)
Well, the flow velocity predictions are within about ±10% for roughly 80% of the measurement points. I don't know if this should be considered good or not. I was using the realizable k-epsilon model with the two-layer all-y+ wall treatment. What kind of y+ values should I be targeting? My y+ values range from about 0.1 to 40. I thought both small and large y+ values were acceptable with the all-y+ wall treatment.

Another problem I just picked up on is that my wall temperature calculations are about 30 °C off from the experimental results, which seems like an awful lot to me. The surface-averaged temperature of the flow area matches the experimental values to within 1 °C, though, so it looks to me like the heat transfer coefficient at the wall is being badly predicted in addition to the skin friction coefficient, which again makes me think there's something wrong with the wall treatment. Any suggestions on how to improve that?

Maybe it would be good if you posted a picture. The topics of blockage and pressure loss are kind of generic.

rks171 January 27, 2012 19:29

Mach number, I don't know... it's incompressible and the Reynolds number is about 100,000. The blockage ratio, I think, was about 30%, but I'll have to get back to you on that if it's important. I'll tell you what I did notice about the velocity predictions, though: in regions with a higher ratio of wall area to open flow area, the velocities are more severely under-predicted, while regions with mostly open flow area are predicted almost perfectly. It seems like it's the shear at the wall that isn't being predicted right. Mass flow rate is conserved; I checked that already.

The experiment had pressure drop measurements taken over several sections of the flow area - each section having the same type of flow blockage in it (a grid of complicated geometry). I'm calculating pressure drops over those same sections.

Interesting that you say it isn't surprising that pressure drop is poorly calculated compared to velocity. Why is that exactly?

What about the poor heat transfer between the wall and the fluid, which results in the severe over-prediction of wall temperature? Couldn't that problem be related to the poor pressure drop prediction through bad modeling of the wall effects in the turbulence model?

I don't think I'm going to be able to post a pic of this, as you requested, but think of metal straps with their thin sides oriented with the direction of the flow.

Martin Hegedus January 27, 2012 20:14

Quote:

Originally Posted by rks171 (Post 341591)
Interesting that you say it isn't surprising that pressure drop is poorly calculated compared to velocity. Why is that exactly?

Conservation of mass and incompressibility. The flow into a stream tube must equal the flow coming out of it if the inlet and outlet areas are the same. If the velocity coming out of the stream tube is faster/slower than going in, then the area must have decreased/increased. Since you are in a confined area, that's tough to do, so the velocity field is not very sensitive. Of course that's for steady flow, but it gives you the idea. The time-averaged values for unsteady flow will behave similarly.

However, pressure is sensitive, and it is a reflection of the loss across the blockage (the grid of complicated geometry).

So by saying that you are not capturing the pressure, you are also saying that you are not capturing the drag on your blockage. Yes, there are possibly wall temperature effects, but if you cannot get the drag on the blockage you have no hope of getting the pressure drop.

I'm not sure what your blockage is, but it sounds like some sort of radiator. What is your Reynolds number length based on? If your flow is incompressible, it does not sound like your Reynolds number is based on the fins of the radiator (i.e. the grid). What is your y+ on the radiator fins/metal straps? Sorry, your y+ = 1 for the metal straps must be based on a representative length of the straps. If I understand this correctly, you are going to have a lot of small cells. Ouch.

rks171 January 28, 2012 08:06

That all makes sense.

My Reynolds number is based on the hydraulic diameter of my flow channel. Should I be aiming for a certain y+ value on the straps in order to better predict the wall drag? Is there a problem with having lots of small cells (aside from the computational demand)? Could that be the cause of my inability to correctly capture the wall shear?

Martin Hegedus January 28, 2012 11:59

Quote:

Originally Posted by rks171 (Post 341631)
That all makes sense.

My Reynolds number is based on the hydraulic diameter of my flow channel. Should I be aiming for a certain y+ value on the straps in order to better predict the wall drag?

It depends on whether you are using wall functions or not. In general I do not recommend wall functions for complex geometries. If you don't use wall functions, then start off with y+ equal to 1 taken at about 1/4 of the characteristic length scale of your metal strap.

Quote:

Is there a problem with having lots of small cells (aside from the computational demand)?
In general, no, just computational demand.

Quote:

Could that be the cause of my inability to correctly capture the wall shear?
I'm not exactly clear on your overall geometry, so it is hard for me to say. However, the pressure drop and entropy rise (i.e. temperature) are a very important part. They provide the "stuff" which changes everything downstream.

Another part is the equation of state. I deal with air and it sounds like you deal with a liquid (water?). It sounds like your blockage is significant in terms of creating entropy. For air, I would recommend using the compressible equations. I don't have a suggestion for a liquid. It sounds like your problem, in air, would cause a significant pressure drop and some sort of temperature rise. By the equation of state (perfect gas law), the density ratio of upstream to downstream would be affected noticeably, i.e. (rho1/rho2) = (p1/p2)*(T2/T1). Both the pressure drop and the temperature rise push the density down. This then feeds back into the other equations.

I'm not sure what your equation of state is. But, for example, if your equation of state is equally sensitive to everything, like the perfect gas law, you need to use the full set of compressible equations. If density is not sensitive to pressure or temperature changes then you need to use the full set of incompressible N.S. equations. By this I mean that you need to include the energy equation and link it to the others through the equation of state. I assume you've done this since you have mentioned temperature. Sometimes the energy equation is dropped by neglecting temperature. And, finally, if temperature is not sensitive to pressure or density changes then the energy equation becomes decoupled from the other equations. In short, you need to use the right set of equations.
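As a quick illustration of the density-ratio relation quoted above (an editorial addition, not part of the original post), here is a minimal Python sketch; the 5% pressure drop and 5% temperature rise are arbitrary example values, not numbers from the experiment.

Code:

# Perfect-gas density ratio, (rho1/rho2) = (p1/p2) * (T2/T1): both a
# pressure drop and a temperature rise across the obstruction push the
# downstream density down. The 5% changes below are arbitrary examples.
p1, T1 = 101325.0, 300.0          # upstream pressure [Pa] and temperature [K]
p2, T2 = 0.95 * p1, 1.05 * T1     # example: 5% pressure drop, 5% temperature rise

rho_ratio = (p1 / p2) * (T2 / T1)   # rho1/rho2 from the ideal gas law
print(f"rho1/rho2 = {rho_ratio:.3f}")   # ~1.105, i.e. roughly a 10% density change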

Turbulence modeling. And I'm referring to turbulence modeling around the obstruction, i.e. wake/eddy modeling. It will determine the drag on the obstruction. It will also determine how your pressure loss and entropy loss mix in with the rest of the flow. But, how important that second part is depends on where your measurements are taken. It sounds like your Re number is low. So, around the obstruction you may not have much choice in regards to turbulence modeling. Or I should say wake/eddy modeling. You will need to go with unsteady laminar (i.e. not LES or RANS) flow and do the best you can with the resources you have. All of which is maybe what you are doing now.

Oh, and at this point, all of my suggestions are just my opinions. It seems the physics you are modeling are complex due to significant interactions.

Martin Hegedus January 28, 2012 12:08

BTW, you mentioned flow channel. Does this mean something like a water channel with the top surface exposed to air?

If so, there is a lot going on. Much more than I thought when this thread started!

Martin Hegedus January 28, 2012 12:29

Oops, I should be clear. Assuming adiabatic wall conditions, the only ingredient added by the obstruction is entropy.

rks171 January 28, 2012 12:58

Quote:

It depends on whether you are using wall functions or not. In general I do not recommend wall functions for complex geometries. If you don't use wall functions, then start off with y+ equal to 1 taken at about 1/4 of the characteristic length scale of your metal strap.
Do I have a choice whether or not I use a wall function? I thought I needed to use one with the k-epsilon turbulence model. Right now I'm using two-layer all y+ wall treatment.

As for temperature, I should clarify, I'm running several cases. Some are heated and temperature is important. Some are unheated and temperature change is insignificant. The flow is incompressible (water). For the case where I'm comparing the pressure to the experimental values, the case is not heated and so I don't include the energy equation, just continuity and momentum.

I don't think my Re is low (100,000); that's well into the turbulent regime. I used k-epsilon first, but I think I'm going to try out k-omega and see what difference that makes.

There is no air above the vertically oriented channel, it's totally filled with water. It is a flow loop.

Martin Hegedus January 28, 2012 14:03

To me, it sounds like you have TWO things you are trying to model. You are trying to model the obstruction and you are trying to model the channel. The obstruction just happens to be in the channel. The Reynolds number of 100,000 is for the channel, since it is based on the hydraulic diameter of the flow channel. That will give you the y+ and turbulence model requirements for the CHANNEL WALLS. However, 100,000 is NOT the Reynolds number for the obstruction. The Reynolds number for the obstruction MUST be based on a representative characteristic of the OBSTRUCTION, for example the maximum width of your metal straps. Or another way to think about it is this: how would you model the obstruction if it was not in the channel?

BTW, I goofed on the entropy statement. The obstruction is adding entropy and enthalpy because of work done by viscosity. I was thinking Euler equations.

Martin Hegedus January 28, 2012 15:55

Have you tried comparing to an experiment without the obstruction?

rks171 January 28, 2012 17:28

Yes, I'm sorry I was not clear about that before. I'm trying to model the entire experimental test section, which includes both the obstructions and the bare flow region. You are correct that Re is based on the hydraulic diameter of the bare region. However, I thought that since the velocity increases in the vicinity of the obstruction and the hydraulic diameter drops only slightly, the Re doesn't drop much, if at all. Certainly, I'm sure the flow is not laminar in the region of the obstruction. The obstruction acts to increase mixing of the fluid in its vicinity. But I guess what it comes down to is: what should my y+ values be for the bare region and the obstruction region? Or how could I find that out? Is it possible that my y+ values of <1 are actually too small?

As for your question about the behavior outside of the obstruction (in the bare region), my calculated wall temperatures for the heated walls are consistently off by about 30 °C, even far away from the obstructions (>15 L/D). For all I know, the obstruction modeling might not even be a problem, or only a very small one. It might mainly be the wall shear and wall heat transfer being incorrectly modeled in the bare region (which is much larger than the region of the obstruction).

Martin Hegedus January 28, 2012 21:59

It is my opinion that on the wall of the channel in the region of the obstruction (assuming the obstruction goes all the way to the wall) and on the obstruction itself you cannot use wall functions. Therefore the y+ must be less than 1.
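For illustration (an editorial addition, not part of the original post), here is a rough Python sketch of the kind of first-cell-height estimate a y+ = 1 target implies, using an empirical flat-plate skin-friction correlation; the bulk velocity, length scale, and water properties are placeholders chosen only to give Re ~ 1e5, not data from the experiment in this thread.

Code:

# Rough estimate of the first-cell height needed for y+ ~ 1.
# All values below are illustrative placeholders, not data from the thread.
rho = 1000.0           # water density [kg/m^3]
nu = 1.0e-6            # kinematic viscosity of water [m^2/s]
U = 1.0                # assumed bulk velocity [m/s] (placeholder)
L = 0.1                # assumed characteristic length [m] (placeholder)
y_plus_target = 1.0

Re = U * L / nu
cf = 0.026 / Re**(1.0 / 7.0)        # empirical flat-plate skin-friction estimate
tau_w = 0.5 * cf * rho * U**2       # wall shear stress [Pa]
u_tau = (tau_w / rho)**0.5          # friction velocity [m/s]
dy = y_plus_target * nu / u_tau     # required first-cell height off the wall [m]

print(f"Re = {Re:.3g}, u_tau = {u_tau:.4f} m/s, first-cell height = {dy:.2e} m")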

akj January 28, 2012 23:37

As far as I know, your y+ values should be between 0 and 5; only then will you get satisfactory results. The reason for this may be the mesh, so I think you have to refine your mesh.

Regards,

Anil

rks171 January 29, 2012 11:19

I've seen other posts where people said y+ should always be above 30; however, I think that's only if you're using a high-y+ wall treatment. I'll check how the y+ differs between the obstructions and the wall of the test section.

The obstruction does touch the wall, but the obstruction is less than 10 mm long whereas the bare section of the flow area is over 200 mm long. I'm running a case using k-omega. I'll see how that works out and report back; however, I'm still using the same wall treatment. I'll also toy with my prism layers and see what that does.

Thanks for all the suggestions. I'll report back with what happens.

Martin Hegedus January 29, 2012 15:49

I'm assuming your obstruction is a grid of metal straps with the wide flat side facing the flow vector, that it is used for mixing, and that the blockage is the 30% you mentioned.

The Cd for a flat plate with the flat side facing into the flow is 1.98, based on its frontal area. The Cd based on a reference area of your cross section is basically 0.6 (i.e. 0.3*1.98). This corresponds to a Cp loss of 0.6. This value is also about the same as the head loss of a thin-plate orifice with a beta of 0.5 (i.e. the square root of 30%).

From the Moody Chart, assuming smooth walls and an Re of 100,000 based on diameter, the friction factor is 0.018.

Therefore it takes 33.3 L/Ds (0.6/0.018) before the head loss from the channel walls is equal to the head loss from the obstruction.
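A quick Python check of the arithmetic above (an editorial addition), using only the values stated in the post: Cd = 1.98, 30% blockage, and a Moody friction factor of 0.018; it reproduces the roughly 33 L/D figure.

Code:

# How many hydraulic diameters of smooth channel it takes for wall
# friction to match the head loss of the obstruction. Values taken
# from the post above.
Cd_flat_plate = 1.98      # face-on flat-plate drag coefficient
blockage = 0.30           # fraction of the cross-section blocked by the straps
f_moody = 0.018           # Moody friction factor, smooth wall, Re ~ 1e5

K_obstruction = blockage * Cd_flat_plate   # loss coefficient referenced to channel area (~0.6)
L_over_D = K_obstruction / f_moody         # wall-friction loss per unit length is f*(L/D)

print(f"K_obstruction ~ {K_obstruction:.2f}")
print(f"wall friction equals obstruction loss after L/D ~ {L_over_D:.1f}")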

I guess I don't understand why your focus is on the walls. Head losses through orifices, and that is what your grid of straps sounds like, are huge. So I guess I totally misunderstand your geometry and/or what you are measuring.

Here is another way to look at it: compare the drag due to friction on your channel walls to the drag of your obstruction. This may help direct your focus.

Martin Hegedus January 29, 2012 16:32

I'm going to be straightforward, and I am sorry if I misunderstand things. And I know I don't understand the big picture. But it sounds like you believe that only the wall friction contributes to the head loss. This is not true. Separation from the obstruction also contributes, and it could contribute a lot.

ogloth January 30, 2012 02:50

The fact that the heat transfer is incorrectly predicted does speak in favour of a problem with the wall treatment.

To be honest, I have zero experience with the so-called all-y+ wall functions, but I have heard other people complain about them. You could try testing your wall function on a very simple flow (e.g. a circular pipe); then try different values of y+ (1, 5, 10, 30, 100, ...) and see how it influences the skin friction.

If the all-y+ wall functions are the problem, you can then go for either a low-Re (y+ approx. 1) or a high-Re (30 < y+ < 300) approach. From what I have read about your flow configuration, high-Re is probably sufficient; I would aim for a y+ of maybe 50.
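As an illustration of the pipe benchmark suggested above (an editorial addition), here is a short Python sketch of one possible set of reference values to compare the wall-function runs against, using the Blasius smooth-pipe correlation; the pipe diameter and bulk velocity are placeholders, not values from this thread.

Code:

# Reference pressure gradient and wall shear for a smooth circular pipe,
# from the Blasius correlation (valid up to Re ~ 1e5). The diameter and
# velocity below are illustrative placeholders.
rho = 1000.0      # water density [kg/m^3]
nu = 1.0e-6       # kinematic viscosity [m^2/s]
D = 0.05          # assumed pipe diameter [m] (placeholder)
U = 2.0           # assumed bulk velocity [m/s] (placeholder)

Re = U * D / nu
f = 0.316 * Re**-0.25                 # Blasius friction factor
dp_dx = f * rho * U**2 / (2.0 * D)    # Darcy-Weisbach pressure gradient [Pa/m]
tau_w = f * rho * U**2 / 8.0          # wall shear stress [Pa]

print(f"Re = {Re:.3g}, f = {f:.4f}")
print(f"expected dp/dx = {dp_dx:.0f} Pa/m, tau_w = {tau_w:.2f} Pa")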

rks171 January 30, 2012 09:42

Of course the form loss is a big contributor to the head loss. I am aware of this, but you do raise a good point. I compared the bare-section pressure drop to the section with the obstruction. A section with the obstruction has a pressure drop about three times larger than a section without it (same lengths). This prompted me to investigate what the culprit is here. So I compared my calculated bare-section pressure drop to the experimental bare-section pressure drop. It is still pretty far off (18% under-predicted). My calculated pressure drop for the bare section + obstruction was under-predicted by ~27%. Just comparing the calculated and experimental pressure drop of the obstruction alone (by subtracting the pressure drop of the bare section from the section with the obstruction), I under-predict the pressure drop of the obstruction by ~8%.

This makes me think that the wall friction definitely has to be predicted wrong. I tried running a case with k-omega, but it failed because I didn't properly select a free-stream edge. That seems like opening up a whole other can of worms right now, so I'm going to put that on hold. I have a case running right now where I added a third prism layer and made the layers thicker to increase y+. I'm also going to try what the other poster said: use a high-y+ wall treatment and greatly increase my y+ values to about 30 or more.

Martin Hegedus January 30, 2012 11:25

What's the L/D for these sections?

Also, I am uncertain about your calculation of the pressure drop percentages.

Definitions:
Experimental values:
dp_eo: delta pressure (head loss) of the section with the obstruction of your experiment.
dp_eb: delta pressure of a bare section of your experiment (a section without an obstruction)

Calculated values
dp_co: delta pressure (head loss) of the section with the obstruction of your calculation.
dp_cb: delta pressure of a bare section of your calculation (a section without an obstruction)

OK, so

dp_eo = 3*dp_eb
-18% = 100*(dp_cb-dp_eb)/dp_eb
-27% = 100*(dp_co-dp_eo)/dp_eo

Therefore the error in the wall friction of your bare section, relative to the obstructed section, is:

100*(dp_cb-dp_eb)/dp_eo = (1/3)*(100*(dp_cb-dp_eb)/dp_eb) = -6%

Therefore the wall friction accounts for only -6% of the experiment and the obstruction alone accounts for -21%. Or is my original understanding wrong?

But, I find the following hard to believe
-18% = 100*(dp_cb-dp_eb)/dp_eo

Since that would mean,

100*(dp_cb-dp_eb)/dp_eb = -54%!!

rks171 January 30, 2012 12:53

This was right...

Quote:

-18% = 100*(dp_cb-dp_eb)/dp_eb
That's how I calculated the error of the calculated bare section pressure drop.

This...

Quote:

-18% = 100*(dp_cb-dp_eb)/dp_eo
...I did not do. I'm not sure what the reasoning would be for dividing by the pressure drop of the bare+obstruction section.

The pressure drop of the bare section was measured. The pressure drop of the obstruction was not "directly" measured. It is calculated by subtracting the bare section pressure loss from the pressure loss of a bare+obstruction section...

dp_eoo = dp_eo - dp_eb

where dp_eoo is the pressure drop across only the obstruction and dp_eo is the pressure drop of both the bare section and the obstruction that is within the section.

The L/D of a section (including the bare portion and the obstruction) is about 23. The L/D of the obstruction is less than 1.

Martin Hegedus January 30, 2012 13:23

Quote:

Originally Posted by rks171 (Post 341963)
...I did not do. I'm not sure what the reasoning would be for dividing by the pressure drop of the bare+obstruction section.

I agree with that! So the error of your obstruction is 21% rather than your claimed 8%.

Proof:
I assume this is how you calculated your overall loss

-27%=100*(dp_co-dp_eo)/dp_eo

and I'm assuming

dp_eo = 3*dp_eb

using your definition dp_eoo = dp_eo - dp_eb and therefore dp_eo=dp_eoo+dp_eb

and creating another definition dp_coo = dp_co - dp_cb and therefore
dp_co = dp_coo+dp_cb

therefore
-27%=
100*(dp_co-dp_eo)/dp_eo =
100*((dp_coo+dp_cb)-(dp_eoo+dp_eb))/dp_eo =
100*((dp_coo-dp_eoo)+(dp_cb-dp_eb))/dp_eo=
100*(dp_coo-dp_eoo)/dp_eo + 100*(dp_cb-dp_eb)/dp_eo=
100*(dp_coo-dp_eoo)/dp_eo + 100*(dp_cb-dp_eb)/(3*dp_eb)=
100*(dp_coo-dp_eoo)/dp_eo + (1/3)*100*(dp_cb-dp_eb)/dp_eb

restating it
100*(dp_coo-dp_eoo)/dp_eo + (1/3)*100*(dp_cb-dp_eb)/dp_eb = -27%
therefore
100*(dp_coo-dp_eoo)/dp_eo = -27% - (1/3)*100*(dp_cb-dp_eb)/dp_eb
therefore
100*(dp_coo-dp_eoo)/dp_eo = -27% - (1/3)*(-18%) = -21%

So,

Your loss due to the obstruction for the obstruction section is -21% and not -8%.
Your loss on the channel walls for the obstruction section is -6%.
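A short Python sketch (an editorial addition) that reproduces the decomposition above numerically, using the numbers as stated in the thread (dp_eo = 3*dp_eb, bare-section error -18%, overall error -27%); the absolute pressure values are arbitrary, since only the ratios matter.

Code:

# Numeric check of the error decomposition above.
dp_eb = 1.0                 # experimental bare-section drop (arbitrary units)
dp_eo = 3.0 * dp_eb         # experimental obstructed-section drop (as assumed above)
dp_eoo = dp_eo - dp_eb      # experimental obstruction-only drop

dp_cb = dp_eb * (1.0 - 0.18)    # calculated bare drop, 18% under-predicted
dp_co = dp_eo * (1.0 - 0.27)    # calculated obstructed-section drop, 27% under-predicted
dp_coo = dp_co - dp_cb          # calculated obstruction-only drop

wall_part = 100.0 * (dp_cb - dp_eb) / dp_eo            # wall contribution to overall error
obstruction_part = 100.0 * (dp_coo - dp_eoo) / dp_eo   # obstruction contribution

print(f"wall contribution        = {wall_part:.1f}%")        # -6.0%
print(f"obstruction contribution = {obstruction_part:.1f}%")  # -21.0%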

rks171 January 30, 2012 14:46

I appreciate your effort in coming up with that proof, but please hear out my methodology. First, I must apologize, because when I wrote that previous post saying that the total section pressure drop was 3 times that of the bare section, I was looking at the wrong pressure transducer. The total section pressure drop is actually only ~1.5 times the bare section.

However, what I did was I looked at the calculated pressure drop over a bare section (dp_cb) and I looked at the calculated pressure drop over a section having the bare portion and the obstruction (dp_cbo). I calculated the pressure drop of the obstruction by taking dp_cbo - dp_cb. I did the same for the experimental results (having a bare section measurement, dp_eb, and a bare + obstruction section, dp_ebo). The pressure drop of the obstruction is dp_ebo - dp_eb. Comparing the obstruction pressure drop for experimental and calculated, in this manner, gave me a difference of about 6%. Comparing the bare pressure drop for experimental and calculated gave me a difference of about 18%.

I think, at the least, it's very clear that there is something wrong with the modeling of the bare region, as the difference between experiment and CFD that I'm seeing there is an apples-to-apples comparison and the difference is quite large. I have an additional case submitted using a thicker prism mesh layer to increase the y+ values (and using the high-y+ wall treatment). It is still waiting to start. I think I will have results tomorrow. I will post back with the effects of playing with the prism layer and the wall treatment.

Martin Hegedus January 30, 2012 16:11

Yes, please do post back! This is interesting.

LOL, I'm a man of pictures, equations, and plots. This painting with words is just not my cup of tea! :)

Just for kicks, I've attached the picture in my head. Sorry for the horrible penmanship! Too many computers and spell checkers in my life.

rks171 January 30, 2012 16:33

The drawing you posted is a little off. The 'obstruction zone' that I was referring to is the area in the direct vicinity of the obstruction (red rectangle in the picture below). Everything else would be the bare flow area (blue area). The pressure measurement is made over bare section + obstruction. Note that there is also a pressure measurement made simply over a bare section (not shown), which is what I was using to determine how accurate the prediction of bare region pressure loss is (which should have only frictional loss, since there are no flow blockages present in that section).

http://i898.photobucket.com/albums/a..._schematic.jpg

As I believe I mentioned before, there was a case that was unheated and a case that was heated. Pressure prediction was bad in both cases. The walls aren't perfectly adiabatic (close enough though); however, for the unheated case that wouldn't make a difference anyway.

Martin Hegedus January 30, 2012 17:29

Thanks for the clarification.

Quote:

Originally Posted by rks171 (Post 341998)
As I believe I mentioned before, there was a case that was unheated and a case that was heated. Pressure prediction was bad in both cases. The walls aren't perfectly adiabatic (close enough though); however, for the unheated case that wouldn't make a difference anyway.

Yup, that was understood. That's why I focused on what goes into the pressure. Plus, the energy equation is not used for your incompressible unheated case.

For the picture I showed, and yours, there is a good chance of a recirculation region both in front of and behind the obstruction. This will create low (flow stagnating) or negative (recirculation region) skin friction on the channel wall. You'll know by looking at the velocity contour plots.

I didn't think wall functions were good at this. But I don't know the ins and outs either.

I believe that, to capture this, you will need to go with, at a minimum, SA or SST. That's why I'm saying y+ < 1. That's also why I'm saying you need to be careful about capturing the obstruction and the channel region around it. I'm not sure what a "free stream edge" is, but that's probably because I'm an external aerodynamicist. SST may give you better results than SA, but SA is easy to use, at least from my perspective. Though SST isn't that much harder; it's more of an issue of CPU time. But SA has only one variable, eddy viscosity (or its form of it), so passing eddy viscosity from one turbulence model to SA might be easy. Not sure about SA to other turbulence models. And I'm not sure how you are building up the velocity and eddy viscosity profiles for the input to your channel. Finally, maybe "wall functions" means something different to me than it does to you.

Discussion about k-epsilon and k-omega:

http://www.cfd-online.com/Forums/mai...ga-models.html

Good Luck!

rks171 January 31, 2012 10:01

Thanks for the link. The first case that I ran, where I added a third prism layer, didn't solve nicely at all. After 2E6 CPU seconds, the solution only reached 230 iterations and crashed due to insufficient memory. It seems odd to me that simply adding another prism layer would cause such strange behavior (the total cell count is 6.7 million vs. 6.2 million the first time I ran this case (which was successful)). I can't remember changing anything else besides the mesh. Anyways, I'm still waiting to get results from the unheated case using the thicker prism layers (higher y+ values and high y+ wall treatment).

sail January 31, 2012 16:29

Quote:

Originally Posted by rks171 (Post 342104)
Thanks for the link. The first case that I ran, where I added a third prism layer, didn't solve nicely at all. After 2E6 CPU seconds, the solution only reached 230 iterations and crashed due to insufficient memory. It seems odd to me that simply adding another prism layer would cause such strange behavior (the total cell count is 6.7 million vs. 6.2 million the first time I ran this case (which was successful)). I can't remember changing anything else besides the mesh. Anyways, I'm still waiting to get results from the unheated case using the thicker prism layers (higher y+ values and high y+ wall treatment).

I might have lost something in the thread, but are you running a fully resolved boundary layer simulation (y+ < 1) with just three cells in the BL?

I'd say that 3 cells are not enough even when wall functions are used, but with a low-Re turbulence model you'll need at least 20 cells in the BL.

Could you please post, for the sake of clarity, a screenshot of your BL mesh near the wall of the pipe?

rks171 January 31, 2012 16:50

As far as I know, no, I'm not fully resolving the boundary layer. I selected the all-y+ wall treatment; it's a wall function. And the total thickness of those prism layers (originally 2) was about 2.5E-4 m, I believe, with the near-wall layer being about 1E-4 m. I have another case waiting where I increased that thickness.

rks171 March 19, 2012 17:03

Sorry for taking so long to reply back to this thread, but I got tied up in some other work and never ended up getting to that thicker wall prism layer case until now. To sum up the overall results I'm looking at so far:

1. I ran 3 cases: a base case with 2 prism layers (total layer thickness 2.5e-4 m), a refined case with 3 prism layers (total thickness 1.9e-4 m), and a thicker case with 2 prism layers (total thickness 5e-4 m).
2. No significant difference in the prediction of pressure loss was seen between any of the three cases.
3. Cross-section-averaged velocities at different locations in the test section were consistently and significantly better predicted using the refined mesh.

So it doesn't appear my meshing is affecting the pressure drop. Maybe it comes down to the turbulence model.

USU_CFDer March 23, 2012 17:49

Some thoughts on your problem

I am currently working on my Master's degree doing simulations on a similar setup, though it sounds like for a totally different application. Also, it sounds like you are using Star-CCM+ as your solver.

To clarify a little on the wall functions: the all-y+ wall treatment is supposed to determine, based on y+, whether a wall function is necessary or not. That is, you should be able to have BL cells with a y+ that is valid either with or without wall functions, and the solver will determine the appropriate handling. My personal experience, however, is that it is not very successful at doing this. It works better if you plan out your simulation to be either a fully resolved BL (y+ ~1 everywhere) or wall functions (y+ ~30-100 everywhere). In practice, either of these things can be very difficult to accomplish; that is, you will likely have some BL cells that are not appropriate for wall functions or for a fully resolved boundary layer solution. In those cases it has been my experience that the all-y+ model works well.

My recommendation would be to generate a mesh that fully resolves your boundary layer everywhere on the channel and the obstruction (average y+ ~1) and select the all-y+ model. When you do this you will need to pay close attention to cell aspect ratio; try to keep it >= 0.1 as much as possible. Keep in mind, though, that much larger than that and you are probably wasting cells. From my experience, the trimmer meshing model provides the easiest controls for generating these meshes, and trimmer meshes tend to solve fast as a bonus. Also, if you make a mesh like this you will need at least 12 BL cells.
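For illustration (an editorial addition), here is a small Python sketch of how a geometrically growing 12-layer prism stack sizes out; the first-cell height and growth ratio below are placeholder values, e.g. a first-cell height taken from a y+ ~ 1 estimate.

Code:

# Total thickness of a geometrically growing prism-layer stack.
# The first-cell height and growth ratio are illustrative placeholders.
first_cell = 2.0e-5     # first-cell height [m] (e.g. from a y+ ~ 1 estimate)
growth = 1.3            # cell-to-cell growth ratio (placeholder)
n_layers = 12           # number of prism layers, as suggested above

heights = [first_cell * growth**i for i in range(n_layers)]
total_thickness = sum(heights)

print("layer heights [mm]:", [round(h * 1e3, 4) for h in heights])
print(f"total prism-layer thickness = {total_thickness * 1e3:.2f} mm")
print(f"outermost layer height = {heights[-1] * 1e3:.3f} mm")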

I would agree with you that your wall treatment is the source of your problems, and that it is likely your mesh that is the issue, as this is the biggest factor in how well a boundary layer is simulated.

I was having similar problems with my work and these practices have greatly improved the quality of my results.

USU_CFDer March 23, 2012 17:56

Also, I forgot to ask: what level of convergence are you getting in your residuals?

rks171 March 23, 2012 19:03

I'm going to have to get a lot better at making my meshes. Having 12 prism layers to resolve the BL is just not going to work for my model; it would be way too computationally expensive. Converging the models I have now takes about 7 million CPU-seconds. I tried to jack up the y+ values by increasing the prism layer thicknesses, but even after upping the thickness to a ridiculous amount, I'm still left with a ton of y+ values below 1. I rationalize that it's due to the very low velocities in some regions of my models where there are crevices, and due to the definition of the y+ parameter. So I don't know how else I can increase y+.

I finished compiling my results recently and I ended up with 3 different ways to model the same heated test:

1. a base mesh of 7 million cells
2. a refined mesh of 15 million cells (added a prism layer to make 3 layers total and reduced the thickness of them to get more refinement near the wall)
3. the same mesh as #1 but using standard k-epsilon instead of realizable k-epsilon

Results show that using configuration 2 didn't change the prediction of the pressure drop, nor did it improve the prediction of the wall temperature. But the temperature gradient throughout the channel cross-section became more uniform than in configuration 1.

Using standard k-e, though, had a significant impact on pressure drop prediction, improving it by 10%. Also, it greatly improved prediction of wall temperatures and led to a much more uniform temperature gradient.

The residuals for my cases look pretty decent. The better indicators, I think, are the inlet/outlet mass balance, channel velocities with respect to iteration, and channel temperatures with respect to iteration. They all settle to constant values, indicating convergence.

USU_CFDer March 24, 2012 16:02

I would have to take a slightly different position than you on what your three cases mean. Your first two sound to me like they handle the boundary region almost identically: two or three prism layers are almost the same, and yes, you refined them, but not in a manner that would make their handling of the near-wall region significantly different. Thus it makes sense to me that the results were similar.

As far as changing the turbulence model from realizable to standard k-epsilon, I believe this actually supports the idea that it is your BL mesh. My understanding (and I didn't take the time to verify this) is that the difference between the two models is entirely in how the near-wall region is handled. This tells me simply that, for your meshing strategy, the standard k-epsilon model is better suited to your boundary layers. The question then becomes: are the results you are getting sufficiently accurate for your application? Only you can answer this, but if they are not, I am almost positive that it is only by significantly improving your mesh that you can get truly accurate results.

A few ideas that I have learned along the way are

1) When in doubt, always assume the mesh is your problem. This is the single most influential aspect of CFD and it has critical implications for accuracy.

2) Boundaries are everything. If you think about it, almost all CFD is 95% identical: you start out with the same set of equations. The 5% that is different is the boundary conditions (and sometimes the initial conditions); these conditions contain all of the information that distinguishes flow over an airfoil from blood flow in an artery (aside from constitutive models). Boundary condition information is the solution. If you are only moderately modeling your boundary conditions, nothing you do will give you better than moderate results. And for the most part, the success of modeling the boundary at a no-slip wall depends entirely on the mesh.

I just don't think you can ignore the need to resolve your BL; I am speaking from experience, as my own research closely mimics yours. I am studying vortex generation as a means of increasing mixing in straight channels. Until I buckled down on my BLs I wasn't getting very good results.

Now, that being said, I was surprised that by adding one prism layer and refining the cell size you went from 7 to 15 million cells. I don't think that is necessary to get a really refined BL. True, you will increase the mesh size, but you should be able to do it a lot more reasonably. If you are using Star, then set your reference values for your prism layer boundary mesh separately from those of the region. The prism mesh should be made based on absolute numbers and not relative to the base size. Also, at a very minimum, your obstruction and your channel walls need BL meshes designed specifically for them; you cannot make your prism layers for both with the same values. If you change the target surface size on your walls and the growth rate from the walls, you can likely have a well refined BL without making your channel cells too small as well. Anyway, just some thoughts. If your results are already good enough then all this is academic, but if not, I am almost positive that your mesh is what will make all the difference.

One final thought is that for confidence your residuals really all need to be below 1e-3.

Sorry about the length, just some thoughts.

rks171 March 24, 2012 17:55

Thank you for the information. I agree that the mesh is a likely culprit considering the problems that I had with just obtaining convergence that were solved by mesh modification. At this point, I'm just not really sure how to model my near wall region any differently to expect any different wall drag and heat transfer results.

Quote:

I just don't think you can ignore the need to resolve your BL
With people saying that I'm going to need something like 12 prism layers to resolve the boundary layer, I just don't think that's going to be feasible for my model. I simply do not have access to the computational resources for that kind of mesh. Therefore, I need to rely on wall functions and on optimizing my mesh to work with them.

Quote:

Now, that being said, I was surprised that by adding one prism layer and refining the cell size you went from 7 to 15 million cells.
I didn't only make the prism layer base size smaller, but I also reduced cell sizes throughout the entire mesh. This led to better predictions of velocity. Also, more energy was pulled away from the near-wall region, which made for a more uniform channel temperature gradient.

Quote:

If you are using Star, then set your reference values for your prism layer boundary mesh separately from those of the region. The prism mesh should be made based on absolute numbers and not relative to the base size. Also, at a very minimum, your obstruction and your channel walls need BL meshes designed specifically for them; you cannot make your prism layers for both with the same values.
Yes, I do all of this already. In particular, I use a more refined mesh in the obstruction region. But thank you for pointing these out.

Quote:

One final thought is that for confidence your residuals really all need to be below 1e-3.
This last point I'm really not so sure about. I've read in several places on here and in the Star-CCM+ manual that the actual values of the residuals aren't necessarily important and that you can still obtain convergence without having the residuals drop to some specific value, because the residuals are normalized. This is why I also watched actual flow field parameters and made sure they were settling into consistent values (when they should be behaving like that).

USU_CFDer March 24, 2012 19:33

Two things. First, it really should be possible to generate a mesh that completely resolves the BL and is only 30-35% larger than your current meshes, which, while larger, shouldn't be completely prohibitive. That was going from no prism layer to 12 prism cells; when I switched from trying to have a y+ of about 30 to a y+ of 1, it was an increase of this order. I really think you need to play around with the meshing tools until you get a good balance; having been there, I know it is possible. Also, I would sacrifice some of the resolution in the domain in order to get better resolution at the wall if I had to.

In regards to the residuals, I have heard those arguments before; in fact, I will be joining CD-adapco as an application engineer in about a month. I know that they are big on engineering parameters as a means of judging convergence. In my opinion you have to use both engineering parameters and residuals. You may already be aware of this, but residuals are not simply the normalized change in the solution from iteration to iteration; they are in fact a normalized representation of how completely the solution satisfies the governing equations. This is of critical importance because the governing equations are the only reason we have for attempting to model these phenomena at all.

The bulk of Star users are on the industrial side of things and tend to be satisfied with CFD that is moderately accurate; after all, this is often all they require. Thus they focus more on the flow parameter of interest and don't bother with the rigor required to converge the residuals. However, on the academic side this is not the case, and if it is your intention to publish your research, I think you will find that journal reviewers care very much about the residuals. The number I stated is based on the minimum requirements for publication in the Journal of Fluids Engineering and is only one of several requirements (they also have guidelines regarding the flow parameter of interest). Until your residuals truly are converged satisfactorily, I don't think you should expect much more accurate results. In industry, ±10% is often acceptable, but it sounds like for you it is not.

If I am just repeating information to you that you already know I apologize. Just trying to be helpful.

rks171 March 25, 2012 15:11

I was not aware that it would be possible to fully resolve my BL with so few cells. It's something I'm going to play around with in the future. I won't have time to do that for a while, but it's something I will consider when I get back to this. One thing that confuses me is: what percentage of my prism layer cells should have a y+ value at my target value? I don't see how it is possible for every cell to have a y+ value of, say, 30, because the prism layers need to shrink in many areas where surfaces get close together. Also, the velocity is just very low in some regions, which causes y+ to drop to values like 0.01 even when I'm targeting y+ = 30.

I do keep the residuals in my mind when judging convergence. In fact, it's the very first thing I look for. And if they're oscillating a lot, even if my engineering parameters look reasonable, I'll try out something different with the mesh or my model to fix that behavior. It's just that I wasn't requiring all residuals to drop below a specified value so long as my engineering parameters look good.

Thanks for all of your advice. I will keep these things in mind when I'm meshing up more cases.

