Entropy generation rate vs. system pressure drop
I ran an internal flow simulation (steady state, SST k-omega, no heat transfer) and it's time for post-processing.
I'd like to identify the regions that generate the most losses, and I thought of calculating the local entropy generation rate from the mean velocity gradient components.
My problem is: I'm not sure whether the turbulent contribution to the dissipation is actually captured when this technique is used with a k-omega model, since the mean gradients only account for the direct (mean-flow) part.
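For isothermal, incompressible RANS the usual approach (e.g. Kock & Herwig) splits the entropy generation into a direct part from the mean strain rate and a turbulent part from the modeled dissipation, which for SST k-omega can be approximated as eps ≈ beta* k omega. A minimal sketch of that post-processing, assuming you can export per-cell mean velocity gradients, k, and omega as NumPy arrays (all names below are hypothetical placeholders, not solver API):

```python
import numpy as np

def entropy_generation(grad_u, k, omega, mu, rho, T, beta_star=0.09):
    """Per-cell volumetric entropy generation rate [W/(m^3 K)].

    grad_u : (n, 3, 3) array of mean velocity gradients dU_i/dx_j
    k      : (n,) turbulent kinetic energy [m^2/s^2]
    omega  : (n,) specific dissipation rate [1/s]
    mu     : dynamic viscosity [Pa s]
    rho    : density [kg/m^3]
    T      : (uniform) temperature [K]
    """
    # Mean strain-rate tensor S_ij = 0.5 * (dU_i/dx_j + dU_j/dx_i)
    S = 0.5 * (grad_u + np.transpose(grad_u, (0, 2, 1)))
    # Direct (mean-flow) viscous dissipation: Phi_mean = 2 mu S_ij S_ij
    phi_mean = 2.0 * mu * np.einsum('nij,nij->n', S, S)
    # Turbulent dissipation reconstructed from the model: eps ~= beta* k omega
    phi_turb = rho * beta_star * k * omega
    # Entropy generation rate (isothermal flow): (Phi_mean + Phi_turb) / T
    return (phi_mean + phi_turb) / T
```

If you use only the first term, you are indeed missing the turbulent part, which in most internal flows dominates the losses; that alone can explain a large gap against the measured total pressure drop.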
If I compare these two quantities (the total pressure drop across the system and the integrated entropy generation rate), I would expect them to be consistent. They don't seem to be in my case (not even close!). Is it because I calculate the entropy generation using only the averaged velocity components?
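For isothermal, incompressible flow the link is a mechanical energy balance: the power lost to irreversibility, Q * Delta_p_total (volumetric flow rate times total-pressure drop), should equal T times the volume integral of the entropy generation rate. A hedged sketch of that consistency check (variable names are assumptions, not solver output):

```python
import numpy as np

def loss_balance(s_gen, cell_vol, T, dp_total, Q):
    """Compare the two loss measures for isothermal, incompressible flow.

    s_gen    : (n,) entropy generation rate per cell [W/(m^3 K)]
    cell_vol : (n,) cell volumes [m^3]
    T        : temperature [K]
    dp_total : mass-flow-averaged total-pressure drop inlet->outlet [Pa]
    Q        : volumetric flow rate [m^3/s]
    Returns (W_entropy, W_pressure), both in watts; they should match.
    """
    W_entropy = T * np.sum(s_gen * cell_vol)  # T * integral of S'''_gen dV
    W_pressure = dp_total * Q                 # lost mechanical power
    return W_entropy, W_pressure
```

If W_entropy comes out much smaller than W_pressure, the missing turbulent dissipation term is the usual suspect; also make sure dp_total is the mass-flow-averaged *total* pressure drop, not the static one.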
Could anyone clarify the link between total pressure drop and entropy generation rate? Any suggestions for post-processing internal flows?
Thanks in advance!