DPM turbulent dispersion
I'm modelling a ladle with a DPM injection to simulate Argon stirring from the bottom.
The problem is that the DPM particle concentration is slightly too high in the central bottom region of the plume, which over-predicts the central plume velocity. These larger velocities cause larger velocity gradients and therefore greater turbulence generation (Gk).
It seems that this over-predicted turbulence generation causes a sharper increase in epsilon than in k, probably because the error in Gk is amplified by a greater-than-unity constant (C1e) in the epsilon equation.
This leads to another problem: the turbulent dispersion of DPM particles is calculated from the time scale T_L = 0.15 k/epsilon. Since epsilon is over-predicted more than k, this time scale decreases, narrowing the plume. The narrower, more concentrated plume causes an even greater over-prediction of the central plume velocity, and the vicious cycle starts all over again.
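To make the feedback concrete, here is a minimal standalone sketch (not Fluent code) of how the dispersion time scale shrinks when epsilon is over-predicted more than k. The k, epsilon, and error-ratio values are made up purely for illustration:

```python
# Illustrative sketch: the Lagrangian integral time scale used for DPM
# turbulent dispersion in k-epsilon models is T_L = C_L * k / epsilon.
C_L = 0.15  # default constant for k-epsilon models

def lagrangian_time_scale(k, eps, c_l=C_L):
    """Integral time scale seen by a particle in the dispersion model."""
    return c_l * k / eps

# Hypothetical nominal plume values (made up for illustration)
k_ref, eps_ref = 0.05, 0.10   # m^2/s^2, m^2/s^3
t_ref = lagrangian_time_scale(k_ref, eps_ref)

# Suppose the Gk over-prediction raises epsilon by 50% but k by only 20%
t_over = lagrangian_time_scale(1.2 * k_ref, 1.5 * eps_ref)

print(f"T_L nominal:        {t_ref:.4f} s")   # 0.0750 s
print(f"T_L over-predicted: {t_over:.4f} s")  # 0.0600 s -> narrower plume
```

A smaller T_L means each eddy interaction disperses the particle for a shorter time, so the plume narrows, concentrating the momentum source and closing the feedback loop described above.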
I've tried to correct this by simply changing the epsilon production constant C1e from 1.44 to 1.38. It seems to work well in the SKE model for predicting the correct plume diameter, but I'm not sure about the implications of simply changing the model constants. The RNG k-e model and the RKE model require much larger changes in their model constants, and that just feels wrong.
Any help would be much appreciated!