CFD Online Discussion Forums (https://www.cfd-online.com/Forums/)
-   CFX (https://www.cfd-online.com/Forums/cfx/)
-   -   Detonation, Unburnt Abs.Temperature and difference between BVM and ECFM model (https://www.cfd-online.com/Forums/cfx/213047-detonation-unburnt-abs-temperature-difference-between-bvm-ecfm-model.html)

 Viento December 12, 2018 22:58

Detonation, Unburnt Abs.Temperature and difference between BVM and ECFM model

Hello. I am working on a detonation wave simulation in premixed kerosene-air mixtures.
First of all, I studied publications like this one (https://www.sciencedirect.com/scienc...40748916302802) and derived an ignition delay correlation τ(P,T) that closely matched theory. I used this correlation in the BVM with the autoignition model. Importantly, as the reference temperature I chose not the mean temperature of the mixture but the temperature of the unburnt fraction. (Otherwise any laminar flame immediately transitions to detonation, since the flame front contains local mean temperatures over 2000 K. But that is an average temperature; I assume the reacted molecules heat the unreacted ones only slowly.)
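
A correlation of this kind is commonly written in Arrhenius form, τ = A·P^(-n)·exp(Ea/(R·T)). A minimal sketch of evaluating it on the unburnt-gas temperature, as argued above; the constants here are illustrative placeholders, not the fitted kerosene-air values from the cited paper:

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def ignition_delay(P_atm, T_unburnt, A=1e-9, n=1.0, Ea=150e3):
    """Arrhenius-type ignition delay (seconds) for pressure in atm and
    the UNBURNT-gas temperature in K (the reference temperature the
    post argues for, rather than the cell-mean temperature).
    A, n, Ea are placeholder constants for illustration only."""
    return A * P_atm ** (-n) * math.exp(Ea / (R * T_unburnt))
```

The exponential makes the delay extremely sensitive to the choice of reference temperature, which is why using the cell-mean temperature (containing hot burnt gas) triggers spurious autoignition.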

So I got a flat detonation wave with characteristics (velocity, pressure, temperature) very close to the known data. Results with the ECFM and the BVM were very similar.
Then I found the following. Adiabatic compression to 30-50 atm heats the mixture from 300 K only to 800-900 K. But τ(50 atm, 900 K) = 1 ms, whereas the ignition time inside a detonation wave should be several microseconds. That implies much higher temperatures, 1500-1600 K. Therefore there must be some additional heating, supplied by incomplete combustion in a "thin supersonic flame" that exists on top of the autoignition wave.
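
The adiabatic-compression estimate above can be checked with the isentropic relation T2 = T1·(P2/P1)^((γ-1)/γ); a quick sketch, assuming γ = 1.4 for the unburnt mixture:

```python
gamma = 1.4  # assumed ratio of specific heats for the unburnt mixture

def isentropic_T(T1, pressure_ratio):
    """Temperature after isentropic compression from T1 (K) by the
    given pressure ratio."""
    return T1 * pressure_ratio ** ((gamma - 1.0) / gamma)

# isentropic_T(300, 30) ≈ 793 K, isentropic_T(300, 50) ≈ 917 K,
# consistent with the 800-900 K range quoted in the post, and well
# below the ~1500-1600 K needed for microsecond ignition delays.
```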

Then I discovered two weird things:
1. Detonation with the BVM is significantly less stable than with the ECFM. A small expansion is enough for it to die out, as if the critical diameter were several meters rather than the known 5 cm. I do not know why this is so. Perhaps the Unburnt Absolute Temperature variable is calculated incorrectly (in some simulations it reaches strange values like 1 K in the expansion wave behind the reaction). Perhaps the turbulent speed of the "thin supersonic flame" is calculated incorrectly. Or perhaps the method itself, taking the Unburnt Absolute Temperature as the reference temperature, is incorrect. With the ECFM everything works well.
2. But with the ECFM everything works too well! If I reproduce the structure of the detonation wave and then completely turn off the autoignition model, the simulated detonation wave keeps propagating and gives exactly the same results. This is a bit confusing. (Perhaps I don't fully understand the logic of the ECFM, but can it predict the ignition time by itself?)

So, questions:
1. Which of these should I trust?
2. What is the actual "Unburnt Absolute Temperature" variable calculation formula for BVM and ECFM?

 ghorrocks December 13, 2018 04:16

I know little about combustion/explosion modelling so cannot help you there.

But I can offer a little general CFD advice: do not compare one model to another, or draw conclusions about what is going on, until you have validated your model. Do the basic checks that you are adequately converged and that your mesh and time steps are fine enough. Check all of these with a sensitivity analysis.
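
One common way to quantify the mesh-sensitivity check described above is Richardson extrapolation over three systematically refined meshes. A minimal sketch, with purely illustrative values; `f1` is the finest-mesh result and `r` the refinement ratio:

```python
import math

def observed_order(f1, f2, f3, r=2.0):
    """Observed order of accuracy from results on fine (f1),
    medium (f2) and coarse (f3) meshes with refinement ratio r."""
    return math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)

def richardson_extrapolate(f1, f2, p, r=2.0):
    """Mesh-independent estimate from the two finest solutions,
    given the observed order p."""
    return f1 + (f1 - f2) / (r ** p - 1.0)
```

If the observed order is close to the scheme's formal order and the extrapolated value is close to `f1`, the fine mesh is adequate; large gaps mean the solution is still mesh-dependent.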

Looking at the results of an under-converged or poorly resolved simulation is meaningless; it can tell you anything.

 Viento December 13, 2018 23:28

1 Attachment(s)
The convergence is good enough, I think. The target of 1e-4 is easily achieved.
I used the following rule of proportionality between mesh size and timestep: at the expected speed of the transient process, several timesteps should be required to cross one mesh element. This is quite simple for the detonation of an air-kerosene mixture, since its approximate speed is known in advance.
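
The timestep rule described above can be sketched as follows; the detonation speed of roughly 1800 m/s is an assumed illustrative value, not taken from the post:

```python
def timestep(dx, wave_speed, steps_per_cell=4):
    """Timestep (s) so the wave crosses one mesh element of size
    dx (m) in `steps_per_cell` timesteps."""
    return dx / (wave_speed * steps_per_cell)

# e.g. dx = 1 mm, assumed detonation speed 1800 m/s:
# timestep(1e-3, 1800) ≈ 1.39e-7 s
```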

 ghorrocks December 14, 2018 17:49

Quote:
 The convergence is good enough, I think.
You think? Does that mean you are guessing? If you have not done a sensitivity analysis to check, then you are just guessing and you could be completely wrong. Do a sensitivity analysis to make sure your convergence is OK.

Your comment on time step implies you used a Courant-number-like parameter to set the time step. Are you aware that for an implicit solver, the Courant number is of limited use in setting the time step size? Again, a sensitivity analysis is the way to set the time step size for an implicit solver like CFX.
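
The timestep sensitivity analysis recommended here amounts to rerunning with successively halved timesteps until a key result stops changing. A minimal sketch; `run_case` is a hypothetical stand-in for launching a run and extracting the metric of interest (e.g. the detonation speed):

```python
def timestep_sensitivity(run_case, dt0, tol=0.01, max_halvings=5):
    """Halve the timestep until the result changes by less than
    `tol` (relative), i.e. is timestep-converged."""
    dt, prev = dt0, run_case(dt0)
    for _ in range(max_halvings):
        dt /= 2.0
        cur = run_case(dt)
        if abs(cur - prev) <= tol * abs(prev):
            return dt, cur  # converged with respect to timestep
        prev = cur
    raise RuntimeError("not timestep-converged; refine further")
```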

I apologise for being pedantic about these things, but looking at the results of an inadequately converged simulation with poor time resolution is pointless, as the results are rubbish. It is very common to find that unexpected results are just numerical errors from a poor simulation setup.
