By Brian Spalding of Concentration, Heat and Momentum, Ltd
Presentation at the CODE Annual Seminar in Teraelahti, Finland, 3-4 October 2001
This proved to be both computationally expensive and intellectually demanding.
The guessed-PDF practices therefore continued.
The advent of the gas turbine and the development of rockets during the 1940s and 1950s stimulated much research on combustion; and the simultaneous development of digital computers enabled quantitative models of laminar-flow phenomena to be created.
For example, one-dimensional flame propagation through pre-mixed gases became well understood as early as the 1950s [Spalding, 1955]; and, once the appropriate chemical-kinetic and transport-property data had been gathered, numerical predictions fitted experimental data rather well.
However, experiments on turbulent pre-mixed flames showed effects for which there were no explanations. For example, Williams et al. showed that the speed of propagation of a baffle-stabilized flame, confined in a duct, decreased when the initial temperature was raised; and that it was almost independent of the chemical composition of the gases.
This behaviour was so unlike that of laminar flames that a new hypothesis had to be devised for its explanation, namely the "Eddy-Break-Up Hypothesis" (EBU) [Spalding, 1971a].
In modern terms, EBU can be regarded as a "two-fluid" model; for it postulated:
that the turbulent mixture consists of intermingling fragments of unburned gas and of fully-burned gas; and
that the rate of combustion is governed, not by chemical kinetics, but by the rate at which turbulence breaks the unburned fragments down into smaller ones.
Despite its simplicity, and its disregard of chemical-kinetic influences, EBU proved to be largely successful. It is still in widespread use.
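In later k-epsilon-based practice, the EBU rate has commonly been quoted in a form such as the following (a sketch only; the notation is assumed here, C_EBU is an empirical constant, and the precise grouping varies between authors):

\bar R_u \;=\; C_{\rm EBU}\,\bar\rho\,\frac{\varepsilon}{k}\,\tilde m_u

where \tilde m_u is the mean mass fraction of unburned gas. The mean reaction rate is thus proportional to the reciprocal eddy-break-up time \varepsilon/k, and not to any chemical-kinetic rate.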
1.2 Turbulent diffusion flames; the fluctuations-transport model
Laminar diffusion flames, i.e. those in which the supplies of fuel and oxygen are provided by separate streams, were well understood in the early 1970s.
Attempts were made to fit turbulent diffusion flames into the same theoretical framework by supposing, as did Boussinesq, that the turbulence enlarged the effective diffusion coefficient of the gases; but these were not completely successful.
The reason was clearly shown by the experiments of Hawthorne et al., which revealed what they called "unmixedness".
This meant that flames were visibly much longer than the effective-diffusion-coefficient approach could explain.
To fit the experimental data, it proved necessary once again to invent a new hypothesis, namely that the gas at any point consisted of intermingling fragments having greater and smaller fuel-air ratios than the local mean value.
Then the root-mean-square value of fuel-air ratio differences was computed from a "fluctuations-transport" equation of the type used in the then-popular hydrodynamic models of turbulence [Spalding, 1971b].
In its original form, this model, which is referred to as the FTM below, can be seen as being simultaneously a fluctuations-transport model and a two-fluid model; for it was supposed that, at any point, the fuel-air ratio could have one or other of only two values.
Within each fluid, the gases were regarded as being in chemical equilibrium. Once again, therefore, the influence of finite chemical reaction rates could not be accounted for.
The fluctuations-transport equation is still in widespread use, albeit in conjunction with more elaborate guesses about the shape of the PDF.
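For reference, the fluctuations-transport equation takes the form of a standard scalar-transport equation with production and dissipation terms. In the k-epsilon notation that later became usual (a sketch; constants and diffusion coefficients vary between implementations), it reads, for g, the mean-square fluctuation of mixture fraction f:

\frac{\partial(\bar\rho g)}{\partial t} + \nabla\cdot(\bar\rho\,\tilde{\mathbf u}\,g) \;=\; \nabla\cdot\Big(\frac{\mu_t}{\sigma_g}\nabla g\Big) \;+\; C_{g1}\,\mu_t\,|\nabla\tilde f|^2 \;-\; C_{g2}\,\bar\rho\,\frac{\varepsilon}{k}\,g

The root-mean-square fluctuation referred to above is then g^{1/2}.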
1.3 The eddy-dissipation concept (EDC)
Magnussen and Hjertager proposed a model which, in some respects, bridged the gap between EBU and FTM, and allowed chemical-kinetic limitations to have an effect.
It was again a two-fluid model, in that the state of the gas at any location was supposed to jump between two conditions; but these were:
fluid a, the bulk of the gas, in which no reaction takes place; and
fluid b, the "fine structures" in which the turbulence energy is dissipated, and in which the chemical reaction was supposed to occur.
Moreover, necessarily, the volume fraction of fluid b was supposed to be much less than unity.
Further assumptions were made about the rates of heat and mass transfer between the two fluids, the details of which the present author will not presume to summarise.
For the purposes of the present lecture it suffices to emphasise that EDC, and its later variants, allow no more than two states of fluid to co-exist at the same location.
1.4 The full two-fluid model (F2FM)
A more complete account was provided, much later [Spalding, 1983], of how finite chemical reaction rates could be accounted for. This was achieved by utilising the so-called IPSA procedure that had been developed for two-phase flows, such as steam and water [Spalding, 1980].
This model was applied to both steady and unsteady flames.
Whereas the EBU and FTM models were adopted swiftly by CFD specialists, this was not the case with the full two-fluid model (F2FM).
Probably the reason was that F2FM introduced too many novelties at the same time; for example, the two fluids were allowed to possess not only different fuel-air ratios and degrees of reactedness but also different velocity components.
Another was perhaps that not many specialists possessed the means, at that time, of solving more than one set of Navier-Stokes equations simultaneously.
Finally, EBU and FTM appeared to many to be "good enough" for practical purposes, a view which (strangely) can be encountered even today.
Nevertheless, phenomena could be predicted by the F2FM which are still outside the scope of all the popular turbulence models, for example "un-mixing".
1.5 The four-fluid model (4FM)
In 1995, a more modest step was proposed for improving on EBU (and other two-fluid models): the number of fluids was increased from two to four; and differences of velocity between them were not allowed [Spalding, 1995a].
This development enabled finite chemical reaction rates to be accounted for.
It was used successfully for simulating both steady and unsteady flames, namely:
the Williams turbulent flame confined in a duct, and
an explosion on an off-shore oil platform [Freeman and Spalding, 1997].
1.6 The fourteen-fluid model (14FM)
Like EBU, 4FM handled pre-mixed gases only. When variations of both fuel-air ratio and reactedness were to be handled simultaneously, the minimum number of fluids needed to provide at least qualitative realism was 14.
This was used to simulate a turbulent Bunsen-burner flame [Spalding, 1995b], and so to compute the contours of concentration of individual fluids, and the PDFs at various locations.
It should be noted that a two-dimensional PDF was involved in this model; its dimensions were the fuel-air ratio and the reactedness.
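In discretised form (the notation here is assumed, not the lecture's own), the fluids may be indexed by a mixture-fraction interval i and a reactedness interval j; the two-dimensional PDF is then the set of mass fractions m_{ij}, with \sum_{i,j} m_{ij} = 1, and the mean of any fluid property \varphi follows as

\bar\varphi \;=\; \sum_i \sum_j m_{ij}\,\varphi(F_i, R_j)

where F_i and R_j are the interval-centre values of mixture fraction and reactedness.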
The four- and fourteen-fluid models were first steps on the road towards the multi-fluid model which was first systematically presented in a conference paper [Spalding, 1996b] in Canada.
The "multi" in the name implies that a turbulent mixture can be regarded as a "population" having an arbitrary number of "ethnic" components.
These concepts will be expanded upon below.
The working concepts of a multi-fluid model are few and simple. They are as follows:-
the turbulent mixture is regarded as a population of distinct fluids, which intermingle but retain their identities;
the fluids are distinguished from one another by one or more attributes, such as fuel-air ratio or reactedness;
the mass fraction of each fluid, at each location and time, is computed from its own conservation equation; and
the set of mass fractions constitutes a discretised PDF, which is computed rather than presumed.
MFM therefore departs from the practice, introduced by Kolmogorov, of solving equations for statistical properties of the turbulent fluid, such as k, the turbulence energy.
In these pictures, the left-hand half gives the PDF; the right-hand half is merely a reminder of the "inter-mingling fluid" concept.
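The discretised-PDF idea can be made concrete by a small sketch (Python is used here for illustration; the 20-fluid discretisation and all variable names are assumptions, not MFM's own code):

```python
import numpy as np

# A one-dimensional population: N fluids distinguished by mixture fraction.
# Fluid i occupies the interval whose centre is F[i]; m[i] is its mass fraction.
N = 20
F = (np.arange(N) + 0.5) / N            # the population-distinguishing attribute
m = np.random.dirichlet(np.ones(N))     # any non-negative mass fractions summing to 1

# The array m IS the discretised PDF. In a CFD code, one conservation
# equation per fluid would be solved for m at every cell; the statistics
# which presumed-PDF methods must guess then follow directly:
mean_f = np.sum(m * F)                          # mean mixture fraction
rms_f = np.sqrt(np.sum(m * (F - mean_f) ** 2))  # RMS fluctuation about the mean
print(f"mean MIXF = {mean_f:.3f}, rms = {rms_f:.3f}")
```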
Populations of fluids may be multi-dimensional. An example of a two-dimensional population would be one in which the fluids are distinguished by both fuel-air ratio and reactedness, for simulating the flow and combustion of turbulent gases in a combustion chamber.
Three-dimensional populations, in which the fluids are distinguished by a third attribute in addition, can also be constructed.
It is important to recognise that the modeller can choose freely:-
the attributes which are to distinguish the fluids of the population; and
the number of fluids, i.e. the fineness with which the population is discretised.
These choices can be made with the aid of numerical experiments; for example, by exploring how many fluids are needed for accuracy when predicting smoke generation.
These choices may differ from place to place and from time to time. MFM allows the possibility of using "un-structured" and "adaptive" population grids.
It should also be understood that MFM can be combined with enlarged-viscosity turbulence models.
Thus it is common to use the k-epsilon model for the hydrodynamics when the phenomena of greater interest involve chemical reaction or radiation.
This was done in the examples which follow.
Choice (1): Mixture fraction as the only population-distinguishing attribute
Most practical combustion devices are of the "diffusion-flame" type, in the sense that the fuel and the oxidant enter the combustion space at different locations, and mix within that space.
Since the local fuel-air ratio has such a profound effect upon the combustion process, it is obvious that the mixture fraction (MIXF) should be a population-distinguishing attribute (PDA).
This is the choice which was made for the above-described simulation of the smoke-generating combustor.
Also made there was the 'mixed-is-burned' assumption, signifying that the composition of each component of the population (apart from its smoke content) depends only on MIXF. There was therefore no need to consider discretisation in the reacted-fuel-proportion dimension.
In the absence of heat losses, the temperature of each component is similarly dependent on MIXF alone. It is therefore possible to associate a smoke-generation rate with each population component.
The total smoke concentration of the mixture can then be calculated by adding together the contributions of the individual fluids.
In MFM parlance, the smoke concentration is a CVA, i.e. a continuously-varying attribute.
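In symbols (the notation is assumed here): if m_i is the mass fraction of fluid i, and s_i the smoke concentration appropriate to that fluid's temperature and composition, the mixture's smoke concentration is simply the mass-weighted sum

\bar s \;=\; \sum_i m_i\, s_i .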
If heat losses, for example by radiation to cold walls, cannot be neglected, it is wise to treat the enthalpy also as a CVA.
The same is true of NOX, if that is to be computed.
Indeed, if the validity of the mixed-is-burned assumption is doubtful, the reacted-fuel proportion can also be treated as a CVA.
Choice (2): Reacted-fuel proportion as a second PDA
If such an exploration of the effect of finite-rate main-reaction chemistry demonstrated that strong departures from equilibrium were possible, it would be wise to investigate their interaction with the turbulence by using a two-dimensional population, with RFP (i.e. reacted-fuel proportion) as the second population-distinguishing attribute.
PDFs of the kind which have already been seen above would then arise.
Another, with less colour but more content, is shown here.
In this picture, the right-hand half is being used to show some information about a CVA.
Evidently, the mixed-is-burned presumption would NOT be justified in this case. If it had been, the PDF would have appeared like this, with most of the material in the uppermost population elements.
Choice (3): Reacted-fuel proportion as the only PDA
There are, of course, some practical circumstances in which the fuel-air ratio is almost uniform, whereas the major difference between the gas fragments is their degree of reactedness.
Combustion in a gasoline-engine cylinder is of this kind.
The PDF can therefore again be one-dimensional, with reacted-fuel fraction as the PDA.
The following picture shows an example of such a PDF.
The shapes depend greatly on the ratios of the micro-mixing rate (CONMIX) and the chemical-reaction rate (CONREA) to the local flow rate, as the following further cases illustrate:
case 2, case 3, case 4 and case 5.
To attempt to guess such shapes correctly would appear to be a hopeless enterprise; and to base engineering designs on the guesses an unwise one.
It follows that the PDF should be computed, not guessed.
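How such shapes can be calculated, rather than guessed, is illustrated by the following minimal sketch: a zero-dimensional well-stirred reactor whose contents are a population of fluids distinguished by reactedness. The pairwise micro-mixing rule (offspring placed at the parents' mean reactedness) and all constants are simplified assumptions for illustration only, not the precise MFM formulae:

```python
import numpy as np

def steady_pdf(N=20, FLOW=1.0, CONMIX=5.0, CONREA=5.0, dt=1e-3, steps=20_000):
    """March a well-stirred multi-fluid reactor to a (near-)steady PDF."""
    R = (np.arange(N) + 0.5) / N    # reactedness of each fluid (the PDA)
    m = np.zeros(N)
    m[0] = 1.0                      # start with wholly unburned contents
    for _ in range(steps):
        dm = np.zeros(N)
        # Through-flow: unburned fluid enters; the bulk mixture leaves.
        dm[0] += FLOW
        dm -= FLOW * m
        # Chemical reaction: mass is promoted one reactedness interval, at a
        # rate here assumed proportional to R(1-R), i.e. self-accelerating.
        rate = CONREA * R * (1.0 - R) * m
        dm[:-1] -= rate[:-1]
        dm[1:] += rate[:-1]
        # Micro-mixing: fluids i and j couple pairwise; their offspring is
        # placed at the interval nearest the parents' mean reactedness
        # (a crude stand-in for "coupling and splitting").
        for i in range(N):
            for j in range(i + 1, N):
                pair = CONMIX * m[i] * m[j]
                k = (i + j) // 2
                dm[i] -= pair
                dm[j] -= pair
                dm[k] += 2.0 * pair
        m += dt * dm
    return R, m / m.sum()

R, m = steady_pdf()
for r, mi in zip(R, m):
    print(f"reactedness {r:.3f}: mass fraction {mi:.3f}")
```

Varying CONMIX and CONREA relative to FLOW changes the computed shape, qualitatively in the manner of the cases listed above: fast chemistry drives the mass towards the fully-reacted intervals, while slow mixing and slow chemistry leave much of it near the unburned end.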
It is true that FTM is a two-fluid model; but the values of the population-distinguishing attributes are not fixed, as in the case of EBU, but vary with position within the flame, in a manner determined by the solutions of the equations for the mean and RMS-deviation values of MIXF.
However, MFM can do anything that FTM can do, and more, as is illustrated by the following figure, extracted from a report by S.V. Zhubrin.
The figure shows that agreement is obtained between FTM and MFM when seventeen fluids are used; and of course MFM computes the PDF which FTM has to guess.
As compared with F2FM, MFM in its present form does both more and less.
It does more in that it can handle many fluids, not just two; but it does less in that all its fluids share the same velocity components. It cannot therefore, as F2FM can, simulate the differential acceleration of hotter and colder gases illustrated above.
This deficiency will be removed by work currently in progress; but not, as F2FM did, by allowing each fluid to have its own set of Navier-Stokes equations; for that would be needlessly expensive.
Instead, each fluid will have its own velocity differences from the mean; and these will be calculated, as continuously varying attributes, by allowing for only the dominant sources and sinks of relative momentum.
However, the first such implementation was made by Pope, who chose to adopt a Monte-Carlo method of solution; and prior to 1995, this appears to have been the only method employed by anyone.
The result has been that "pdf-transport" and "Monte-Carlo" have become so frequently associated that it seems best to treat "pdf-transport" and "multi-fluid" models as wholly distinct.
Because of the Monte Carlo method, the former appears to lack some conceptual and practical advantages which the "discretised-PDF" nature of MFM possesses.
However, given unlimited computer time, and care to employ precisely the same micro-mixing formulae, MFM and PDF-transport should produce the same answers.
Among the first were Lockwood and Naguib.
"Clipped-Gaussian" and "beta-function" presumptions have both had their adherents; and large amounts of computer time have been consumed in exploring the implications of one or the other.
Unfortunately, none of the presumptions appears to have a better claim than the others to be preferred on theoretical or experimental grounds; and indeed the validity of the fluctuations-transport equation itself is little more than a matter of faith.
MFM, even in its present rather primitive state, has shown that PDF shapes can be widely various. For example, clicking on the links in the following table, extracted from the 1998 lecture, will reveal the variety.
Figure | CONMIX | CONREA | RB | ave. R | rms. R
While the notion, namely that combustion in turbulent gases is concentrated in thin laminar-flame-like regions, is not implausible, a body of theory and computation has been built upon it which, in the author's opinion, is disproportionate.
The MFM theory, conceptually, also recognises that there may be such regions; but it allows also for their non-appearance and for the influences of such non-dimensional quantities as Reynolds number and Peclet number based on laminar flame speed.
The relation of MFM to flamelet theory has been discussed at length in a lecture devoted to the subject [Spalding, 1998]
Flamelet theory has nothing to say about combustion in non-pre-mixed gases.
Finally, direct numerical simulation [Schumann, 1973] should be mentioned; not because DNS is a turbulence model but in order to lead to the following remark:
Whereas DNS has sometimes been used as a means of deriving the constants and functions of Kolmogorov-type models, such as k-epsilon, it could now perhaps be better used for testing and augmenting the micro-mixing hypotheses of MFM.
Since all that is involved is the appropriate post-processing of the results of DNS computations, this should not be difficult to contrive.
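Such post-processing might look like the following sketch, which bins a scalar field into an MFM-style discretised PDF; the field here is synthetic and the density is taken as uniform, whereas in practice the mixture-fraction field would be read from the DNS output:

```python
import numpy as np

# Stand-in for one DNS snapshot: a mixture-fraction field on a 64^3 grid.
rng = np.random.default_rng(0)
f = np.clip(rng.normal(0.5, 0.15, size=(64, 64, 64)), 0.0, 1.0)

# Bin the field into N fluids, exactly as MFM discretises its population.
# With uniform density and cell volume, cell counts give mass fractions.
N = 20
counts, edges = np.histogram(f, bins=N, range=(0.0, 1.0))
m = counts / f.size                   # mass fraction of each fluid
F = 0.5 * (edges[:-1] + edges[1:])    # interval-centre attribute values

# PDFs extracted in this way, at successive instants, can be compared with
# those which MFM's micro-mixing hypotheses predict.
print("mass fractions:", np.round(m, 3))
```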
It is here argued that MFM is ready for practical use now, for the following reasons:
As has been explained above, MFM is merely an obvious extension of EBU ideas.
Everyone knows that CFD predictions improve when the grid is refined in geometrical space-time: MFM is simply EBU with grid refinement in "population space".
But why should one guess when one can calculate? This is what MFM allows.
Moreover, even though it is to be expected that the predictive accuracy of MFM will improve as a consequence of easily conducted research, it has already been shown that it predicts root-mean-square fluctuations just as well as the fluctuations-transport model which "PDF-presumers" must employ.
So why not use it, and get the PDFs as well?
Moreover, MFM can produce two-dimensional PDFs as easily as 1D ones; and these are certainly needed for combustion processes.
MFM can now be regarded as "the poor man's PDFTM"; and it is not only cheaper to use: it can do much more.
Among the turbulent-combustion applications for which MFM is suitable in its present state are: smoke-generating combustors; gasoline-engine cylinders; explosions on off-shore oil platforms; and the stirred chemical reactors discussed below.
It has become fashionable to apply CFD to the design of the large paddle-stirred chemical reactors which are used in the chemical industry; and most commercial CFD codes possess some such capability.
However, the be-all and end-all of such reactors is to effect a controlled reaction; and designing for this requires the ability to predict the micro-mixing process.
Only MFM provides this at present.
Clicking here will lead to an example of such an application.
Oil- and LPG-spills have already been mentioned; but there are other environmental hazards to the simulation of which MFM can make a contribution.
The assertion that MFM is ready for use now by no means implies that further research is not desirable. It is; and most desirable of all would be the experimental measurement of PDFs which would permit confirmation, or would lead to refinement, of the underlying physical hypotheses.
The latter, and the uncertainties attending them, have not been emphasised in the present lecture; but full accounts can be accessed by clicking:
here, for an account of "coupling and splitting"; or
here, or here, or here, for an account of "the brief encounter".
Preferably, such experiments would be carried out on simple and easily controlled flows; and there now exist easy-to-use procedures for systematically adjusting model constants so as to fit CFD predictions to experimental data.
It is therefore to be hoped that the academic-research community will soon see the opportunities which the un-tilled field of MFM presents to them.
The following thought may provide sufficient stimulus:
Experimental and mathematical researchers will be very welcome; but even more so those imaginative scientists who can perceive which limitations of the current MFM are most disadvantageous, and then remove them.
However, momentum transfer in a "brief encounter" is more complex than heat and mass transfer; for two fluid fragments which collide "head-on" will scatter material into the lateral directions.
Research and thinking on this topic have only just begun.
Introducing the effect into MFM requires physical intuition expressed in mathematical terms.
Perhaps, however, the length scale should be a new PDA?
The argument presented in the foregoing lecture may be summarised as follows:
the classical models of turbulent combustion (EBU, FTM, EDC) allow at most two fluid states to co-exist at a location, and either neglect the PDF or guess its shape;
MFM generalises them by discretising the population of fluid states as finely as the modeller chooses, so that the PDF is computed rather than presumed;
MFM is ready for practical use now, and is both cheaper and more informative than Monte-Carlo PDF-transport methods; and
further research, above all the experimental measurement of PDFs and the refinement of the micro-mixing hypotheses, is greatly to be desired.
[ Note: This list contains not only papers directly referred to above, but also some which appear in other documents regarding MFM ]