MFM: the economical route to PDFs
by
Brian Spalding, CHAM Ltd, London, England
A lecture delivered at The Isaac Newton Institute, Cambridge, England,
on April 30 1999
Abstract and contents
It is argued that:

probability-density functions (PDFs) are more useful than
statistical averages;

the colliding-fluid-fragments model of Reynolds and Prandtl,
when generalised, leads to the multi-fluid model (MFM) and thence to
PDFs;

the "eddy-breakup model" of 1971 was a small step in the right direction;

the PDF-transport model of 1982
and the two-fluid model of 1987
were larger steps in two divergent nearly-right directions;
now MFM can deliver the results sought by the former by using
the mathematical techniques of the latter;
MFM differs from Monte Carlo PDF transport in several
respects: in particular, it allows "population-grid refinement
and adaptation".
 So, Kolmogorov's introduction of transport equations for
statistical averages was "a good idea at the time, but..."
During the lecture, applications of MFM will be made to:
the plane mixing layer,
some near-wall flows,
a stirred reactor and
a gas-turbine combustor,
whereafter possible future developments will be discussed.
References are provided.
1. Probability-density functions are more useful than statistical
averages
1.1 What PDFs are
PDFs, when discretized, can be thought of as "fluid-population
distributions", measuring what
proportion of the fluid present at a given location, averaged over
time, has a prescribed state.
 The state prescription could be:
 having temperature between 20 and 21 degrees, or
 having velocity between 0.15 and 0.16 m/s;
 or both.
One-dimensional PDFs (discretized) can take a wide variety of shapes;
several examples were displayed in the lecture.
 From such PDFs, if the property in question is the velocity, we can
deduce, if we wish, the average kinetic energy of the turbulent motion.
A discretised two-dimensional PDF is a histogram over a two-dimensional
array of boxes; examples of these were also displayed.
Knowing the heights of the ordinates in each box, if their locations
represent (say) two components of velocity, we can deduce the average
of their product, i.e. the shear stress; but the reverse calculation is
not possible.
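The deductions just described can be made concrete in a few lines of code (a sketch only; all bin values and mass fractions are invented for illustration):

```python
# Sketch: deducing statistical averages from discretized PDFs.
# All bin values and mass fractions below are invented for illustration.

# One-dimensional PDF of a velocity component u (m/s):
u_bins = [0.05, 0.15, 0.25, 0.35]   # bin-centre velocities
u_frac = [0.1, 0.4, 0.3, 0.2]       # fraction of fluid in each bin (sums to 1)

u_mean = sum(f * u for f, u in zip(u_frac, u_bins))    # mean velocity, 0.21 m/s
ke_turb = 0.5 * sum(f * (u - u_mean) ** 2
                    for f, u in zip(u_frac, u_bins))   # turbulent kinetic energy

# Two-dimensional PDF over two velocity components (u, v):
# frac2[i][j] = fraction of fluid in box (u_bins[i], v_bins[j]).
v_bins = [-0.1, 0.0, 0.1]
frac2 = [[0.05, 0.03, 0.02],
         [0.10, 0.20, 0.10],
         [0.05, 0.20, 0.05],
         [0.05, 0.10, 0.05]]

v_mean = sum(frac2[i][j] * v_bins[j]
             for i in range(len(u_bins)) for j in range(len(v_bins)))
# Average product of the fluctuations, i.e. the (kinematic) shear stress:
uv_corr = sum(frac2[i][j] * (u_bins[i] - u_mean) * (v_bins[j] - v_mean)
              for i in range(len(u_bins)) for j in range(len(v_bins)))
print(u_mean, ke_turb, uv_corr)
```

The forward calculation, PDF to averages, is trivial; the reverse, recovering `frac2` from `u_mean`, `v_mean` and `uv_corr` alone, is under-determined, which is the point made above.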
1.2 Why we need PDFs
 We need to know the PDFs for many engineering purposes,
for example:
 when the variable of a onedimensional PDF is temperature, and
the radiativeheat emission is to be calculated
(proportional to temperature**4);
 when the two variables of a 2D population are the concentrations of
participants in a chemical reaction, and the volumetric average of
the reaction rate is required (proportional to the product of
the concentrations);
when the two variables of a 2D population are the fuel-air ratio and
the extent of reaction of a combusting mixture, and it is required
to calculate the production rate of oxides of nitrogen
(non-linearly dependent on concentrations and temperature).
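The radiative-emission example can be quantified with a minimal sketch (invented temperatures; it shows only that averaging before raising to the fourth power gives the wrong answer):

```python
# Sketch: the error made by evaluating a non-linear quantity at the mean
# temperature instead of averaging it over the PDF. Numbers are invented.

T_bins = [300.0, 1500.0]   # two fluids: cold and hot (kelvin)
frac   = [0.5, 0.5]        # each present half the time

T_mean = sum(f * T for f, T in zip(frac, T_bins))             # 900 K
emission_from_mean = T_mean ** 4                              # uses mean T: wrong
emission_true = sum(f * T ** 4 for f, T in zip(frac, T_bins)) # PDF-weighted

print(emission_true / emission_from_mean)   # about 3.86: a large error
```

A model carrying only the mean temperature would thus under-predict the emission by nearly a factor of four in this (invented) case.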
Some analysts who are aware of the need to know the PDFs satisfy
their consciences by guessing their shapes (e.g.
beta-function, or "clipped Gaussian"); but none have ever been able
to prove that their guesses are correct.
 PDFs calculated from a physically plausible hypothesis must be better
than any guess.
1.3 Why not get PDFs from "weighted averages"?
 It is possible, in principle, to reconstruct a curve from knowledge
of a sufficient number of suitably weighted averages.
 Therefore, if:
 equations existed which enabled a sufficient number of such averages
to be computed, and
 these equations could be solved with sufficient accuracy, and
 the equations had a sound physical basis,
then at least onedimensional PDFs could be obtained.
 But:
 actual PDF shapes vary so widely that the equations would have to
be very numerous, in order to express their essential features;
 the computational expense would therefore be horrendous;
the physical basis of such equations as exist (e.g. for Reynolds
stresses) is insecure.
 This approach therefore appears to be impracticable.
2. The colliding-fluid-fragments model of Reynolds and Prandtl
 Osborne Reynolds in 1874 explained observations about friction and
heat transfer between fluid streams and solid walls by postulating
that fragments of fluid collided with the walls and were brought
thereby to kinetic and thermal equilibrium with them.
This is the conceptual basis of the so-called "Reynolds Analogy".
Ludwig Prandtl in 1925 explained observations about shear stress
and heat transfer within fluids by postulating a similar
collision and equalization between fluid fragments emanating from
locations of differing average velocity and temperature.
This is the conceptual basis of the so-called "mixing-length
hypothesis".
The conceptual basis of the multi-fluid model is also the
collision of fluid fragments; but, instead of fully merging into one
another, they enjoy only a brief encounter; and, when they separate, they
leave offspring behind them which are intermediate in properties
between those of the parents.
These encounters change the population distribution in a
calculable manner. Allowing for all possible encounters, i.e. evaluating
a "collision integral", enables the PDFs to be computed.
What happens in a single brief
encounter is easy to calculate and display: a diagram was shown of
contours of constant velocity difference on a distance-time
plot, with time vertical and distance, normal to the contact surface
of the fluid fragments, horizontal.
 The longer the duration of the encounter, the thicker becomes the
boundary layer between the two fragments, and therefore the greater
the amount of "offspring material".
Arguments presented in a recent article
have shown that, at high Reynolds numbers, the encounters are
"brief", in the sense that the boundary layer is typically much smaller
than the size of the colliding fragments at the time when the
intermingling is interrupted by the next collision.
At low Reynolds numbers, on the other hand, the picture looks
different.
It is of course possible to work out exactly the consequences of
many kinds of encounter, at both high and low Reynolds numbers;
for example, that between hot burned gas and cold unburned
combustible gas.
It is thus possible to work out a complete "encounter theory", and
to predict how the collisions between fragments of all members of
the fluid population affect the development of the fluid-population
distributions, i.e. the PDFs.
Fortunately, the brevity of the high-Reynolds-number encounters
entails that only molecular-diffusion interactions have to be taken into
account.
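The population-balance idea can be illustrated with a toy calculation. This is only a sketch with an invented encounter-rate constant and an invented offspring rule, not Spalding's actual collision integral; it shows merely how pairwise encounters drive mass into intermediate-property fluids:

```python
# Toy "collision integral" for a discretized fluid population (sketch).
# Index k labels a fluid whose property value is the k-th bin centre.

def encounter_step(m, rate=0.1):
    """One time step of pairwise encounters for mass fractions m[0..n-1].
    Each encounter turns a little parent mass into offspring whose
    property is the (integer) midpoint of the parents' bins. The rate
    constant and the quadratic m[i]*m[j] encounter law are invented."""
    n = len(m)
    dm = [0.0] * n
    for i in range(n):
        for j in range(i + 1, n):
            transfer = rate * m[i] * m[j]   # mass turned into offspring
            k = (i + j) // 2                # intermediate bin (rounds down)
            dm[i] -= transfer
            dm[j] -= transfer
            dm[k] += 2.0 * transfer         # mass is conserved exactly
    return [mi + di for mi, di in zip(m, dm)]

# Start from a "two-spike" population: only the extreme fluids present.
m = [0.5, 0.0, 0.0, 0.0, 0.5]
for _ in range(20):
    m = encounter_step(m)
print(m)   # mass has migrated into the intermediate fluids
```

Iterating such a step at every grid cell, together with convection and diffusion of each fluid's mass fraction, is in essence what MFM adds to a conventional CFD solver.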
3. The "eddy-breakup" model; a small step in the right direction, and
two larger ones
The "eddy-breakup" model (Spalding, 1971) allowed the mean reaction
rate in a turbulent flame to depend on the rate of turbulent mixing
rather than on time-mean concentrations alone; but it made no attempt
to compute the PDF itself.
Dopazo and O'Brien, as early as 1974,
formulated differential equations of which the solution would be the
PDF field; however, it was left to Pope to provide
the first numerical solutions, in 1982.
Unfortunately (in the author's opinion), he chose to employ the Monte
Carlo method of solution, as have all his followers. The computational
expense of the method has proved to be a serious deterrent to its
widespread use.
At around the same time, the author was exploring a different
avenue, namely the development of a two-fluid model
similar to those which were then being employed for two-phase flow.
He thought of it as being:
"what Prandtl would have done, to further
the colliding-fragments model, if he had possessed the computational
tools."
This had some success, particularly in explaining and simulating the
phenomenon of "unmixing". However, it
involved the solution of
two sets of Navier-Stokes equations; and it never "caught on".
Both of these approaches possess merits: the former does at least
aim at the right target, namely PDF calculation; and the latter,
although its PDF is a crude "two-spike" one, can simulate real
phenomena about which popular models such as k-epsilon have nothing
to say.
MFM can be regarded as a logical extension of the two-fluid model;
alternatively, it may be looked on as
PDF-transport without Monte Carlo, but with the additional
merits to be described below.
It may therefore perhaps achieve greater popularity than either.
5. The Multi-Fluid Model (MFM), and how it differs from
Monte Carlo PDF-transport (MCPT)
MFM focusses attention on discretized PDFs. It produces
"battlement-shaped" histograms, whereas MCPT produces a cloud of
points, through which one may be able to draw a curve.
The fineness of the MFM discretization is chosen by the analyst, who may
test its adequacy by grid-refinement. Sometimes an extremely coarse
population grid will suffice, as, for example, in the stirred-reactor
study referred to below.
 MFM does not need to have the same number of fluids at all points in
the domain of study. In a combustor simulation, a single fluid will
often suffice over a large proportion of the volume. An algorithm
can be devised for dynamically determining the number of fluids
needed to provide a given accuracy.
 Population grids can thus be "unstructured" and "selfadaptive",
exploiting experience gained by CFD experts with space and time
grids.
There appear to be no economising counterparts in MCPT to the three
preceding points: analyst-chosen discretization fineness,
position-dependent numbers of fluids, and self-adaptive population
grids.
 Because the local mass fraction of each fluid is a calculated and
accessible variable, MFM allows "micromixing hypotheses" to be
investigated which are more sophisticated than any formulated by MCPT
practitioners.
MFM distinguishes between (what the analyst chooses as)
population-distinguishing attributes (PDAs) and
continuously-varying attributes (CVAs); for example, in a combustor
simulation:
PDAs: (1) fuel/air ratio; (2) unburned-fuel mass fraction.
CVAs: (1) temperature; (2) concentrations of chemical species; (3)
velocity components.
MCPT appears to enjoy no such freedom.
MFM fits easily into conventional finite-volume-type solution
algorithms, whereas MCPT requires, in addition, the Monte Carlo
apparatus and methodology.
 MFM concepts can be described rather easily in words, whereas (it
appears) MCPT demands a daunting display of mathematical symbols.
 The computer expense associated with MFM is of the same order of
magnitude as that associated with the hydrodynamics in a typical CFD
application.
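The population-grid-refinement test described above can be sketched as follows. This is a toy study, assuming a uniform temperature PDF and an invented convergence tolerance, not a calculation from the lecture:

```python
# Sketch of a population-grid-refinement test: keep doubling the number
# of fluids until a quantity of interest stops changing. The uniform
# PDF, its limits and the tolerance are all invented for illustration.

def mean_T4(n_fluids):
    """Discretize a uniform temperature PDF on [300, 1500] K into
    n_fluids equal-mass fluids and return the PDF-weighted mean of T**4."""
    lo, hi = 300.0, 1500.0
    width = (hi - lo) / n_fluids
    total = 0.0
    for k in range(n_fluids):
        T = lo + (k + 0.5) * width     # bin-centre temperature
        total += T ** 4 / n_fluids     # each fluid carries equal mass
    return total

n, prev = 2, mean_T4(2)
while True:
    n *= 2
    cur = mean_T4(n)
    if abs(cur - prev) / cur < 1e-4:   # population-grid independence reached
        break
    prev = cur
print(n, cur)
```

In a real MFM computation the refined quantity would be, say, a predicted smoke-generation rate rather than this analytically known integral; the procedure is the same.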
6. Kolmogorov's "bright idea" of 1942
Kolmogorov's 1942 paper said, in effect:
"Although we really want to know much more (e.g. the PDFs),
perhaps we can get away with calculating a few statistical
quantities."
The turbulence-modelling world has followed him.
Kolmogorov chose the energy, k, and the "frequency",
proportional to epsilon/k, as his variables, as did
Wilcox much later.
Particularly since the late 1960s, many other choices have been made,
the most popular being k and epsilon;
but all modellers have shared Kolmogorov's hope: that "a few
statistical quantities" will suffice.
However, for reasons explained in section 1, they do not suffice,
and never will. PDFs are what we must have; and MFM
enables us to get them economically.
7. Applications of MFM
Extracts will now be presented from earlier lectures by the author.
The first concerns the first-ever simulation of a much-studied turbulent
flow which does not employ one of the "classical" turbulence-model
approaches.
The second concerns a large three-dimensional transient flow simulation,
to which the introduction of the multi-fluid model added little
computational expense but much valuable insight.
In this case the k-epsilon turbulence model is used for the
hydrodynamical part of the calculation, showing that MFM coexists
easily with conventional models.
The third, a recent lecture, shows how the predicted smoke-generation
rate in a three-dimensional steady-flow combustor differs considerably
according to whether the concentration fluctuations are or are not
taken into account.
Also reported are the computer times, how they vary with the number
of fluids employed, and a population-grid-independence study.
7.4 Future developments
The final section of a 1998 lecture on
MFM set out what is, in essence, a multi-man-year program
of research. This, it is argued, could
beneficially transform the capabilities of engineers and applied
scientists to simulate turbulentflow phenomena realistically.
However, it recognises that a formidable obstacle stands in the way
of such an enterprise, namely the strong psychological hold which
Kolmogorov's "bright idea" of 1942 still exerts.
Loosening that hold is one intent of the present lecture.
Will The Isaac Newton Institute assist?
Or must the world wait for
The Einstein Institute to take an interest in turbulence?
The End
References
C Dopazo and EE O'Brien (1974) Acta Astronautica, vol 1, p 1239
AN Kolmogorov (1942) "Equations of motion of an incompressible
turbulent fluid"; Izv Akad Nauk SSSR, Ser Phys VI, No 1-2, p 56
SB Pope (1982) Combustion Science and Technology, vol 28, p 131
O Reynolds (1874) "On the extent and action of the heating
surface of steam boilers"; Proc Manchester Lit Phil Soc, vol 8
DB Spalding (1971) "Mixing and chemical reaction in confined
turbulent flames"; 13th International Symposium on Combustion,
pp 649-657, The Combustion Institute
DB Spalding (1987) "A turbulence model for buoyant and combusting
flows"; Int J for Num Meth in Engg, vol 24, pp 1-23
DB Spalding (1995a) "Models of turbulent combustion";
Proc 2nd Colloquium on Process Simulation, pp 1-15,
Helsinki University of Technology, Espoo, Finland
DB Spalding (1999) "Connexions between the Multi-Fluid and
Flamelet models of turbulent combustion";
www.cham.co.uk; shortcuts; MFM
DC Wilcox (1993) "Turbulence modelling for CFD"; DCW Industries,
La Canada, California