4.1 The four main classes of turbulence model

4.2 Advantages and disadvantages of the four classes

4.3 Similarities and differences between MFM and previous models

4.4 Generalising the eddy-break-up model

Turbulence models, in the sense used here, are collections of concepts about the nature of turbulent fluids which may be expressed in mathematical form in such a way as to constitute a method of prediction.

Questions to be answered by such a method include:

for a given set of flow-defining conditions (inlet geometry, outlet location and shape, internal baffles and sources of heat and momentum, fluid density, viscosity and heat capacity, number of thermodynamic phases, etc),

- will the flow be turbulent at all?
- if so what will be the intensity of the turbulence?
- what will be the consequent exchange rates of mass, momentum and
energy between the fluid and the solids with which it is in
contact?
- what will be the rates of chemical reaction within the fluid?

Methods of answering such questions are numerous and varied in nature; but it appears useful to separate them into four main classes, namely:

I single-fluid models,

II multi-fluid models,

III probabilistic models,

IV direct-numerical-simulation models.

These terms will now be explained.

Single-fluid models are those in which the fluid condition is characterised by average values of velocity, temperature, etc, at each location and time instant, and by statistical representations of the fluctuations about those averages.

This class contains most of the models which are described in the text-books and are in use today. They include, to use the nomenclature of Launder and Spalding (1972):

- zero-(differential)-equation models, such as those of Boussinesq
(1877) and Prandtl (1925);
- one-equation models, such as those of Prandtl (1945) and Bradshaw
(1967);
- two-equation models, such as those of Kolmogorov (1942), Harlow
  and Nakayama (1968), Saffman (1970), Spalding (1969), Wilcox
  (1993), Yakhot and Orszag (1986) and many others;
- multi-equation models, such as those concerned with multiple levels of energy or scale (Elhadidy, 1980; Kim and Chen, 1989) or with further statistical quantities (Daly and Harlow, 1970; Spalding, 1971b; Naot, Shavit and Wolfshtein, 1974).

Multi-fluid models express the fluctuations by representing them as though there were many different fluids mingling within the same space, each with its own local and instantaneous velocities and temperatures.

The mingling of steam bubbles with water in a kettle is an extreme example; but the fluids are more usually of a single thermodynamic phase, as when tongues of flame rise above a garden bonfire.

The basic concept was already present in the writings of Reynolds (1874) and Prandtl (1925); for they both conceived of the relative motion of fluid fragments, which could be of significant size.

Scientists concerned with combustion (Shchelkin, 1943; Wohlenberg, 1953; Howe and Shipman, 1965; Kuznetzov, 1979) have found the concept much to their liking; for it is only by taking into account the fragmentariness of the burning gases that observed phenomena can be explained, even qualitatively.

The author would place in the two-fluid category:

- his own "eddy-break-up" model (Spalding, 1971a; Mason and
  Spalding, 1973),
- the "eddy-dissipation concept" of Magnussen and Hjertager (1976),
  which was derived from it, and
- the Bray-Libby-Moss "flamelet" model (Bray and Libby, 1981; Moss, 1980).

In these works, the two fluids were distinguished in a variety of ways, eg turbulent/non-turbulent, hotter/colder, upward/downward-moving; and two sets of Navier-Stokes equations were solved simultaneously.

Still more recently, as is discussed elsewhere in this paper, the multi-fluid idea has been extended to 4-, 14-, and many-fluid models (Spalding, 1995a,b,c). However the first publications on the population-of-fluids concept may have been those arising from study of the long-forgotten "ESCIMO" model of turbulent combustion (Spalding, 1979; Noseir, 1980; Tam, 1981; Sun, 1982).

Probabilistic models represent the near-randomness of turbulence by introducing some randomness of their own. Specifically, they employ Monte-Carlo methods to establish the probable distribution of fluid attributes within a multi-dimensional space of which the coordinates include the components of velocity, the temperature, etc.

This class appears to have originated in the chemical-engineering-science field with the publications by Curl (1963) and Dopazo and O'Brien (1974).
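Curl's coalescence-dispersion idea can be conveyed by a short sketch: a notional population of particles carries a scalar, and at each step randomly-chosen pairs "coalesce", both members departing with the pair-average value. The population size, scalar values and mixing frequency below are illustrative assumptions only, not quantities taken from the cited works.

```python
import random

def curl_mixing_step(phi, n_pairs, rng=random):
    """One coalescence-dispersion step: each randomly-chosen pair of
    particles is replaced by two particles at the pair mean."""
    for _ in range(n_pairs):
        i, j = rng.sample(range(len(phi)), 2)
        mean = 0.5 * (phi[i] + phi[j])
        phi[i] = phi[j] = mean
    return phi

# Illustrative population: half unburned (0.0), half burned (1.0)
random.seed(1)
phi = [0.0] * 50 + [1.0] * 50
for _ in range(200):
    curl_mixing_step(phi, n_pairs=5)

# The population mean is conserved exactly, while the spread of the
# pdf about that mean decays as mixing proceeds.
print(sum(phi) / len(phi))
```

Note that the method never solves a transport equation directly; the pdf of the scalar is simply the histogram of the particle population.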

More recently Pope (1980, 1982, 1985), Chen and Kollmann (1988, 1990) and others have conducted a vigorous research campaign, which, to judge from the proceedings of a recent conference (ICOMP, 1994), is gathering momentum.

Once again, it is the desire to be able to simulate combustion phenomena with quantitative realism which provides much of the motivation.

Finally, direct-numerical-simulation models revert to the single-fluid approach, but without any built-in averaging. Their users solve the Navier-Stokes equations with extremely fine sub-divisions of space and time, deriving averages and statistical measures of fluctuations only after the solutions have been obtained.

Direct numerical simulation has been subjected to increasing attention as computers have become more and more powerful. Pioneers include Schumann (1973) and Reynolds (1975).

There is a class of model featuring "probability-density functions" which the present author would place in the single-fluid category, because the shapes of the pdfs are presumed rather than computed. Models of this kind may perhaps have started with the author's own paper (Spalding, 1971b), in which a "two-spike" double-delta-function presumption was made.

Subsequently, Lockwood and Naguib (1975) made the "clipped-Gaussian" presumption; they were followed by others, including Kent and Bilger (1976), Kolbe and Kollmann (1980), Rhodes et al (1974) and Gonzalez and Borghi (1991).

Employment of these presumptions gave rise to additional computational expense, because all reaction-rate terms (for example) required the evaluation of integrals for each cell at each iteration; but no clear advantage in respect of generality of agreement with experiment (in the present author's opinion) ever emerged.
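The source of the expense is easily exhibited for the cheapest presumption of all, the "two-spike" one: even there the cell-mean reaction rate is a pdf-weighted sum, evaluated anew for each cell at each iteration, and it can differ greatly from the rate evaluated at the mean temperature. The one-step Arrhenius constants and temperatures below are illustrative assumptions.

```python
import math

def arrhenius(T, A=1.0e9, Ta=1.5e4):
    """Illustrative one-step rate, k = A * exp(-Ta / T)."""
    return A * math.exp(-Ta / T)

def mean_rate_two_spike(T_mean, TU=300.0, TB=2100.0):
    """Cell-mean rate under a double-delta ("two-spike") pdf:
    mass fraction MB of hot gas at TB, MU = 1 - MB at TU."""
    MB = (T_mean - TU) / (TB - TU)
    MU = 1.0 - MB
    return MU * arrhenius(TU) + MB * arrhenius(TB)

T_mean = 1200.0
# The rate evaluated at the mean temperature, ignoring fluctuations,
# is far smaller than the pdf-weighted rate: the fluctuations matter.
print(arrhenius(T_mean), mean_rate_two_spike(T_mean))
```

For richer presumptions (clipped-Gaussian, beta-function) the sum becomes a numerical integral over the pdf, which is where the additional computational expense arises.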

As is shown elsewhere in this paper, the multi-fluid model makes it possible to calculate the shape of the pdf (or FPD, to accord with MFM nomenclature) if the number of fluids is very large; and this shape depends greatly upon the dimensionless parameters which relate the coupling rates of the fluids on the one hand, and their chemical reaction rates on the other, to the mass flow rate into the cell.

A detailed parametric study of these effects by way of the multi-fluid model might perhaps lead to formulae with the aid of which one might know what shape to presume in particular circumstances. Presumption of the shape without such guidance is however likely to do as much harm as good.

CLASS I must be regarded as the easiest to use, because many members of the class are embodied in widely-available software packages. For example, the Shareware (and therefore freely available) version of PHOENICS contains zero-, one-, and two-equation models; and, because of its open-source character, it allows other models of Class I to be built into it.

The latest versions of PHOENICS have more than a dozen identifiable Class-I turbulence models, and indeed many more if the possibility of sub-model interchanges between them is considered.

Most other general-purpose computer codes (eg FLUENT, FIDAP, FLOW-3D, CFX, TASC) now have a similarly wide range of models.

Ease of use is greatest for the 0-, 1-, and 2-equation models, and least for the Reynolds-stress models, for which convergence is not always guaranteed.

CLASS II models are less widely available in packages, although Shareware PHOENICS does possess a 2-fluid model which solves two sets of Navier-Stokes equations.

Multi-fluid models of this class are not yet widely available. Those which will be described below have been implemented by use of the open-source facilities of the latest PHOENICS; and it is probable that the implementation will be attached to the next or next-but-one release of the code.

There are no publicly available codes which embody the probabilistic models of CLASS III, but the US Government makes some available to its contractors. Although the methods may be simple for those who have become used to them, or to Monte-Carlo methods in other circumstances, getting started with Class-III methods is difficult.

Recently, however, a paper has been published which shows how the general-purpose code PHOENICS can be used as the basis for Monte-Carlo calculations of turbulent-mixing and -combustion processes (Fueyo, Larroya, Valino and Dopazo, 1995).

Class IV methods would probably be regarded as the easiest to use, were it not for the fact that not even the largest computers in the world are large enough. On the surface, they appear to require nothing more than the ability to make a time-dependent non-turbulent-flow simulation. But large-grid-size problems bring their own special difficulties, and demand their own means of resolution; so this is also not an easy field of research to get into.

For the reasons just alluded to, Class-IV methods can be regarded as the easiest to understand, because the physical assumptions are those of laminar flow; and indeed there is no "modelling" (in the sense of substituting guesswork for ignorance) at all.

The present author would rank the other classes in the order I, II, III in respect of understandability, with the qualification that some of the more complex single-fluid multi-equation models may be more difficult to grasp than the simpler two-fluid models.

It has to be admitted, however, that the two-fluid concept has not so far achieved much popularity; and this may in part result from reluctance to make the imaginative leap from steam/water mixtures on the one hand to hot-air/cold-air mixtures on the other.

Another misgiving has perhaps deterred more reflective persons, namely the thought: Why only TWO fluids? Perhaps the demonstration in the present paper that one can indeed handle any number of fluids will bring some reassurance.

The probabilistic models of Class III are not, to this author's mind, easy to grasp at all. It is not so much the idea of multi-dimensional space that is difficult, but rather the esoteric mathematical language and symbols which, perhaps necessarily, the practitioners of these models employ.

To those who make the effort, Classes III and II can at first appear to be almost the same; so the next difficulty is that of understanding the differences. Of these, a crucial one is:

- Class-II methods discretize some (but not all) fluid-attribute
dimensions, in the manner familiar to all users of finite-volume
or finite-element codes; and they can obtain useful results from
very coarse discretizations, as use of the two-fluid model has
shown.
- Class-III methods work in a non-discretized, multi-dimensional,
  all-fluid-attribute space; "coarsening" for economy appears to
  require reduction of the number of the attributes (eg species
  concentrations) which are considered.
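The Class-II kind of discretization can be illustrated in miniature: a single fluid attribute, here taken to lie between 0 and 1, is divided into a chosen number of bins, and each fluid parcel is assigned to the bin within which its value falls. The attribute range and fluid numbers are illustrative assumptions.

```python
def fluid_index(f, n_fluids):
    """Map a continuous attribute f in [0, 1] onto one of n_fluids
    discrete fluids (a one-dimensional "population grid")."""
    if not 0.0 <= f <= 1.0:
        raise ValueError("attribute out of range")
    # The top of the range belongs to the last fluid.
    return min(int(f * n_fluids), n_fluids - 1)

# Even the coarsest, two-fluid discretization already splits the
# population into "unburned-like" and "burned-like" parcels:
print([fluid_index(f, 2) for f in (0.1, 0.4, 0.6, 0.9)])
```

Refining this population grid, ie increasing `n_fluids`, is the step which leads from two-fluid models toward the multi-fluid model.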

Class I is by far the most extensively validated of the four; and although no widely-agreed answer can be given to the newcomer's reasonable question, "Which model is best?", advice can be given, based upon experimental knowledge, as to when it is permissible to use Prandtl's rather-simple mixing-length model, and when only a Reynolds-stress model will suffice.

However, it is just because of this extensive validation campaign that it is possible to assert that NONE of the Class-I models will suffice for the explanation of commonly-observed and practically-important phenomena. This has been suspected for many years; and the passage of time justifies the substitution of conviction for suspicion.

None of the other three classes of model has been subjected to quantitative tests on the same scale. Probably the work on the two-fluid model by the author's former students at Imperial College represents the most systematic campaign; but this came to an end in 1988.

Comparison of Class-III predictions with experiments is being made in the USA for both simple and complex (eg gas-turbine-combustor) circumstances. The former comparisons appear to have drawn attention to a deficiency of all of the "mixing-model" assumptions which have been tried so far: they do not produce the right (Gaussian) probability-density-function shape in the late-time limit.

It is not clear whether this difficulty can be resolved while retaining the Monte-Carlo framework of calculation.

The brief review, in sections 4.1 and 4.2, of the whole turbulence-modelling scene has been provided in order that the multi-fluid turbulence model can be understood in context.

MFM is not entirely new; but it has some significant points of novelty and advantage. The purpose of the present section is to bring these to the reader's attention, by pointing out similarities and differences.

The basic mathematical structure of MFM is similar to that of single-fluid models of turbulence, in that differential equations are formulated and solved in order to represent the influences of:

- time-dependence,
- convection,
- diffusion and
- sources and sinks.

Like single-fluid turbulence models, MFM has its empirical "constants", which may turn out to be functions of the local Reynolds number of turbulence and of other dimensionless quantities; but, so far, little effort has been expended on determining what they are.

MFM has many more equations to solve than single-fluid models do; so computer times are bound to be greater, if the same computational grid is used. However, MFM uses the available computer power in a different way; for it devotes as much attention to differences in fluid properties prevailing at a single location as it does to differences from place to place. Therefore MFM may use coarser geometrical grids than have become usual for single-fluid models.

MFM differs from the two-fluid model worked on by the author and his colleagues in the early 1980s in two main ways, neither of which is essential.

The first is that the states of the two fluids could roam freely over the "phase space" defined by the fluid attributes, whereas MFM restricts the states of its fluids to discrete locations. As soon as adaptive gridding is introduced, however, MFM can enjoy the same freedom.

 ________________________________
|                                |
|   ^              Fluid 2       |
|   |                 *          |
|  vertical                      |
|  attribute                     |
|   |        *                   |
|   |     Fluid 1                |
|________________________________|
      horizontal attribute ----->

The second difference is that two complete sets of Navier-Stokes equations were solved for the two-fluid model, an achievement which may never be attainable for MFM, and has certainly not been attempted.

Nevertheless, the discrete fluids of MFM can indeed move relative to one another under the influences of body forces and fluid-to-fluid friction; and it appears that to allow each fluid its own Navier-Stokes equations would be needless indulgence. A "drift-flux" or "algebraic-slip" approximation is likely to suffice.

It is worth noting that MFM has adopted one valuable feature of two- fluid thinking, namely the distinction between the fluid attributes which DISTINGUISH one fluid from the other and those which are merely POSSESSED by the fluid.

Thus, in the two-fluid model, the vertical velocity may be that which distinguishes the fluids; but each fluid still has its own distinct values of all three velocity components.

The difference between the models that IS essential, of course, is that between two and many; inevitably, MFM is the more capable of describing real flows.

MFM focusses attention on the probability-density functions of the fluid attributes for some of the same reasons as do users of the presumed-pdf approach, who nowadays form the majority of those who attempt to simulate turbulent combustion processes; for it is only knowledge of those functions which enables chemical-kinetic information to be rationally introduced.

However, MFM calculates the pdfs (ie its fluid-population distributions) and shows that the functions may take very varied shapes; whereas the basis of selection used by the shape-presumers is far from reliable.

Moreover, MFM finds it as easy to calculate two-dimensional pdfs as one-dimensional ones; presumed pdfs, by contrast, are perhaps always one-dimensional.

Of course, there can be no certainty, in advance of systematic comparison with experimental data, that the pdfs calculated by MFM in its present form are correct; and there is good reason to believe that the promiscuous-Mendelian coupling-splitting hypothesis is not always the most realistic (see section 6.3).

However MFM is at the beginning of its research life; whereas the presumed-pdf approaches may be near the end of theirs. It appears certain that MFM has the greater potential for development and improvement.

Here the abbreviation CPDFM (for Computed PDF model) is used in place of the longer "probabilistic model".

There is certainly a more than superficial relation between the PDF of the CPDFMs and the FPD of the MFM. In the limit of an infinite number of fluids (for the latter) and of particles (for the former), they may become quantitatively identical.

However, they ARE somewhat different concepts; they are computed by different mathematical techniques; they embody different physical hypotheses; and (possibly) they appeal differently to persons of differing educational backgrounds.

In respect of mathematics, CPDFMs employ Monte-Carlo methods, whereas MFMs use direct numerical solution. This means that CPDFMs focus on particles, whereas MFMs focus on cells. A consequence may prove to be that, for problems for which they can be reasonably compared, CPDFMs will prove to use more computer time, and MFMs to use more storage.

In respect of physics, CPDFMs employ something akin to "coupling", but not, it appears, to "splitting". Nothing like the Mendelian concept appears so far to have been introduced; and perhaps indeed to do so would be impossible within the Monte-Carlo framework.

The "mixing models" of the CPDFMs are recognised, even by their strongest proponents, as being their major source of weakness; so it may be the constraints imposed by the mathematical technique which cause "relaxation to the local mean" to be used as the mixing model when a two-dimensional bluff-body flow is simulated (Correa and Pope, 1992), despite its known inadequacies.
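"Relaxation to the local mean" admits a one-line statement: each particle's scalar decays toward the cell mean at a rate fixed by a mixing frequency. The sketch below, with illustrative constants, also exhibits the known inadequacy just mentioned: because every deviation is multiplied by the same factor, the initial pdf shape is preserved, merely shrunk about the mean, and so can never relax toward a Gaussian.

```python
def iem_step(phi, omega, dt, c_phi=2.0):
    """One "relaxation to the local mean" step:
    d(phi)/dt = -0.5 * c_phi * omega * (phi - mean)."""
    mean = sum(phi) / len(phi)
    decay = 1.0 - 0.5 * c_phi * omega * dt
    return [mean + decay * (p - mean) for p in phi]

phi = [0.0, 0.0, 1.0, 1.0]          # two-spike initial pdf
for _ in range(10):
    phi = iem_step(phi, omega=1.0, dt=0.1)

# Every deviation was multiplied by the same factor, so the pdf is
# still two spikes, merely drawn closer together about the mean.
print(phi)
```

The constant c_phi = 2.0 is the value conventionally quoted for this mixing model; the frequency, time step and particle count are invented for the illustration.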

There are also practical differences: MFMs can provide meaningful, and perhaps adequate, results with rather coarse population grids, eg no more than 20 fluids; but it is not clear that a corresponding saving is possible with CPDFMs.

Moreover, MFMs have always possessed, and need to retain, the notion that only a few of the fluid properties need to be discretised; whereas CPDFMs appear to adopt an all-or-nothing principle.

Finally, to bring to an end a comparison which deserves much more study and space, papers describing CPDFMs are written in language of a highly mathematical character, whereas descriptions of MFMs use more words and fewer symbols. Some users of turbulence models may base their choice between MFM and CPDFM on which is described in the language which is more congenial to them.

In summary, there appear to be sufficient differences between CPDFMs and MFMs, and (in the author's view) sufficient advantages on the side of the MFMs, to render the latter well worthy of the attention of those researchers who aim to provide engineers with the best predictive tools for turbulent combustion.

The eddy-break-up model (Spalding, 1971a) has had a surprisingly (in view of its crudeness) long-lasting influence: it inspired the invention of the eddy-dissipation concept (Magnussen and Hjertager, 1976); it influenced the development of Pope's probabilistic model (1982); and it can usually be found to underlie many models of the presumed-pdf kind.

It is now possible also to discern that the multi-fluid model which is described in the present paper is merely the long-delayed extension of the ideas of the original paper.

All persons with experience in computational fluid dynamics know that, in order to obtain improved accuracy, it is necessary to "refine the grid" sufficiently. MFM, it can be said, is merely a grid-refined eddy-break-up model.

In order to make clear the steps which led from EBU to MFM, the next section summarises part of the relevant publication (Spalding, 1995a).

The central idea of the EBU was that, where the time-average temperature of the gas was above that of the fully-unburned mixture, TU, and below that of the fully-burned mixture, TB, this was the consequence of its being made up of colder fragments inter-mingled with hotter fragments.

Moreover the cold fragments were as cold as they could be, namely of temperature TU; and the hot fragments were as hot as they could be, namely at temperature TB.

The mass fractions of cold and hot gas, MU and MB, were given by:

MU = 1 - MB

   = (TB - T) / (TB - TU),

where T is the local time-average temperature.

Of course, combustion could not take place in the cold fragments, because they were too cold; nor in the hot fragments, because they were too hot.

Since combustion undoubtedly does take place, it was supposed that it did so at the interfaces between the two types of fragments; but these were supposed to occupy only a small proportion of the volume.

The suppositions were then made that:

- the rate of combustion per unit volume was proportional to the
  rate of intermingling of the two types of fragments;

- this rate was proportional to MB * MU * MIXRATE; and

- the magnitude of the variable MIXRATE was proportional either to

      VELOCITY_GRADIENT * MIXING_LENGTH

  in the first model (Spalding, 1971a), or to

      TURBULENCE_ENERGY_DISSIPATION_RATE / TURBULENCE_ENERGY

  in a later version (Mason and Spalding, 1973).
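In the later version, the suppositions above amount to a one-line source term. The sketch below, with illustrative temperatures and turbulence quantities and the model constant left at unity, shows that the rate vanishes in fully-unburned and fully-burned gas and peaks where the two kinds of fragment coexist.

```python
def ebu_rate(T, TU, TB, eps, k, c_ebu=1.0):
    """Eddy-break-up volumetric reaction-rate sketch:
    rate ~ MB * MU * MIXRATE, with MIXRATE = eps / k (the
    Mason-Spalding form) and the hot/cold mass fractions
    deduced from the local mean temperature T."""
    MB = (T - TU) / (TB - TU)
    MU = 1.0 - MB
    return c_ebu * MB * MU * (eps / k)

# The rate is zero in fully-unburned (T = TU) and fully-burned
# (T = TB) gas, and greatest where hot and cold fragments coexist.
rates = [ebu_rate(T, 300.0, 2100.0, eps=10.0, k=1.0)
         for T in (300.0, 1200.0, 2100.0)]
print(rates)
```

The numerical values of TU, TB, eps and k are invented for the illustration; only the functional form is that of the model.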

The model was successful in explaining certain otherwise inexplicable experimental findings, for example the fact that the angle subtended by the flame anchored in a plane-walled channel was nearly independent of approach-gas velocity.

However, it had no means for expressing the influence of chemical kinetics. Yet such an influence does exist, as witness the fact that, when the approach-gas velocity becomes very large, flame propagation abruptly ceases.

This latter shortcoming was addressed by the author already in the original reference, and later by Magnussen and Hjertager (1976), in rather similar ways; but, at least in the present author's opinion, nothing truly satisfactory emerged.

Nearly twenty-five years elapsed before (what now seems) an obvious next step was taken, namely to increase the realism of the model by increasing the number of fluids, ie, in the terms of the present paper, to refine the population grid.

The central idea was to suppose that the turbulent reacting mixture consisted of fragments of more than two kinds. Four, it then appeared, was the smallest number which would explain all the qualitatively-observed facts; so four was the number first used.

Specifically, the mixture was supposed to consist of four fragment classes, namely:

A, comprising fully-unburned gas;

B, comprising a mixture of unburned gas and combustion products of
   too low a temperature for the chemical reaction to proceed with
   significant speed;

C, comprising a mixture of unburned gas and combustion products of
   higher temperature, at which the chemical reaction rate is
   significant; and

D, comprising fully-burned gas.

By analogy with the formulation successfully used for the EBU, it is postulated for the 4-fluid model that, per unit mass of mixture:

- fluid B is produced from fluids A and D at the rate:
      0.5 * MA * MD * MIXRATE
- fluid B is produced from fluids A and C at the rate:
      MA * MC * MIXRATE
- fluid C is produced from fluids A and D at the rate:
      0.5 * MA * MD * MIXRATE
- fluid C is produced from fluids B and D at the rate:
      MB * MD * MIXRATE
- fluid D is produced from fluid C (only) at the rate:
      MC * CHEMRATE
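These postulates translate directly into a few lines of arithmetic; the sketch below merely evaluates the five production rates for an illustrative set of mass fractions (the matching sinks of the parent fluids, which mass conservation requires, are not written out here).

```python
def four_fluid_production_rates(MA, MB, MC, MD, mixrate, chemrate):
    """Production rates (per unit mass of mixture) postulated
    for the 4-fluid model."""
    return {
        "B_from_A_and_D": 0.5 * MA * MD * mixrate,
        "B_from_A_and_C": MA * MC * mixrate,
        "C_from_A_and_D": 0.5 * MA * MD * mixrate,
        "C_from_B_and_D": MB * MD * mixrate,
        "D_from_C": MC * chemrate,
    }

# Illustrative state: mostly unburned (A) and burned (D) gas present,
# with illustrative values of MIXRATE and CHEMRATE.
r = four_fluid_production_rates(MA=0.4, MB=0.2, MC=0.05, MD=0.35,
                                mixrate=10.0, chemrate=100.0)
print(r)
```

Note that only the production of D involves CHEMRATE; all the other exchanges are purely mixing-controlled, which is why the flame angle proves so insensitive to CHEMRATE until the latter becomes small.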

The first application of the 4-fluid model was to the same problem as that for which the EBU was invented, namely that of steady turbulent flame spread in a plane-walled duct, through the upstream end of which flow both a stream of unburned combustible mixture and a separate but thinner stream of fully-burned products to serve as an igniter, as sketched below.

------------------------------------------------------------------------

Steady turbulent flame spread in a plane-walled duct

[Sketch: unburned gas enters at the left, flowing along the channel
wall; a thinner stream of burned products enters along the symmetry
plane; the flame spreads from the symmetry plane toward the wall as
the flow proceeds toward the exit at the right.]

------------------------------------------------------------------------

The first remarkable feature of the experimental results is that the rate of spread of the flame, as measured by its angle for a fixed inlet-stream velocity, is very little dependent on the fuel-air ratio of the incoming mixture; until, that is to say, this ratio becomes too rich or too weak to sustain combustion at all.

The second remarkable feature is that the angle of the flame depends very little on the inlet-stream velocity. This implies that the rate of combustion somehow keeps pace with an increased supply of reactants.

This process was simulated by activating the 4-fluid model within the PHOENICS computer program operating in "parabolic" (ie marching- integration) mode.

The magnitude of the MIXRATE quantity was computed in the same manner as for the eddy-break-up model, ie as proportional to the square root of the turbulence energy divided by the length scale; and the quantity CHEMRATE was taken as equal respectively to 10000, 1000, 100 and 10 times that which would suffice, if it were fluid C which filled the duct, to consume all the fuel by the time that the duct exit was reached.
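One way of picturing the reference value of CHEMRATE is as the reciprocal of the duct residence time; the sketch below assumes plug flow at the inlet velocity, so that consuming all the fuel by the time the duct exit is reached corresponds to a rate of order U/L. This plug-flow reading, and the numbers used, are assumptions made for illustration only, not statements from the original reference.

```python
def chemrate_values(u_inlet, duct_length,
                    multipliers=(10000, 1000, 100, 10)):
    """Reference chemical rate pictured as the reciprocal of the
    duct residence time (plug flow assumed), scaled by the four
    multipliers used in the study."""
    base = u_inlet / duct_length   # 1/s, an order-of-magnitude estimate
    return [m * base for m in multipliers]

# Illustrative duct: 10 m/s inlet velocity, 1 m length.
print(chemrate_values(u_inlet=10.0, duct_length=1.0))
```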

The following figures show, for the four different CHEMRATE values, the profiles of concentration of fluids A, B, C and D across the duct, at a distance half-way between the entrance and the exit.

------------------------------------------------------------------------

Profiles of A, B, C, and D for CHEMRATE = 10000
The symmetry plane is on the left, the wall on the right.

Cell-average mass fractions:
  D = 3.992E-01   C = 7.687E-04   B = 1.871E-01   A = 4.126E-01

[Line-printer plot: mass fractions of fluids A, B, C and D, each
between 0.0 and 1.0, plotted against the cross-duct coordinate Y.]

------------------------------------------------------------------------

Profiles of A, B, C, and D for CHEMRATE = 1000
The symmetry plane is on the left, the wall on the right.

Cell-average mass fractions:
  D = 3.722E-01   C = 7.029E-03   B = 1.883E-01   A = 4.322E-01

[Line-printer plot: mass fractions of fluids A, B, C and D plotted
against the cross-duct coordinate Y.]

------------------------------------------------------------------------

Profiles of A, B, C, and D for CHEMRATE = 100
The symmetry plane is on the left, the wall on the right.

Cell-average mass fractions:
  D = 2.234E-01   C = 3.924E-02   B = 1.919E-01   A = 5.451E-01

[Line-printer plot: mass fractions of fluids A, B, C and D plotted
against the cross-duct coordinate Y.]

------------------------------------------------------------------------

Profiles of A, B, C, and D for CHEMRATE = 10
The symmetry plane is on the left, the wall on the right.

Cell-average mass fractions:
  D = 8.441E-02   C = 4.799E-02   B = 8.105E-02   A = 7.864E-01

[Line-printer plot: mass fractions of fluids A, B, C and D plotted
against the cross-duct coordinate Y.]

Despite the crudeness of the line-printer plots, the following observations can be made:

- In the first three figures, the differences between the
  profiles of the unburned-gas (A) and the burned-gas (D)
  concentrations are rather small. Evidently the change of
  CHEMRATE from 10000 down to 100 has had little effect on the
  angle of the flame, in qualitative agreement with experiment.
- However, the average values of the concentration of the reactive
  gas (C) (see the cell averages at the tops of the figures) are
  very small only when CHEMRATE is high (7.69E-04 for 10000,
  7.03E-03 for 1000). For CHEMRATE = 100, the cell average of C has
  risen to 3.92E-02, signifying that this gas is no longer so easily
  able to convert into products (D) all that is created by
  micro-mixing.
- When, for the fourth figure, CHEMRATE drops to 10, the profiles
  become radically different: the burned-gas (D) concentration
  becomes small everywhere, signifying that the incoming hot-gas
  stream has simply been mixed with the colder unburned-gas stream
  without promoting any significant further reaction; and the
  concentration of C rises as a result of this mixing, without any
  of the diminution which would result from such reaction.

The above example has been discussed at some length because it shows how "refinement of the composition grid" enables new insights to be gained and improved realism to be added to a numerical simulation. It also perhaps has some historic importance.

Later studies have shown that, as any CFD expert would expect, grid refinement from 2 to 4 is not enough for grid-independent results to be attained; but (depending on the presumption made for the chemical kinetics) as few as 10 fluids will give acceptable accuracy.
