Buckingham π theorem

[Image: Edgar Buckingham circa 1886]

In engineering, applied mathematics, and physics, the Buckingham π theorem is a key theorem in dimensional analysis. It is a formalisation of Rayleigh's method of dimensional analysis. Loosely, the theorem states that if there is a physically meaningful equation involving a certain number n of physical variables, then the original equation can be rewritten in terms of a set of p = n − k dimensionless parameters π1, π2, ..., πp constructed from the original variables, where k is the number of physical dimensions involved; it is obtained as the rank of a particular matrix.

The theorem provides a method for computing sets of dimensionless parameters from the given variables, or nondimensionalization, even if the form of the equation is still unknown.

The Buckingham π theorem indicates that the validity of the laws of physics does not depend on a specific unit system. A statement of this theorem is that any physical law can be expressed as an identity involving only dimensionless combinations (ratios or products) of the variables linked by the law (for example, pressure and volume are linked by Boyle's law – they are inversely proportional). If the dimensionless combinations' values changed with the systems of units, then the equation would not be an identity, and the theorem would not hold.

History

Although named for Edgar Buckingham, the π theorem was first proved by the French mathematician Joseph Bertrand in 1878.[1] Bertrand considered only special cases of problems from electrodynamics and heat conduction, but his article contains, in distinct terms, all the basic ideas of the modern proof of the theorem and clearly indicates the theorem's utility for modelling physical phenomena. The technique of using the theorem ("the method of dimensions") became widely known due to the works of Rayleigh. The first application of the π theorem in the general case[note 1] to the dependence of pressure drop in a pipe upon governing parameters probably dates back to 1892,[2] a heuristic proof with the use of series expansions, to 1894.[3]

Formal generalization of the π theorem for the case of arbitrarily many quantities was given first by A. Vaschy in 1892,[4][5] then in 1911—apparently independently—by both A. Federman[6] and D. Riabouchinsky,[7] and again in 1914 by Buckingham.[8] It was Buckingham's article that introduced the use of the symbol "π" for the dimensionless variables (or parameters), and this is the source of the theorem's name.

Statement

More formally, the number p of dimensionless terms that can be formed is equal to the nullity of the dimensional matrix, and k is its rank. For experimental purposes, different systems that share the same description in terms of these dimensionless numbers are equivalent.

In mathematical terms, if we have a physically meaningful equation such as $f(q_1, q_2, \dots, q_n) = 0$, where the $q_i$ are any $n$ physical variables, and there is a maximal dimensionally independent subset of size $k$,[note 2] then the above equation can be restated as $F(\pi_1, \pi_2, \dots, \pi_p) = 0$, where the $\pi_i$ are dimensionless parameters constructed from the $q_i$ by $p = n - k$ dimensionless equations (the so-called Pi groups) of the form $\pi_i = q_1^{a_1} q_2^{a_2} \cdots q_n^{a_n}$, where the exponents $a_i$ are rational numbers. (They can always be taken to be integers by redefining $\pi_i$ as being raised to a power that clears all denominators.) If there are $\ell$ fundamental units in play, then $p \geq n - \ell$.
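
As a minimal sketch of how this counting works in practice (the dimensional matrix below is hypothetical, describing three variables of dimensions $L$, $T$, and $L/T$, and is used only for illustration), the rank $k$ and the nullity $p = n - k$ can be read off directly:

```python
# Minimal sketch: counting pi groups from a dimensional matrix with SymPy.
# Hypothetical setup: rows are fundamental dimensions (L, T); columns are
# three variables whose dimensions are L, T, and L/T respectively.
import sympy as sp

M = sp.Matrix([[1, 0,  1],
               [0, 1, -1]])

n = M.cols        # number of physical variables
k = M.rank()      # size of a maximal dimensionally independent subset
p = n - k         # number of dimensionless pi groups
print(n, k, p)    # 3 2 1
```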

Significance

The Buckingham π theorem provides a method for computing sets of dimensionless parameters from given variables, even if the form of the equation remains unknown. However, the choice of dimensionless parameters is not unique; Buckingham's theorem only provides a way of generating sets of dimensionless parameters and does not indicate which set is the most "physically meaningful".

Two systems for which these parameters coincide are called similar (as with similar triangles, they differ only in scale); they are equivalent for the purposes of the equation, and the experimentalist who wants to determine the form of the equation can choose the most convenient one. Most importantly, Buckingham's theorem describes the relation between the number of variables and fundamental dimensions.

Proof

For simplicity, it will be assumed that the space of fundamental and derived physical units forms a vector space over the real numbers, with the fundamental units as basis vectors, and with multiplication of physical units as the "vector addition" operation, and raising to powers as the "scalar multiplication" operation: represent a dimensional variable as the set of exponents needed for the fundamental units (with a power of zero if the particular fundamental unit is not present). For instance, the standard gravity $g$ has units of $L/T^{2}$ (length over time squared), so it is represented as the vector $(1, -2)$ with respect to the basis of fundamental units (length, time). We could also require that exponents of the fundamental units be rational numbers and modify the proof accordingly, in which case the exponents in the pi groups can always be taken as rational numbers or even integers.

Rescaling units

Suppose we have quantities $q_1, q_2, \dots, q_n$, where the units of $q_i$ contain length raised to the power $c_i$. If we originally measure length in meters but later switch to centimeters, then the numerical value of $q_i$ would be rescaled by a factor of $100^{c_i}$. Any physically meaningful law should be invariant under an arbitrary rescaling of every fundamental unit; this is the fact that the pi theorem hinges on.
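
A small numerical check of this invariance (an illustrative sketch; the values for a distance, a time, and a speed are made up for the example):

```python
# Sketch: switching the length unit from meters to centimeters multiplies
# each quantity by 100 raised to its length exponent, but leaves a
# dimensionless combination unchanged.
d, t, v = 3.0, 1.5, 2.0      # hypothetical distance [m], time [s], speed [m/s]

factor = 100.0               # meters -> centimeters
d_cm = d * factor**1         # length exponent of d is 1
t_cm = t * factor**0         # length exponent of t is 0
v_cm = v * factor**1         # length exponent of v is 1

print(t * v / d)             # 1.0  (dimensionless group t*v/d)
print(t_cm * v_cm / d_cm)    # 1.0  (unchanged by the rescaling)
```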

Formal proof

Given a system of $n$ dimensional variables $q_1, \dots, q_n$ in $\ell$ fundamental (basis) dimensions, the dimensional matrix is the $\ell \times n$ matrix $M$ whose rows correspond to the fundamental dimensions and whose columns are the dimensions of the variables: the $(i, j)$th entry (where $1 \leq i \leq \ell$ and $1 \leq j \leq n$) is the power of the $i$th fundamental dimension in the $j$th variable. The matrix can be interpreted as taking in a combination of the variable quantities and giving out the dimensions of the combination in terms of the fundamental dimensions. So the (column) vector that results from the multiplication $Ma$ consists of the units of $q_1^{a_1} q_2^{a_2} \cdots q_n^{a_n}$ in terms of the $\ell$ fundamental independent (basis) units.[note 3]
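
A concrete sketch of this interpretation (reusing the hypothetical two-dimension, three-variable setup from the earlier sketch; the exponent vectors are chosen only for illustration): multiplying $M$ by an exponent vector $a$ returns the dimensions of $q_1^{a_1} q_2^{a_2} q_3^{a_3}$.

```python
# Sketch: M maps an exponent vector a to the dimensions of q1**a1 * q2**a2 * q3**a3.
# Hypothetical setup: dimensions (L, T); variables with dimensions L, T, L/T.
import sympy as sp

M = sp.Matrix([[1, 0,  1],    # powers of L in q1, q2, q3
               [0, 1, -1]])   # powers of T in q1, q2, q3

a = sp.Matrix([-1, 1, 1])     # exponents of q1**-1 * q2 * q3
print((M * a).T)              # Matrix([[0, 0]]): the combination is dimensionless

b = sp.Matrix([0, 0, 2])      # exponents of q3**2
print((M * b).T)              # Matrix([[2, -2]]): dimensions L^2 T^-2
```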

If we rescale the $i$th fundamental unit by a factor of $\alpha_i$, then $q_j$ gets rescaled by $\alpha_1^{-m_{1j}} \alpha_2^{-m_{2j}} \cdots \alpha_\ell^{-m_{\ell j}}$, where $m_{ij}$ is the $(i, j)$th entry of the dimensional matrix. In order to convert this into a linear algebra problem, we take logarithms (the base is irrelevant), yielding $\log q_j \mapsto \log q_j - \sum_{i=1}^{\ell} m_{ij} \log \alpha_i$, which is an action of $\mathbb{R}^\ell$ on $\mathbb{R}^n$. We define a physical law to be an arbitrary function $f\colon \mathbb{R}^n \to \mathbb{R}$ such that $(q_1, q_2, \dots, q_n)$ is a permissible set of values for the physical system when $f(q_1, q_2, \dots, q_n) = 0$. We further require $f$ to be invariant under this action. Hence it descends to a function $F\colon \mathbb{R}^n / \mathbb{R}^\ell \to \mathbb{R}$. All that remains is to exhibit an isomorphism between $\mathbb{R}^n / \mathbb{R}^\ell$ and $\mathbb{R}^p$, the (log) space of pi groups $(\log \pi_1, \log \pi_2, \dots, \log \pi_p)$.

We construct an $n \times p$ matrix $K$ whose columns are a basis for $\ker M$. It tells us how to embed $\mathbb{R}^p$ into $\mathbb{R}^n$ as the kernel of $M$. That is, we have an exact sequence

$$0 \to \mathbb{R}^p \xrightarrow{K} \mathbb{R}^n \xrightarrow{M} \mathbb{R}^\ell.$$

Taking transposes yields another exact sequence

$$\mathbb{R}^\ell \xrightarrow{M^{\mathsf{T}}} \mathbb{R}^n \xrightarrow{K^{\mathsf{T}}} \mathbb{R}^p \to 0.$$

The first isomorphism theorem produces the desired isomorphism, which sends the coset $v + M^{\mathsf{T}}\mathbb{R}^\ell$ to $K^{\mathsf{T}} v$. This corresponds to rewriting the tuple $(\log q_1, \log q_2, \dots, \log q_n)$ into the pi groups $(\log \pi_1, \log \pi_2, \dots, \log \pi_p)$ coming from the columns of $K$.
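
The construction can be sketched numerically (an illustrative sketch, assuming the same hypothetical dimensional matrix as in the earlier sketches and arbitrary positive values for the variables): the columns of $K$ span $\ker M$, and $K^{\mathsf{T}} \log q$, the vector of logarithms of the pi groups, is unchanged by the rescaling action.

```python
# Sketch: K spans ker M; K^T log(q) gives the log of the pi group(s) and is
# invariant under rescaling of the fundamental units.
import numpy as np
import sympy as sp

M = sp.Matrix([[1, 0,  1],               # hypothetical dimensional matrix
               [0, 1, -1]])              # (dimensions L, T; three variables)

K = sp.Matrix.hstack(*M.nullspace())     # n x p matrix; here p = 1
Kt = np.array(K.T.tolist(), dtype=float)
Mt = np.array(M.T.tolist(), dtype=float)

q = np.array([3.0, 1.5, 2.0])            # hypothetical values of the variables
alpha = np.array([100.0, 60.0])          # rescale the two fundamental units

# The action above shifts log(q_j) by -sum_i m_ij * log(alpha_i).
log_q_rescaled = np.log(q) - Mt @ np.log(alpha)

print(Kt @ np.log(q))                    # log of the pi group
print(Kt @ log_q_rescaled)               # identical: the pi group is invariant
```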

The International System of Units defines seven base units, which are the ampere, kelvin, second, metre, kilogram, candela and mole. It is sometimes advantageous to introduce additional base units and related techniques to refine dimensional analysis (see orientational analysis and the work of Schlick and Le Sergent[9]).

Examples

Speed

This example is elementary but serves to demonstrate the procedure.

Suppose a car is driving at 100 km/h; how long does it take to go 200 km?

This question considers $n = 3$ dimensioned variables: distance $d$, time $t$, and speed $v$, and we are seeking some law of the form $t = \operatorname{Duration}(v, d)$. Any two of these variables are dimensionally independent, but the three taken together are not. Thus there is $p = n - k = 3 - 2 = 1$ dimensionless quantity.

The dimensional matrix is

$$M = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & -1 \end{pmatrix},$$

in which the rows correspond to the basis dimensions $L$ and $T$, and the columns to the considered dimensions $L, T, V$, where the latter stands for the speed dimension. The elements of the matrix correspond to the powers to which the respective dimensions are to be raised. For instance, the third column $(1, -1)$ states that $V$ (speed), represented by the column vector $\mathbf{v} = (0, 0, 1)$, is expressible in terms of the basis dimensions as $V = L^{1} T^{-1}$, since $M\mathbf{v} = (1, -1)$.

For a dimensionless constant $\pi = L^{a_1} T^{a_2} V^{a_3}$, we are looking for vectors $a = (a_1, a_2, a_3)$ such that the matrix-vector product $Ma$ equals the zero vector $(0, 0)$. In linear algebra, the set of vectors with this property is known as the kernel (or nullspace) of the dimensional matrix. In this particular case its kernel is one-dimensional. The dimensional matrix as written above is in reduced row echelon form, so one can read off a non-zero kernel vector to within a multiplicative constant:

$$a = \begin{pmatrix} -1 \\ 1 \\ 1 \end{pmatrix}.$$

If the dimensional matrix were not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant, replacing the dimensions by the corresponding dimensioned variables, may be written as

$$\pi = d^{-1} t^{1} v^{1} = \frac{tv}{d}.$$

Since the kernel is only defined to within a multiplicative constant, the above dimensionless constant raised to any arbitrary power yields another (equivalent) dimensionless constant.

Dimensional analysis has thus provided a general equation relating the three physical variables: $F(\pi) = 0$, or, letting $C$ denote a zero of the function $F$, $\pi = \frac{tv}{d} = C$, which can be written in the desired form (which recall was $t = \operatorname{Duration}(v, d)$) as $t = C\,\frac{d}{v}$.

The actual relationship between the three variables is simply $d = vt$. In other words, in this case $F$ has one physically relevant root, and it is unity. The fact that only a single value of $C$ will do and that it is equal to 1 is not revealed by the technique of dimensional analysis.
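
The kernel computation in this example can also be handed to a computer algebra system; the following sketch (symbol names chosen here for illustration) reproduces the result with SymPy:

```python
# Sketch: the speed example's dimensional matrix, its kernel, and the pi group.
import sympy as sp

# Rows: dimensions (L, T); columns: the dimensions of distance, time, speed.
M = sp.Matrix([[1, 0,  1],
               [0, 1, -1]])

a = M.nullspace()[0]       # one-dimensional kernel
print(a.T)                 # Matrix([[-1, 1, 1]])

d, t, v = sp.symbols('d t v', positive=True)
print(d**a[0] * t**a[1] * v**a[2])   # t*v/d
```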

The simple pendulum

We wish to determine the period $T$ of small oscillations in a simple pendulum. It will be assumed that it is a function of the length $L$, the mass $M$, and the acceleration due to gravity on the surface of the Earth $g$, which has dimensions of length divided by time squared. The model is of the form $f(T, M, L, g) = 0$.

(Note that it is written as a relation, not as a function: $T$ is not written here as a function of $M$, $L$, and $g$.)

Period, mass, and length are dimensionally independent, but acceleration can be expressed in terms of time and length, which means the four variables taken together are not dimensionally independent. Thus we need only $p = n - k = 4 - 3 = 1$ dimensionless parameter, denoted by $\pi$, and the model can be re-expressed as $F(\pi) = 0$, where $\pi$ is given by $\pi = T^{a_1} M^{a_2} L^{a_3} g^{a_4}$ for some values of $a_1, a_2, a_3, a_4$.

The dimensions of the dimensional quantities are: $T = t$, $M = m$, $L = \ell$, and $g = \ell/t^{2}$.

The dimensional matrix is:

$$\mathbf{M} = \begin{pmatrix} 1 & 0 & 0 & -2 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix}.$$

(The rows correspond to the dimensions $t$, $m$, and $\ell$, and the columns to the dimensional variables $T, M, L, g$. For instance, the 4th column, $(-2, 0, 1)$, states that the variable $g$ has dimensions of $t^{-2} m^{0} \ell^{1}$.)

We are looking for a kernel vector $a = (a_1, a_2, a_3, a_4)$ such that the matrix product of $\mathbf{M}$ on $a$ yields the zero vector $(0, 0, 0)$. The dimensional matrix as written above is in reduced row echelon form, so one can read off a kernel vector to within a multiplicative constant:

$$a = \begin{pmatrix} 2 \\ 0 \\ -1 \\ 1 \end{pmatrix}.$$

Were it not already reduced, one could perform Gauss–Jordan elimination on the dimensional matrix to more easily determine the kernel. It follows that the dimensionless constant may be written as

$$\pi = T^{2} M^{0} L^{-1} g^{1} = \frac{g T^{2}}{L}.$$

In fundamental terms, $\pi = (t)^{2} (m)^{0} (\ell)^{-1} \left(\frac{\ell}{t^{2}}\right)^{1} = 1$, which is dimensionless. Since the kernel is only defined to within a multiplicative constant, if the above dimensionless constant is raised to any arbitrary power, it will yield another equivalent dimensionless constant.

In this example, three of the four dimensional quantities are fundamental units, so the last (which is $g$) must be a combination of the previous ones. Note that if $a_2$ (the coefficient of $M$) had been non-zero, then there would be no way to cancel the $M$ value; therefore $a_2$ must be zero. Dimensional analysis has allowed us to conclude that the period of the pendulum is not a function of its mass. (In the 3D space of powers of mass, time, and distance, we can say that the vector for mass is linearly independent from the vectors for the three other variables. Up to a scaling factor, $\vec{g} + 2\vec{T} - \vec{L}$ is the only nontrivial way to construct a vector of a dimensionless parameter.)

The model can now be expressed as:

$$F\!\left(\frac{g T^{2}}{L}\right) = 0.$$

Then this implies that $\frac{g T^{2}}{L} = C_i$ for some zero $C_i$ of the function $F$. If there is only one zero, call it $C$, then $\frac{g T^{2}}{L} = C$. It requires more physical insight or an experiment to show that there is indeed only one zero and that the constant is in fact given by $C = 4\pi^{2}$.
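
The same kernel computation can be sketched for the pendulum (an illustrative sketch; the last line merely checks that the known small-oscillation period $T = 2\pi\sqrt{L/g}$ gives $C = 4\pi^{2}$):

```python
# Sketch: kernel of the pendulum's dimensional matrix and the resulting pi group.
import sympy as sp

# Rows: dimensions (t, m, l); columns: the dimensions of T, M, L, g.
Mdim = sp.Matrix([[1, 0, 0, -2],
                  [0, 1, 0,  0],
                  [0, 0, 1,  1]])

a = Mdim.nullspace()[0]
print(a.T)                              # Matrix([[2, 0, -1, 1]])

T, m, L, g = sp.symbols('T m L g', positive=True)
pi_group = T**a[0] * m**a[1] * L**a[2] * g**a[3]
print(pi_group)                         # T**2*g/L

# With T = 2*pi*sqrt(L/g), the pi group evaluates to 4*pi**2.
print(sp.simplify(pi_group.subs(T, 2*sp.pi*sp.sqrt(L/g))))   # 4*pi**2
```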

For large oscillations of a pendulum, the analysis is complicated by an additional dimensionless parameter, the maximum swing angle. The above analysis is a good approximation as the angle approaches zero.

Electric power

To demonstrate the application of the π theorem, consider the power consumption of a stirrer with a given shape. The power, P, in dimensions [M · L2/T3], is a function of the density, ρ [M/L3], and the viscosity of the fluid to be stirred, μ [M/(L · T)], as well as the size of the stirrer given by its diameter, D [L], and the angular speed of the stirrer, n [1/T]. Therefore, we have a total of n = 5 variables representing our example. Those n = 5 variables are built up from k = 3 independent dimensions, e.g., length: L (SI units: m), time: T (s), and mass: M (kg).

According to the π theorem, the n = 5 variables can be reduced by the k = 3 dimensions to form p = n − k = 5 − 3 = 2 independent dimensionless numbers. Usually, these quantities are chosen as $\mathrm{Re} = \frac{\rho n D^{2}}{\mu}$, commonly named the Reynolds number, which describes the fluid flow regime, and $N_\mathrm{p} = \frac{P}{\rho n^{3} D^{5}}$, the power number, which is the dimensionless description of the stirrer.

Note that the two dimensionless quantities are not unique and depend on which of the n = 5 variables are chosen as the k = 3 dimensionally independent basis variables, which, in this example, appear in both dimensionless quantities. The Reynolds number and power number fall out of the above analysis if $\rho$, $n$, and $D$ are chosen to be the basis variables. If, instead, $\mu$, $n$, and $D$ are selected, the Reynolds number is recovered while the second dimensionless quantity becomes $\frac{P}{\mu D^{3} n^{2}}$; this quantity is the product of the Reynolds number and the power number.
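
A sketch of the same analysis in SymPy (symbol names are illustrative): the nullspace of the stirrer's dimensional matrix is two-dimensional, but the basis SymPy happens to return is not the conventional pair $(\mathrm{Re}, N_\mathrm{p})$, which again illustrates the non-uniqueness of the dimensionless groups. The final line checks that the product of the Reynolds number and the power number equals $P/(\mu n^{2} D^{3})$.

```python
# Sketch: two pi groups for the stirrer from the nullspace of its
# dimensional matrix, plus a check of the product identity quoted above.
import sympy as sp

# Rows: dimensions (M, L, T); columns: variables (P, rho, mu, D, n).
Mdim = sp.Matrix([[ 1,  1,  1, 0,  0],    # mass
                  [ 2, -3, -1, 1,  0],    # length
                  [-3,  0, -1, 0, -1]])   # time

print(Mdim.rank(), Mdim.cols - Mdim.rank())    # 3 2 -> two pi groups

P, rho, mu, D, n = sp.symbols('P rho mu D n', positive=True)
q = [P, rho, mu, D, n]

for a in Mdim.nullspace():                     # one possible kernel basis
    print(sp.Mul(*[qi**ai for qi, ai in zip(q, a)]))

Re = rho * n * D**2 / mu                       # Reynolds number
Np = P / (rho * n**3 * D**5)                   # power number
print(sp.simplify(Re * Np - P / (mu * n**2 * D**3)))   # 0
```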

Other examples

An example of dimensional analysis can be found for the case of the mechanics of a thin, solid and parallel-sided rotating disc. There are five variables involved which reduce to two non-dimensional groups. The relationship between these can be determined by numerical experiment using, for example, the finite element method.[10]

The theorem has also been used in fields other than physics, for instance in sports science.[11]

See also

References

Notes

  1. ^ That is, the case in which applying the π theorem gives rise to an arbitrary function of dimensionless numbers.
  2. ^ A dimensionally independent set of variables is one for which the only exponents $a_1, a_2, \dots$ yielding a dimensionless quantity $q_1^{a_1} q_2^{a_2} \cdots$ are $a_1 = a_2 = \cdots = 0$. This is precisely the notion of linear independence.
  3. ^ If these basis units are $b_1, \dots, b_\ell$ and if the units of $q_j$ are $q_j = m_{1j} b_1 + \cdots + m_{\ell j} b_\ell$ for every $1 \leq j \leq n$, then $Ma = \left(\sum_{j=1}^{n} m_{1j} a_j, \dots, \sum_{j=1}^{n} m_{\ell j} a_j\right)$ so that, for instance, the units of $q_1$ in terms of these basis units are $M(1, 0, \dots, 0)^{\mathsf{T}} = (m_{11}, \dots, m_{\ell 1})$. For a concrete example, suppose that the $\ell = 2$ fundamental units are meters $b_1 = \mathrm{m}$ and seconds $b_2 = \mathrm{s}$ and that there are $n = 3$ dimensional variables: an acceleration $q_1$ with units $\mathrm{m}/\mathrm{s}^{2}$, a frequency $q_2$ with units $1/\mathrm{s}$, and a speed $q_3$ with units $\mathrm{m}/\mathrm{s}$. By definition of vector addition and scalar multiplication of units, $q_1 = (1, -2)$, $q_2 = (0, -1)$, and $q_3 = (1, -1)$, so that $M = \begin{pmatrix} 1 & 0 & 1 \\ -2 & -1 & -1 \end{pmatrix}$. By definition, the dimensionless variables are those whose units are $(0, 0)$, which are exactly the products $q_1^{a_1} q_2^{a_2} q_3^{a_3}$ whose exponent vectors lie in $\ker M = \operatorname{span}\{(1, -1, -1)\}$. This can be verified by a direct computation: $q_1^{1} q_2^{-1} q_3^{-1}$ has units $(\mathrm{m}/\mathrm{s}^{2})(\mathrm{s})(\mathrm{s}/\mathrm{m}) = 1$, which is indeed dimensionless. Consequently, if some physical law states that $q_1, q_2, q_3$ are necessarily related by a (presumably unknown) equation of the form $f(q_1, q_2, q_3) = 0$ for some (unknown) function $f$ (that is, the tuple $(q_1, q_2, q_3)$ is necessarily a zero of $f$), then there exists some (also unknown) function $F$ that depends on only $p = 1$ variable, the dimensionless variable $\pi = q_1 q_2^{-1} q_3^{-1}$ (or any non-zero rational power $\hat{\pi} = \pi^{s}$ of $\pi$, where $0 \neq s \in \mathbb{Q}$), such that $F(\pi) = 0$ holds (if $\hat{\pi}$ is used instead of $\pi$ then $F$ can be replaced with $\hat{F}(x) = F\left(x^{1/s}\right)$ and once again $\hat{F}(\hat{\pi}) = 0$ holds). Thus in terms of the original variables, $F\left(q_1 q_2^{-1} q_3^{-1}\right) = 0$ must hold (alternatively, if using $\hat{\pi} = \pi^{2}$ for instance, then $\hat{F}\left(q_1^{2} q_2^{-2} q_3^{-2}\right) = 0$ must hold). In other words, the Buckingham π theorem implies that the original relation $f(q_1, q_2, q_3) = 0$ can be rewritten as $F\left(q_1 q_2^{-1} q_3^{-1}\right) = 0$, so that if it happens to be the case that $F$ has exactly one zero, call it $C$, then the equation $q_1 q_2^{-1} q_3^{-1} = C$ will necessarily hold (the theorem does not give information about what the exact value of the constant $C$ will be, nor does it guarantee that $F$ has exactly one zero).

Citations

  1. ^ Bertrand, J. (1878). "Sur l'homogénéité dans les formules de physique". Comptes Rendus. 86 (15): 916–920.
  2. ^ Rayleigh (1892). "On the question of the stability of the flow of liquids". Philosophical Magazine. 34 (206): 59–70. doi:10.1080/14786449208620167.
  3. ^ Strutt, John William (1896). The Theory of Sound. Vol. II (2nd ed.). Macmillan.
  4. ^ Quotes from Vaschy's article with his statement of the pi–theorem can be found in: Macagno, E. O. (1971). "Historico-critical review of dimensional analysis". Journal of the Franklin Institute. 292 (6): 391–402. doi:10.1016/0016-0032(71)90160-8.
  5. ^ De A. Martins, Roberto (1981). "The origin of dimensional analysis". Journal of the Franklin Institute. 311 (5): 331–337. doi:10.1016/0016-0032(81)90475-0.
  6. ^ Федерман, А. (1911). "О некоторых общих методах интегрирования уравнений с частными производными первого порядка". Известия Санкт-Петербургского политехнического института императора Петра Великого. Отдел техники, естествознания и математики. 16 (1): 97–155. (Federman A., On some general methods of integration of first-order partial differential equations, Proceedings of the Saint-Petersburg polytechnic institute. Section of technics, natural science, and mathematics)
  7. ^ Riabouchinsky, D. (1911). "Мéthode des variables de dimension zéro et son application en aérodynamique". L'Aérophile: 407–408.
  8. ^ Buckingham, E. (1914). "On physically similar systems; illustrations of the use of dimensional equations". Physical Review. 4 (4): 345–376. doi:10.1103/PhysRev.4.345.
  9. ^ Schlick, R.; Le Sergent, T. (2006). "Checking SCADE Models for Correct Usage of Physical Units". Computer Safety, Reliability, and Security. Lecture Notes in Computer Science. Vol. 4166. Berlin: Springer. pp. 358–371. doi:10.1007/11875567_27. ISBN 978-3-540-45762-6.
  10. ^ Ramsay, Angus. "Dimensional Analysis and Numerical Experiments for a Rotating Disc". Ramsay Maunder Associates. Retrieved 15 April 2017.
  11. ^ Blondeau, J. (2020). "The influence of field size, goal size and number of players on the average number of goals scored per game in variants of football and hockey: the Pi-theorem applied to team sports". Journal of Quantitative Analysis in Sports. 17 (2): 145–154. doi:10.1515/jqas-2020-0009. S2CID 224929098.

Bibliography

Original sources
