


A draft by David Jefferies, 21st June 2002

revised 1st September 2002

revised 14th October 2002



Hierarchy and complexity in the physical sciences and in technology

David J Jefferies

1 Introduction.

The new science of complexity has fewer practitioners in the "hard sciences" than it does in the more diffuse disciplines. Physicists, and to a lesser extent Chemists, Biochemists, Earth Scientists and Technologists, still largely adopt the reductionist viewpoint of regarding a system as composed of subsystems or parts, analysing these parts, and then recombining them with due consideration of their measured or perceived hierarchy. Indeed, many would say that this is the only truly "Scientific" way to proceed.

Thus, an imposed hierarchy is all-pervasive in the approach that people have taken to these disciplines. Understand the structure and the detail, the argument goes, and the big picture must emerge, given a sufficient quantity of detail. Insofar as complexity enters the thought processes at all, it does so via this crude measure of the quantity of detail. This is the reductionist approach. When hierarchy is attributed to a complex system, reductionism is not far away.

In many physical systems, on the other hand, the recognizable features of a complex system emerge when the individual parts are allowed to interact, and the interaction strength is gradually increased. These features are often completely absent in the individual component parts, and would not be predicted by rational reductionist thought based on the properties of the constituents alone. Here, there is a different kind of hierarchy - one that develops over time, sometimes unpredictably, as we observe in nature in many complex systems. Thus, in this view, the hierarchy is a dynamical product of the system, liable to change, and not just a structural framework within which the component parts reside and interact. It is also true that in many systems, the hierarchical structure changes discontinuously as the interaction strength increases, possibly more than once. Predicting these "phase transition" points provides a considerable challenge.

In "reducing" a complete complex system to a collection of interacting parts, there are many ways the division may be made, and the boundaries between the parts may be chosen. It is often found that when the divisions are made differently, the description of the whole, and the "hierarchy which emerges", produced by letting the parts interact, also changes. Thus it is important to recognize the physically significant boundaries in the complex system. This is much easier to do in some specific situations than in others. In the prediction of complexity, the practitioner does not (in general) want to arrive at boundary-dependent properties that differ depending on the imposed partition of the problem.

Also, one may identify complexity which is principally structural in origin, as well as complexity which results from dynamics. In the latter case, one often sees concomitant structural complexity emerging. Consider smoke rising from a wet fire, and breaking up into whorls and turbulence as it ascends. Generally, we may see dynamical complexity being produced as the direct product of large flows of energy in a system.

In all these cases, the human observer attempts to classify what (s)he sees as a hierarchy of cause and effect. We exist in a temporal world in which time flows forwards, and the whole idea of cause and effect is linked to our perception of the arrow of time. We also seek to separate the constant from the changing; we ascribe a "system" to what is constant about our surroundings, and interpret what is changing as the "dynamics of the system".

Hierarchy, in the complex sense, is not static and immutable. The distinction is between an imposed, engineered, or created hierarchy for the system, into which all the component parts are required to fit and "find their niche", and a vibrant, alive, dynamical hierarchy which is constantly evolving, and which has a more descriptive (and less controlling) structure. In this case it sometimes helps to think of a "variable structure system" or VSS, where the structure determines the dynamics, which in turn modifies the structure at some later time. Biological evolution fits neatly into this scheme.

The science of complexity provides us with a framework for looking at some problems in the physical sciences which may appear disparate, but which contain features in common. The requirements a system must meet for the complex-science viewpoint to be useful are set out later, in section 2.

Meanwhile, we give some separate examples which have been informed by such thinking, but for which there seem to be no sound reductionist arguments.

1.1 A few introductory examples

1.1.1 Solid state Physics.

Metallic copper. We ask the question: in what way does the metal copper differ from a single copper atom? Or, put another way, if we bring copper atoms into close proximity, at what number of atoms in the conglomerate does the collection turn into the reddish metallic solid, thermally and electrically conducting, that we recognize in everyday life? Do the properties emerge gradually with the number N of interacting atoms, or is there a sudden transition as N increases? Can we even persuade a complex of, say, 27 (3x3x3) copper atoms to stay attached to each other? Has it got something to do with the ratio of the number Ni of internal atoms (1 in our example) to the number Ns of surface atoms (26 in our example)?
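The ratio in the question above is easy to tabulate for an idealised cubic cluster. The sketch below is a simple counting exercise only (it ignores the real face-centred-cubic structure of copper) and shows how quickly the internal atoms come to dominate as the cluster grows:

```python
# Counting sketch for an idealised n x n x n cubic cluster of atoms.
# Illustrative geometry only -- real copper crystallises face-centred cubic.

def atom_counts(n):
    """Return (internal, surface) atom counts for an n*n*n cubic cluster."""
    total = n ** 3
    internal = max(n - 2, 0) ** 3   # atoms not touching any face of the cube
    surface = total - internal
    return internal, surface

for n in (3, 10, 100):
    ni, ns = atom_counts(n)
    print(f"n={n:4d}: internal={ni}, surface={ns}, Ni/Ns={ni/ns:.2f}")
```

For n = 3 this reproduces the 1 internal and 26 surface atoms of the text; by n = 100 the internal atoms outnumber the surface atoms roughly sixteen to one, which is one crude way of asking when "bulk" behaviour might take over.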

1.1.2 Thermodynamics of phase changes.

Air. From experiment, but probably not (it is suggested) from thought alone, it is known that as the temperature of air is reduced, its component gases liquefy at various temperatures and the chemical composition of the remaining gas mixture changes. If we start from a detailed knowledge of the properties of the several molecules that make up air, can we hope to predict this process and find quantitative values for the temperatures at which the various liquefactions occur?

1.1.3 Traffic flow on computer networks

Collective behaviour of Internet traffic. It is now known from experimental observations that fluctuations of Internet traffic often obey self-similar or fractal scaling laws. Various attempts have been made to explain this phenomenon; what is certain is that the network designers had no intention of providing such behaviour, nor did they expect it to happen, and they have had to go to some trouble to try to explain it by various reductionist arguments.
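One widely cited reductionist explanation attributes the self-similarity to the superposition of many ON/OFF sources whose period lengths are heavy-tailed. A minimal sketch of that idea, with illustrative parameter values of my own choosing, is:

```python
# Sketch: superpose many ON/OFF sources with heavy-tailed (Pareto) period
# lengths and look at how the variance of block averages decays with block
# size.  All parameter values here are illustrative assumptions.

import random

def pareto_period(alpha=1.5, xmin=1.0, rng=random):
    """Heavy-tailed (Pareto) period length; infinite variance for alpha < 2."""
    return xmin / ((1.0 - rng.random()) ** (1.0 / alpha))

def onoff_source(length, rng):
    """0/1 activity series for one source with Pareto ON and OFF periods."""
    series, state = [], 1
    while len(series) < length:
        series += [state] * int(pareto_period(rng=rng))
        state = 1 - state
    return series[:length]

def aggregate_variance(series, m):
    """Variance of the series averaged over non-overlapping blocks of size m."""
    blocks = [sum(series[i:i + m]) / m for i in range(0, len(series) - m + 1, m)]
    mean = sum(blocks) / len(blocks)
    return sum((b - mean) ** 2 for b in blocks) / len(blocks)

rng = random.Random(1)
traffic = [sum(s) for s in zip(*(onoff_source(20000, rng) for _ in range(50)))]
for m in (1, 10, 100):
    print(f"block size {m:4d}: variance {aggregate_variance(traffic, m):.3f}")
```

For short-range-dependent traffic the block-averaged variance would fall off roughly as 1/m; a markedly slower fall-off is the usual signature of self-similar scaling.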

2 The complex view

Complexity is ubiquitous across the Physical sciences. Prerequisites for complex-systems analysis to be of use seem to include the following:

2.1 Prerequisites

1) Sub-systems of many similar parts, not necessarily identical

2) An interaction mechanism between the parts that may possibly vary

3) Time development

4) Feedback: for example, as an environment develops over time due to the interactions of the parts, it affects the behaviour of those parts and also the behaviour of their interactions.

5) Recursion. It may happen that the individual subsystems depend on previous versions of themselves, and may be modelled by a discrete iterating set of equations.

When these conditions are met, it is possible for emergent behaviour to occur, where the structure and the dynamics of the system develop in unexpected ways, with transitions occurring that resemble the phase transitions familiar from Chemistry. By the term "emergent behaviour" it is normally understood that, according to some "measure of complexity" or "order parameter", the system becomes more complex, or differently ordered, as time passes.
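All five prerequisites can be seen in a deliberately small toy model: a ring of diffusively coupled logistic maps (many similar parts, a tunable interaction, time development, feedback, and recursion). The map parameter, coupling strengths, and lattice size below are arbitrary illustrative choices, not taken from the text:

```python
# Toy model exhibiting the five prerequisites: a ring of diffusively
# coupled logistic maps.  Parameter values are illustrative only.

def step(xs, coupling, r=3.9):
    """One synchronous update of the ring of coupled logistic maps."""
    n = len(xs)
    f = [r * x * (1 - x) for x in xs]          # local, recursive dynamics
    return [(1 - coupling) * f[i]
            + 0.5 * coupling * (f[(i - 1) % n] + f[(i + 1) % n])  # interaction
            for i in range(n)]

def spread(xs):
    """Crude 'order parameter': how far the parts are from all agreeing."""
    return max(xs) - min(xs)

xs0 = [0.1 + 0.8 * i / 19 for i in range(20)]  # 20 similar, non-identical parts
for coupling in (0.0, 0.2, 0.5):
    xs = xs0[:]
    for _ in range(500):
        xs = step(xs, coupling)
    print(f"coupling {coupling:.1f}: spread after 500 steps = {spread(xs):.3f}")
```

As the interaction strength is turned up, the collective state of the ring can change character qualitatively, which is a small-scale analogue of the "phase transition" behaviour described above.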

3 A list of complex systems

Such complexity arises in, for example, the following systems in the Physical sciences.

(This list is illustrative, not exhaustive.)

3.1 Cosmology and astronomy: structure of the universe after the big bang.

3.1.1 Collective behaviour of mass particles in a gravitational field, for example the structure of a galaxy or of Saturn's rings.

3.1.2 Satellite orbit dynamics

3.2 Fluid flow and turbulence.

3.2.1 Fluidised beds for Chemical engineering.

3.2.2 Weather patterns.

3.3 Geological landscape formation.

3.4 Fire propagation in extended regular systems

3.5 Crystallisation

3.6 Magnetic and ferroelectric domains in solids

3.7 Mechanical motions of impacting systems

3.8 Chemical reaction kinetics.

3.9 Biochemical synthesis from cell culture.

3.10 Traffic flow.

3.11 Technological evolution in the marketplace.

4 Modelling

In most complex systems of interest, direct study by data acquisition is impracticable because of the large scale and multivariate nature of the systems. The alternative is to set up a simple model which represents what are believed to be the essential components of the system, and to let it evolve and compare the outcomes with limited data taken from the real system.

4.0 Agent-based models.

The essence of agent-based modelling is to construct a space containing a number of classes, each with many identical agents; these agents are equipped with rule sets that govern their interactions with each other, with the boundaries of the space, and with agents belonging to other classes. The whole system is then simulated on a fast computer. One often finds that the global behaviour of the system is critically dependent on the rule-set structures and on the information flow between agents. Thus, for example, in the study of human crowds in confined spaces, the rule-based humans exhibit quite different behaviour from what would be observed if they were modelled by a fluid-flow set of rules appropriate to grains of sand.
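A deliberately minimal sketch of the idea (hypothetical rules of my own devising, not taken from any published crowd model): agents in a one-dimensional corridor all try to move toward an exit, but the rule set forbids stepping into an occupied cell, so jamming emerges near the exit in a way no single agent's rules mention:

```python
# Minimal agent-based sketch (illustrative rules): agents in a 1-D corridor
# move toward an exit at position 0, but may not enter an occupied cell.

def step(positions):
    """One update: each agent tries to move one cell toward the exit."""
    occupied = set(positions)
    new = []
    for p in sorted(positions):              # agents nearest the exit move first
        target = p - 1
        if target >= 0 and target not in occupied:
            occupied.discard(p)
            occupied.add(target)
            new.append(target)
        else:
            new.append(p)                    # blocked: stay put
    return new

positions = [2, 3, 4, 10, 11, 12]            # two clusters of agents
for t in range(6):
    positions = step(positions)
print("positions after 6 steps:", sorted(positions))  # -> [0, 1, 2, 4, 5, 6]
```

The exclusion rule alone produces a jam at the exit: the leading cluster piles up at positions 0, 1, 2 and the trailing cluster closes in behind it, a qualitative behaviour absent from any one agent's rule set.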

4.1 The differences between the complex system and the model.

4.1.1 Analogue modelling.

Electronic hardware provides a test bed of intermediate complexity, and of sufficient designable structure, that it may be used to mimic some of the less tractable systems found in nature. It is also a complex system in its own right, and one would expect that any universal behaviour of complex systems in general will be readily apparent in analogue electronic models.

Analogue circuits have been used to model the Lorenz equations, and non-linear differential equations such as Duffing's equation, Van der Pol's equation, and impacting systems. Such systems have chaotic solutions, but unlike many more interesting cases, their dimensionality is low. The reductionist viewpoint has made significant progress in considering low-dimensional-system chaotic motion.
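For reference, the Lorenz equations mentioned above are easily integrated digitally as well; the sketch below uses a plain fourth-order Runge-Kutta step and the standard parameter values sigma = 10, rho = 28, beta = 8/3:

```python
# Sketch: numerical integration of the Lorenz equations with a simple
# fourth-order Runge-Kutta step (standard parameters sigma=10, rho=28, beta=8/3).

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """One fourth-order Runge-Kutta step of size dt."""
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(add(state, k1, dt / 2))
    k3 = lorenz(add(state, k2, dt / 2))
    k4 = lorenz(add(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
for _ in range(5000):                 # 50 time units at dt = 0.01
    state = rk4_step(state, 0.01)
print("state after 50 time units:", state)
```

This is exactly the kind of low-dimensional chaotic system for which both the analogue circuit and the digital simulation are tractable, making it a convenient point of comparison between the two.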

However, in higher dimensional systems understanding is much less well-developed.

In systems with inbuilt thresholds and medium dimensionality, it is possible to see that the chaotic motion usually observed consists of an endless succession of chaotic transients, the current one giving rise to the next when the relevant threshold is crossed. Thus, knowing the structure of the strange attractors in such a system may not yield even the qualitative behaviour prediction desired, let alone quantitative predictions of the evolution.

Complexity is built in to many electronic systems. Analogue electronic systems provide an excellent method of testing out complex-system evolution in small-scale and medium-scale scenarios. Digital electronic systems are often used to model their analogue counterparts, which they do with greater or lesser success depending on the details of the system.

4.1.4 Digital modelling.

Digital systems form a class of discrete complex system; on them we can run evolutionary processes, including genetic algorithms, and then close the loop and construct reconfigurable evolving hardware. They are also particularly well adapted to agent-based methods of simulating and describing certain classes of complex system. This method has proven successful and useful in a wide range of applications, and is currently (2002) very fashionable.
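As a concrete illustration of a genetic algorithm running on a digital system, the sketch below evolves bit strings toward all ones (the standard "OneMax" toy problem); the population size, mutation rate, and generation count are arbitrary illustrative choices:

```python
# Sketch of a genetic algorithm: evolve bit strings toward all-ones
# ("OneMax").  Population size, rates, and generations are illustrative.

import random

def evolve(bits=20, pop_size=30, generations=60, rng=None):
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    fitness = sum                        # fitness = number of 1 bits
    for _ in range(generations):
        def pick():                      # tournament selection of size 2
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, bits)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < 0.01) for b in child]  # point mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print("best individual:", "".join(map(str, best)), "fitness", sum(best))
```

The closed loop mentioned in the text corresponds to replacing the fitness function here with measurements taken from reconfigurable hardware, so that the evolved individuals are evaluated in silicon rather than in software.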

4.1.5 Computer simulation

Simulation on large-scale digital computers is frequently used as the best available method of modelling large physical complex systems, in order to arrive at quantitative predictions of their likely future behaviour. There is currently much discussion as to the validity of this procedure. One can criticise the digital computation on the grounds that it is a finite and discrete approximation to a system which may have continuous properties that are fractal on all scales. On the other hand, some proponents of agent-based methods see them as a universal computational tool.

4.2 Noise and the limitations imposed by resolution

However, in the physical world, processes are limited by noise and by thermal fluctuations, and some argue that if the discretisation imposed by the digital computational model is finer than the granularity of such fluctuations, then the digital model is as accurate as is needed and the granularity may safely be ignored.

4.3 Reliability of simulation results

Electronic analogue models may be used to cast some light on this conundrum; under certain conditions they may be observed to behave qualitatively and quantitatively differently from the digital computation which simulates them. For the modeller, the pressing question is when the digital simulation may be trusted. This seems to require a detailed case-by-case evaluation, and perhaps some kind of repeated Monte Carlo experimental investigation which can give a probabilistic indication of the soundness of the simulation.

An example of this kind of process being used to good effect is the validation of long-range weather forecasts. If a collection of predictions of future weather patterns, starting from an ensemble of slightly-varied initial conditions, all end up with the same gross features, then the forecast may be assumed to be insensitive to minor fluctuations in the assumptions, and therefore to be of greater soundness. Quantifying this reliability measure is very difficult.
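The ensemble idea can be illustrated with any chaotic model; in the sketch below a logistic map stands in, very crudely, for the weather model, and the spread of an ensemble of slightly-perturbed runs is measured at several forecast horizons:

```python
# Sketch of the ensemble idea: run the same chaotic model (a logistic map
# standing in for a weather model) from slightly perturbed initial
# conditions and measure how much the members disagree at each horizon.

def run(x0, steps, r=3.9):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

def ensemble_spread(x0, steps, members=20, eps=1e-6):
    """Max disagreement among members started within eps*members of x0."""
    finals = [run(x0 + eps * i, steps) for i in range(members)]
    return max(finals) - min(finals)

for steps in (5, 20, 50):
    print(f"horizon {steps:2d} steps: ensemble spread = "
          f"{ensemble_spread(0.4, steps):.6f}")
```

A small spread at a given horizon suggests the forecast at that horizon is insensitive to the initial-condition uncertainty; once the spread has grown to the size of the attractor itself, the forecast carries little information.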


copyright © David Jefferies 2001, 2002


