The scientific study of self-organizing systems is relatively new, although questions about how organization arises have of course been raised since ancient times. The forms we identify around us are only a small sub-set of those theoretically possible. So why don't we see more variety? Answering such questions is the reason why we study self-organization.
Many natural systems show organization (e.g. galaxies, planets, chemical compounds, cells, organisms and societies). Traditional scientific fields attempt to explain these features by referencing the micro properties or laws applicable to their component parts, for example gravitation or chemical bonds. Yet we can also approach the subject in a very different way, looking instead for system properties applicable to all such collections of parts, regardless of size or nature. It is here that modern computers prove essential, allowing us to investigate the dynamic changes that occur over vast numbers of time steps and with a large number of initial options.
Studying nature requires timescales appropriate for the natural system, and this restricts our studies to identifiable qualities that are easily reproduced, precluding investigations involving the full range of possibilities that may be encountered. However, mathematics deals easily with generalised and abstract systems and produces theorems applicable to all possible members of a class of systems. By creating mathematical models, and running computer simulations, we are able to quickly explore large numbers of possible starting positions and to analyse the common features that result. Even small systems have an almost unlimited number of initial options, so even with the fastest computers currently available we can usually only sample the possibility space. Yet this is often enough for us to discover interesting properties that can then be tested against real systems, thus generating new theories applicable to complex systems and their spontaneous organization.
The essence of self-organization is that system structure often appears without explicit pressure or involvement from outside the system. In other words, the constraints on form (i.e. organization) of interest to us are internal to the system, resulting from the interactions among the components and usually independent of the physical nature of those components. The organization can evolve in either time or space, maintain a stable form or show transient phenomena. General resource flows within self-organized systems are expected (dissipation), although not critical to the concept itself.
The field of self-organization seeks general rules about the growth and evolution of systemic structure, the forms it might take, and finally methods that predict the future organization that will result from changes made to the underlying components. The results are expected to be applicable to all other systems exhibiting similar network characteristics.
A system is a group of interacting parts functioning as a whole and distinguishable from its surroundings by recognizable boundaries. There are many varieties of systems: at one extreme the interactions between the parts may be fixed (e.g. an engine), at the other extreme the interactions may be unconstrained (e.g. a gas). The systems of most interest in our context are those in the middle, with a combination both of changing interactions and of fixed ones (e.g. a cell). The system function depends upon the nature and arrangement of the parts and usually changes if parts are added, removed or rearranged. A system has emergent properties if they are not intrinsically found within any of the parts and exist only at a higher level of description.
When a set of parts is connected into various configurations, the resultant system no longer solely exhibits the collective properties of the parts themselves. Instead, any additional behaviour attributed to the system is an example of an emergent system property. A configuration can be physical, logical or statistical; all can show unexpected features that cannot be reduced to an additive property of the individual parts.
The appearance of a property or feature not previously observed as a functional characteristic of the system. Generally, higher-level properties are regarded as emergent. An automobile is an emergent property of its interconnected parts, a property that disappears if the parts are disassembled and just placed in a heap.
The arrangement of selected parts so as to promote a specific function. This restricts the behaviour of the system in such a way as to confine it to a smaller volume of its state space. The recognition of self-organizing systems can be problematical. New approaches are often necessary to find order in what was previously thought to be noise, e.g. in the recognition that a part of a system looks like the whole (self-similarity) or in the use of phase space diagrams.
This is the total number of behavioural combinations available to the system. When tossing a single coin, this would be just two states (either heads or tails). The number of possible states grows rapidly with complexity. If we take 100 coins, then the combinations can be arranged in over 1,000,000,000,000,000,000,000,000,000,000 different ways. We would view each coin as a separate parameter or dimension of the system, so one arrangement would be equivalent to specifying 100 binary digits (each one indicating a 1 for heads or 0 for tails for a specific coin). Generalizing, any system has one dimension of state space for each variable that can change. Mutation will change one or more variables and move the system a small distance in state space. State space is frequently called phase space; the two terms are interchangeable.
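To make the counting concrete, here is a minimal Python sketch of the coin example above (the names and sizes are purely illustrative):

    import random

    n_coins = 100
    n_states = 2 ** n_coins      # one binary dimension per coin
    print(n_states)              # 1267650600228229401496703205376, over 10**30

    # One arrangement: 100 binary digits, 1 for heads and 0 for tails.
    state = tuple(random.randint(0, 1) for _ in range(n_coins))

    # A mutation changes one variable, moving the system a small
    # distance in state space.
    i = random.randrange(n_coins)
    mutant = state[:i] + (1 - state[i],) + state[i + 1:]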
a) The evolution of a system into an organized form in the absence of external constraints.
b) A move from a large region of state space to a persistent smaller one, under the control of the system itself. This smaller region of state space is called an attractor.
c) The introduction of correlations (pattern) over time or space for previously independent variables operating under local rules.
Yes, any system that takes a form that is not imposed from outside (by walls, machines or forces) can be said to self-organize. The term is usually employed however in a more restricted sense by excluding physical laws (reductionist explanations), and suggesting that the properties that emerge are not explicable from a purely reductionist viewpoint.
A preferred position for the system, such that if the system is started from another state it will evolve until it arrives at the attractor, and will then stay there in the absence of other factors. An attractor can be a point (e.g. the centre of a bowl containing a ball), a regular path (e.g. a planetary orbit), a complex series of states (e.g. the metabolism of a cell) or an infinite sequence (called a strange attractor). All specify a restricted volume of state space (a compression). The larger area of state space that leads to an attractor is called its basin of attraction and comprises all the pre-images of the attractor state. The ratio of the volume of the basin to the volume of the attractor can be used as a measure of the degree of self-organisation present. This Self-Organization Factor (SOF) will vary from the total size of state space (for totally ordered systems - maximum compression) to 1 (for ergodic systems - zero compression).
If a system is iterated and moves from state x to state y, then state x is a pre-image of state y. In other words it is on the trajectory that leads into state y. A pre-image that itself has no pre-image is called a Garden of Eden state, and is the starting point for a trajectory. It is usual to exclude states on the attractor itself from the pre-image list, to avoid circularity, since these are all pre-images of each other.
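These definitions can be explored directly on a small random system. The following Python sketch (a toy 16-state map; every detail is an arbitrary choice) finds the attractors, their basins, the Garden of Eden states and the SOF described above:

    import random

    N = 16                                             # a tiny finite state space
    step = {s: random.randrange(N) for s in range(N)}  # a random update rule

    def attractor_of(s):
        # Iterate from s until a state repeats, then return the cycle
        # reached, rotated to a canonical form so equal cycles compare equal.
        seen = []
        while s not in seen:
            seen.append(s)
            s = step[s]
        cyc = seen[seen.index(s):]
        j = cyc.index(min(cyc))
        return tuple(cyc[j:] + cyc[:j])

    basins = {}                   # attractor -> set of states draining into it
    for s in range(N):
        basins.setdefault(attractor_of(s), set()).add(s)

    # Garden of Eden states have no pre-image: nothing maps onto them.
    garden_of_eden = sorted(set(range(N)) - set(step.values()))

    for att, basin in basins.items():
        sof = len(basin) / len(att)    # basin volume / attractor volume
        print("attractor", att, "basin size", len(basin), "SOF", sof)
    print("Garden of Eden states:", garden_of_eden)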
Any system that moves to a fixed structure can be said to be drawn to an attractor. A complex system can have many attractors and these can alter with changes to the system interconnections (mutations) or parameters. Studying self-organization is equivalent to investigating the attractors of the system, their form and dynamics.
A point at which system properties change suddenly, e.g. where a matrix goes from non-percolating (disconnected) to percolating (connected) or vice versa. This is often regarded as a phase change.
The ability of a system to evolve in such a way as to approach a critical point and then maintain itself at that point. If we assume that a system can mutate, then that mutation may take it either towards a more static configuration or towards a more changeable one (a smaller or larger volume of state space, a new attractor). If a particular dynamic structure is optimum for the system, and the current configuration is too static, then the more changeable configuration will be more successful. If the system is currently too changeable then the more static mutation will be selected. Thus the system can adapt in both directions to converge on the optimum dynamic characteristics.
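The standard toy model of self-organized criticality is the Bak-Tang-Wiesenfeld sandpile. A minimal Python sketch (grid size and step count are arbitrary) shows the mechanism: slow driving plus local toppling keeps the pile near its critical point, where avalanche sizes follow a power law:

    import random

    SIZE, STEPS = 20, 10000
    grid = [[0] * SIZE for _ in range(SIZE)]

    def relax():
        # Topple every over-full site until the grid is stable again;
        # return the avalanche size (number of topplings).
        size = 0
        unstable = [(x, y) for y in range(SIZE) for x in range(SIZE)
                    if grid[y][x] >= 4]
        while unstable:
            x, y = unstable.pop()
            if grid[y][x] < 4:
                continue
            grid[y][x] -= 4
            size += 1
            for nx, ny in ((x-1, y), (x+1, y), (x, y-1), (x, y+1)):
                if 0 <= nx < SIZE and 0 <= ny < SIZE:  # edge grains fall off
                    grid[ny][nx] += 1
                    if grid[ny][nx] >= 4:
                        unstable.append((nx, ny))
        return size

    sizes = []
    for _ in range(STEPS):
        x, y = random.randrange(SIZE), random.randrange(SIZE)
        grid[y][x] += 1                    # drive: drop one grain at random
        s = relax()
        if s:
            sizes.append(s)                # mostly small, occasionally huge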
This is the name given to the critical point of the system, where a small change can either push the system into chaotic behaviour or lock the system into a fixed behaviour. It is regarded as a phase change. It is at this point that all the really interesting behaviour occurs in a 'complex' system, and it is where systems tend to gravitate given the chance to do so. Hence most ALife systems are assumed to operate within this regime.
At this boundary a system has a correlation length (connection between distant parts) that just spans the entire system, with a power law distribution of shorter lengths. Transient perturbations (disturbances) can last for very long times (infinity in the limit) and/or cover the entire system, yet more frequently effects will be local or short lived - the system is dynamically unstable to some perturbations, yet stable to others.
A point at which the appearance of the system changes suddenly. In physical systems the change from solid to liquid is a good example. Non-physical systems can also exhibit phase changes, although this use of the term is more controversial. Generally we regard our system as existing in one of three phases. If the system exhibits a fixed behaviour then we regard it as being in the solid realm, if the behaviour is chaotic then we assign it to the gas realm. For systems on the 'Edge of Chaos' the properties match those seen in liquid systems, a potential for either solid or gaseous behaviour, or both.
Percolation is an arrangement of parts (usually visualised as a matrix) such that a property can arise that connects the opposite sides of the structure. This can be regarded as making a path in a disconnected matrix or making an obstruction in a fully connected one. The boundary at which the system goes from disconnected to connected is a sudden one, a step or phase change in the properties of the system. This is the same boundary that we arrive at in SOC.
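The suddenness of that boundary is easy to demonstrate. This Python sketch (site percolation on an arbitrary 40 x 40 lattice) estimates how often a random matrix connects top to bottom as the occupation probability rises:

    import random

    def spans(n, p):
        # Occupy each site with probability p, then search (depth-first)
        # for a path of occupied neighbours from the top row to the bottom.
        occ = [[random.random() < p for _ in range(n)] for _ in range(n)]
        stack = [(0, x) for x in range(n) if occ[0][x]]
        seen = set(stack)
        while stack:
            y, x = stack.pop()
            if y == n - 1:
                return True
            for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                if (0 <= ny < n and 0 <= nx < n and occ[ny][nx]
                        and (ny, nx) not in seen):
                    seen.add((ny, nx))
                    stack.append((ny, nx))
        return False

    for p in (0.4, 0.5, 0.55, 0.6, 0.65, 0.7):
        hits = sum(spans(40, p) for _ in range(100))
        print(p, hits / 100)  # jumps sharply near the critical value (~0.59 here)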
If we plot the logarithm of the number of times a certain property value is found against the logarithm of the value itself, and the result is a straight line, then we have a power law. Essentially what we are saying is that there is a distribution of results such that the larger the effect, the less frequently it is seen. A good example is earthquake activity, where many small quakes are seen but few large ones; the Richter scale is based upon such a law. A system subject to power law dynamics exhibits the same structure over all scales. This self-similarity or scale independent (fractal) behaviour is typical of self-organizing systems.
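A sketch of the test in Python, using synthetic data drawn from a known power law (the exponent -2 is an arbitrary choice):

    import math, random

    # Draw event sizes with P(s) ~ s**-2 by inverse-transform sampling.
    events = [int(1 / (1 - random.random())) for _ in range(100000)]

    counts = {}
    for s in events:
        counts[s] = counts.get(s, 0) + 1

    # Fit log(count) against log(size); a straight line means a power law.
    pts = [(math.log(s), math.log(c)) for s, c in counts.items() if c > 5]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    print("slope =", slope)   # near -2: the larger the event, the rarer it is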
No, selection is a choice between competing options such that one arrangement is preferred over another with reference to some external criteria - this represents a choice between two stable systems in state space. In self-organization there is only one system which internally restricts the area of state space it occupies. In essence the system moves to an attractor that covers only a small area of state space, a dynamic pattern of expression that can persist even in the face of mutation and opposing selective forces. Alternative stable options are each self-organized attractors and selection may then choose between them based upon their emergent phenotypic properties.
Selection is a bias to move through state space in a particular direction, maximising some external fitness function - choosing between mutant neighbours. Self-organization drives the system to an internal attractor; we can call this an internal fitness function. The two concepts are complementary and can either mutually assist or oppose. In the context of self-organizing systems, the attractors are the only stable states the system has, and selection pressure is a force on the system attempting to perturb it to a different attractor. It may take many mutations to cause a system to switch to a new attractor, since each simply moves the starting position across the basin of attraction. Only when a boundary between two basins is crossed will an attractor change occur, yet this shift could be highly significant, a metamorphosis in system properties.
In the world of possible systems (the state space for the system), two possibilities are neighbours if a change or mutation to one parameter can change the first system into the second or vice versa. Any two options can then be connected by a chain of possible mutations converting between them (via intermediate states). Note that there can be many ways of doing this, depending on the order in which the mutations take place. The process of moving from one possibility to another is called an adaptive walk.
A process by which a system changes from one state to another by gradual steps. The system 'walks' across the fitness landscape; each step is assumed to lead to an improvement in the performance of the system against some criteria (adaptation).
If we rate every option in state space by its achievement against some criteria then we can plot that rating as a fitness value on another dimension, a height that gives the appearance of a landscape. The result may be a single smooth hill (a correlated landscape), many smaller peaks (a rugged landscape) or something in between.
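An adaptive walk on a rugged landscape is simple to sketch in Python. Here the landscape is fully random (the maximally rugged case, corresponding to K = N - 1 in the NK terms below), so walks soon become stuck at local optima:

    import random

    N = 12   # a binary genome of N sites

    def fitness(g):
        # A rugged toy landscape: a fixed pseudo-random rating per genotype.
        key = int("".join(map(str, g)), 2)
        return random.Random(key).random()

    def neighbours(g):
        # All one-mutation neighbours: flip a single site.
        return [g[:i] + (1 - g[i],) + g[i+1:] for i in range(len(g))]

    g = tuple(random.randint(0, 1) for _ in range(N))
    while True:
        best = max(neighbours(g), key=fitness)
        if fitness(best) <= fitness(g):
            break                          # a local peak: no uphill step left
        g = best
    print("local peak", "".join(map(str, g)), "fitness", fitness(g))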
As few as two (in magnetic or gravitational attraction) can suffice, but generally we use the term to classify more complex phenomena than point attractors. The richness of possible behaviour increases rapidly with the number of interconnections and the level of feedback. For small systems we are able to analyse the state possibilities and discover the attractor structure. Larger systems however require a more statistical approach where we sample the system by simulation to discover the emergent properties.
A connection between the output of a system and its input, in other words a causality loop - effect is fed back to cause. This feedback can be negative (tending to stabilise the system - order) or positive (leading to instability - chaos). Feedback results in nonlinearities, constraints on the system behaviour that lead to unpredictability.
In general terms, for self-organization to occur, the system must be neither too sparsely connected (so most units are independent) nor too richly connected (so that every unit affects every other). Most studies of Boolean Networks suggest that having about two connections for each unit leads to optimum organisational and adaptive properties. If more connections exist then the same effect can be obtained by using canalysing functions or other constraints on the interaction dynamics.
Taking a collection (N) of logic gates (AND, OR, NOT etc.) each with K inputs and interconnecting them gives us a Boolean Network. Depending upon the number of inputs (K) to each gate we can generate a collection of possible logic functions that could be used. By allocating these to the nodes (N) at random we have a Random Boolean Network and this can be used to investigate whether organization appears for different sets of parameters. Some possible logic functions are canalysing and it seems that this type of function is the most likely to generate self-organization. This arrangement is also referred to biologically as a NK model where N is seen as the number of genes (with 2 alleles each - the output states) and K denotes their inter-dependencies.
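A minimal Random Boolean Network in Python (N = 16, K = 2, everything randomised; the sizes are arbitrary), run from a random state until a state repeats, giving the transient and attractor cycle lengths:

    import random

    N, K = 16, 2
    inputs = [random.sample(range(N), K) for _ in range(N)]  # K wires per node
    tables = [[random.randint(0, 1) for _ in range(2 ** K)]  # a random Boolean
              for _ in range(N)]                             # function per node

    def step(state):
        # Synchronous update: each node applies its truth table
        # to the current values on its K input wires.
        return tuple(
            tables[n][sum(state[w] << i for i, w in enumerate(inputs[n]))]
            for n in range(N))

    state = tuple(random.randint(0, 1) for _ in range(N))
    seen, t = {}, 0
    while state not in seen:
        seen[state] = t
        state = step(state)
        t += 1
    print("transient", seen[state], "cycle length", t - seen[state])

With K = 2 the cycle found is typically tiny compared with the 2**N possible states - the compression into an attractor discussed above.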
A function is canalysing if a single input being in a fixed state is sufficient to force the output to a fixed state, regardless of the state of any other input. For example, for an AND gate, if one input is held low then the output is forced low, so this function is canalysing. An XOR gate, in contrast, is not, since the output can always be changed by varying another input. The result of connecting a series of canalysing functions can be to force chunks of the network to a fixed state (an initial fixed input can ripple through and lock up part of the network - a forcing structure). Such fixed divisions (barriers to change) can break up the network into active and passive structures, and this can allow complex modular behaviours to develop. Because the structure is canalysing, a single change can switch the structure from passive to active or back again; this allows the network to perform a series of regulatory functions.
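Whether a given Boolean function is canalysing can be checked mechanically, as in this short Python sketch:

    from itertools import product

    def is_canalysing(f, k):
        # f is canalysing if holding some one input at some fixed value
        # forces the output, whatever the other k - 1 inputs do.
        for i in range(k):
            for v in (0, 1):
                outs = {f(*bits) for bits in product((0, 1), repeat=k)
                        if bits[i] == v}
                if len(outs) == 1:
                    return True
        return False

    print(is_canalysing(lambda a, b: a & b, 2))  # AND: True (a=0 forces 0)
    print(is_canalysing(lambda a, b: a ^ b, 2))  # XOR: False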
In general, the higher the connectivity the more rugged the landscape becomes. Simply connected landscapes have a single peak; a change to one parameter has little effect on the others, so a smooth change in fitness is found during adaptive walks. High connectivity means that variables interact and we have to settle for compromise fitnesses; many lower peaks are found and the system can become stuck at local optima or attractors, rather than being able to reach the global optimum.
If we allow each node (N) to be itself a complex arrangement of interlinked parts (K) then we can regard the connections between nodes (C) as a further layer of control. This relates biologically to a genome interacting with other genomes. K is the gene interactions within the organism, C the genes outside the organism that affect it. The overall fitness is derived from the combinations of the interacting gene fitnesses.
An extension of the NKC model to add multiple species. Each species is linked to S other species. This can best be seen by visualising an ecosystem, where the nodes are species (assumed genetically identical) each consisting of a collection of genes, and the interactions between the species form the ecosystem. Thus the local connection K specifies how the genes of one species interact with themselves and the distant connections (C x S) how the genes interact with each of the other species. This model then allows co-evolutionary development and organization to be studied.
A collection of interacting entities often react in certain ways only, e.g. entity A may be able to affect B but not C. D may only affect E. For a sufficiently large collection of different entities a situation may arise where a complete network of interconnections can be established - the entities become part of one coupled system. This is called an autocatalytic set, after the ability of molecules to catalyse each other's formation in the chemical equivalent of this arrangement.
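One toy formalisation in Python (a random catalysis network; the pruning rule is an illustrative simplification, not a standard algorithm): repeatedly discard any entity not catalysed by another member, and whatever survives is a mutually coupled, autocatalytic core:

    import random

    M, LINKS = 12, 20
    catalyses = {(random.randrange(M), random.randrange(M))
                 for _ in range(LINKS)}       # (a, b): entity a catalyses b

    group = set(range(M))
    changed = True
    while changed:
        changed = False
        for e in list(group):
            # Keep e only if some *other* member of the group catalyses it.
            if not any((c, e) in catalyses and c != e for c in group):
                group.discard(e)
                changed = True
    print("autocatalytic core:", sorted(group))   # may be empty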
The smallest parts of a system produce their own emergent properties; these are the lowest 'system' features and form the next level of structure in the system. Those system components then in turn form the building blocks for the next higher level of organization, with different emergent properties, and this process can proceed to higher levels in turn. The various levels can all exhibit their own self-organization (e.g. cell chemistry, organs, societies) or may be manufactured (e.g. piston, engine, car). One measure of complexity is that a complex system comprises multiple levels of description: the more ways of looking at a system, the more complex it is, and the more extensive the description needed to specify it (algorithmic complexity).
Energy considerations are often regarded as an explanation for organization; it is said that minimising energy causes the organization. Yet there are often alternative arrangements that require the same energy, and accounting for the choice between these requires other factors. Organization still appears in computer simulations that do not use the concept of energy, although other criteria may exist. This suggests that we still have much to learn in this area, as to the effect of resource flows of various types on organizational behaviour.
In nonlinear studies we find much structure for very simple systems, as seen in the self-similar structure of fractals and the bifurcation structure seen in the logistic map. This form of system exhibits complex behaviour from simple rules. In contrast, for self-organizing systems we have complex assemblies generating simple emergent behaviour, so in essence the two concepts are complementary. For our collective systems, we can regard the solid state as equivalent to the predictable behaviour of a formula, the gaseous state as corresponding to the statistical or chaotic realm and the liquid state as being the bifurcation or fractal realm.
Systems that use energy flow to maintain their form are said to be dissipative systems; these include atmospheric vortices, living systems and the like. The term can also be used more generally for systems that consume energy to keep going, e.g. engines or stars. Such systems are generally open to their environment.
A phenomenon that results in a system splitting into two possible behaviours (with a small change in one parameter), further changes then cause further splits at regular intervals until finally the system enters a chaotic phase. This sequence from stability, through increasing complexity, to chaos has much in common with the observed behaviour of complex systems, reflecting changes in attractor structure with variations to parameters.
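The classic example is the logistic map x -> r x (1 - x). A few lines of Python show the splitting sequence by counting the distinct long-run values as r increases (the r values chosen are illustrative):

    # Period doubling in the logistic map: 1 value, then 2, then 4,
    # then (past the accumulation point) effectively infinitely many.
    for r in (2.8, 3.2, 3.5, 3.9):
        x = 0.5
        for _ in range(1000):          # discard the transient
            x = r * x * (1 - x)
        orbit = set()
        for _ in range(200):
            x = r * x * (1 - x)
            orbit.add(round(x, 6))
        print("r =", r, "distinct long-run values:", len(orbit))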
Several other terms are loosely used with regard to self-organizing systems, many in terms of human behaviour. Autopoiesis is self-production, the maintenance of form through time and material flows; Extropy is growing organizational complexity. Homeostasis, Homeokinetics, Synergetics and Cybernetics (integrated control/feedback concepts) are other terms sometimes connected with SOS.
Since we are seeking general properties that apply to topologically equivalent systems, any physical system or model that provides those connections can be used. Much work has been done using Cellular Automata and Boolean Networks, with Alife, Genetic Algorithms, Neural Networks and similar techniques also widely used. In general we start with a set of rules specifying how the interconnections behave; the network is then randomly initialised and iterated (stepped) repeatedly, following the ruleset. The stable patterns obtained (if any) are noted and the sequence repeated. After many trials, generalisations from the results can be attempted, with some statistical probability.
Some of these results are tentative, and subject to change as more research is undertaken and these systems become better understood. Many of these results are expanded and justified by Stuart Kauffman in his recent lecture notes, see:
The above results seem to indicate that such system properties can be ascribed to all manner of natural systems, from the physical, chemical and biological to the psychological and cultural. Much work is yet needed to determine to what extent these system properties relate to the actual features of real systems and how they vary with changes to the constraints. Power laws are common in natural systems and an underlying SOC cannot be ruled out as a possible cause of this situation.
Few software packages relate to self-organization as such, but many do show self-organized behaviour in the context of more specialised topics. These include cellular automata (Game of Life), neural networks (artificial learning in self-organizing maps), genetic algorithms (evolution), artificial life (agent behaviour), fractals (mathematical art) and physics (spin glasses). These can be found via the relevant newsgroup FAQs.
Some self-organization programs are available from these sites:
Santa Fe -
Jurgen Schmitz - ftp://ftp.Germany.EU.net/pub/research/ci/Alife/packages/boids/ - Boids for Windows, self-organising birds (Windows).
Rudy Rucker -
Many studies of complex systems assume that the systems self-organize into emergent states which are not predictable from the parts. Artificial Life, Evolutionary Computation (incl Genetic Algorithms), Cellular Automata and Neural Networks are the main fields directly associated with this idea, all of which fall under the general auspices of Complex Systems or Complexity Theory.
This FAQ has been compiled and is maintained by Chris Lucas of the CALResCo Group. Comments, suggestions, requests for additions and particularly criticisms and corrections are warmly welcomed. Please feel free to EMail me anytime at firstname.lastname@example.org or post relevant messages to the Usenet newsgroup comp.theory.self-org-sys for discussion.
Thanks are due to many people who have contributed to this FAQ either directly, by discussion and questions, or by influential publications. Especially (in alphabetical order): Per Bak, Jack Cohen, Kelle Cruz, Erik Francis, Tim Haug, Francis Heylighen, Josh Howlett, Stuart Kauffman, David Kirshbaum, Chris Langton, William Latham, Graeme McCaffery, Yuri Milov, Mike Monkowski, Gary Nelson, Joseph O'Connor, David O'Neal, Craig Reynolds, Zed Shaw, Clint Sprott, Ian Stewart, Stephen Wolfram, Andy Wuensche, Qi Zeng.
Particular thanks are due to
The usual get-out clauses: I take no responsibility for any errors contained in the information presented here or any damages resulting from its use. The information is, however, accurate as far as I can tell.
This FAQ may be posted in any newsgroup, mail list or BBS as long as it remains intact and contains the following copyright notice. This document may not be used for financial gain or included in commercial products without the express permission of the author.