**Introduction**

**Systems**

- 2.1 What is a system ?

- 2.2 What is a system property ?

- 2.3 What is emergence ?

- 2.4 What is organization ?

- 2.5 What is state or phase space ?

- 2.6 What is self-organization ?

- 2.7 Can things self-organize ?

- 2.8 What is an attractor ?

- 2.9 What is a pre-image ?

- 2.10 How do attractors and self-organization relate ?

**Edge of Chaos**

**Selection**

**Interconnections**

- 5.1 How many parts are necessary for self-organization ?

- 5.2 What is feedback ?

- 5.3 What interconnections are necessary ?

- 5.4 What is a Boolean Network or NK model ?

- 5.5 What are canalysing functions and forcing structures ?

- 5.6 How does connectivity affect landscape shape ?

- 5.7 What is an NKC Network ?

- 5.8 What is an NKCS Network ?

- 5.9 What is an autocatalytic set ?

**Structure**

**Research**

**Resources**

**Miscellaneous**

The scientific study of self-organizing systems is relatively new, although questions about how organization arises have of course been raised since ancient times. The forms we identify around us are only a small sub-set of those theoretically possible. So why don't we see more variety ? Answering such questions is the reason why we study self-organization.

Many natural systems show organization (e.g. galaxies, planets, chemical compounds, cells, organisms and societies). Traditional scientific fields attempt to explain these features by referencing the micro properties or laws applicable to their component parts, for example gravitation or chemical bonds. Yet we can also approach the subject in a very different way, looking instead for system properties applicable to all such collections of parts, regardless of size or nature. It is here that modern computers prove essential, allowing us to investigate the dynamic changes that occur over vast numbers of time steps and with large numbers of initial options.

Studying nature requires timescales appropriate for the natural system, and this restricts our studies to identifiable qualities that are easily reproduced, precluding investigations involving the full range of possibilities that may be encountered. However, mathematics deals easily with generalised and abstract systems and produces theorems applicable to all possible members of a class of systems. By creating mathematical models, and running computer simulations, we are able to quickly explore large numbers of possible starting positions and to analyse the common features that result. Even small systems have almost infinite initial options, so even with the fastest computer currently available, we usually can only sample the possibility space. Yet this is often enough for us to discover interesting properties that can then be tested against real systems, thus generating new theories applicable to complex systems and their spontaneous organization.

The essence of self-organization is that system structure often appears without explicit pressure or involvement from outside the system. In other words, the constraints on form (i.e. organization) of interest to us are internal to the system, resulting from the interactions among the components and usually independent of the physical nature of those components. The organization can evolve in either time or space, maintain a stable form or show transient phenomena. General resource flows within self-organized systems are expected (dissipation), although not critical to the concept itself.

The field of self-organization seeks general rules about the growth and evolution of systemic structure, the forms it might take, and finally methods that predict the future organization that will result from changes made to the underlying components. The results are expected to be applicable to all other systems exhibiting similar network characteristics.

A system is a group of interacting parts functioning as a whole and distinguishable from its surroundings by recognizable boundaries. There are many varieties of systems, on the one hand the interactions between the parts may be fixed (e.g. an engine), at the other extreme the interactions may be unconstrained (e.g. a gas). The systems of most interest in our context are those in the middle, with a combination both of changing interactions and of fixed ones (e.g. a cell). The system function depends upon the nature and arrangement of the parts and usually changes if parts are added, removed or rearranged. The system has properties that are emergent, if they are not intrinsically found within any of the parts, and exist only at a higher level of description.

When a series of parts are connected into various configurations, the resultant system no longer solely exhibits the collective properties of the parts themselves. Instead any additional behaviour attributed to the system is an example of an emergent system property. A configuration can be physical, logical or statistical, all can show unexpected features that cannot be reduced to an additive property of the individual parts.

The appearance of a property or feature not previously observed as a functional characteristic of the system. Generally, higher level properties are regarded as emergent. An automobile is an emergent property of its interconnected parts. That property disappears if the parts are disassembled and just placed in a heap.

The arrangement of selected parts so as to promote a specific function. This restricts the behaviour of the system in such a way as to confine it to a smaller volume of its state space. The recognition of self-organizing systems can be problematical. New approaches are often necessary to find order in what was previously thought to be noise, e.g. in the recognition that a part of a system looks like the whole (self-similarity) or in the use of phase space diagrams.

This is the total number of behavioural combinations available to the system. When tossing a single coin, this would be just two states (either heads or tails). The number of possible states grows rapidly with complexity. If we take 100 coins, then the combinations can be arranged in over 1,000,000,000,000,000,000,000,000,000,000 different ways. We would view each coin as a separate parameter or dimension of the system, so one arrangement would be equivalent to specifying 100 binary digits (each one indicating a 1 for heads or 0 for tails for a specific coin). Generalizing, any system has one dimension of state space for each variable that can change. Mutation will change one or more variables and move the system a small distance in state space. State space is frequently called phase space; the two terms are interchangeable.
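
The combinatorial growth described above is easy to verify directly; a minimal sketch, using the 100-coin example from the text:

```python
# Each coin is one binary dimension of state space; the total number
# of states is 2 raised to the number of dimensions.
n_coins = 100
states = 2 ** n_coins
print(states)  # 1267650600228229401496703205376 (over 10**30)
```

Adding one more coin doubles the count, which is why even modest systems can only ever be sampled, never exhaustively searched.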

a) The evolution of a system into an organized form in the absence of external constraints.

b) A move from a large region of state space to a persistent smaller one, under the control of the system itself. This smaller region of state space is called an attractor.

c) The introduction of correlations (pattern) over time or space for previously independent variables operating under local rules.

Yes, any system that takes a form that is not imposed from outside (by walls, machines or forces) can be said to self-organize. The term is usually employed however in a more restricted sense by excluding physical laws (reductionist explanations), and suggesting that the properties that emerge are not explicable from a purely reductionist viewpoint.

A preferred position for the system, such that if the system is started from another state it will evolve until it arrives at the attractor, and will then stay there in the absence of other factors. An attractor can be a point (e.g. the centre of a bowl containing a ball), a regular path (e.g. a planetary orbit), a complex series of states (e.g. the metabolism of a cell) or an infinite sequence (called a strange attractor). All specify a restricted volume of state space (a compression). The larger area of state space that leads to an attractor is called its basin of attraction and comprises all the pre-images of the attractor state. The ratio of the volume of the basin to the volume of the attractor can be used as a measure of the degree of self-organisation present. This Self-Organization Factor (SOF) will vary from the total size of state space (for totally ordered systems - maximum compression) to 1 (for ergodic systems - zero compression).
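
The attractor, basin and SOF ideas can be made concrete for a tiny discrete system. The 8-state transition rule below is invented purely for illustration:

```python
# Sketch: find attractors, basins of attraction and the SOF ratio
# for a small deterministic map. f[x] gives the next state of x.
f = [1, 2, 3, 1, 3, 4, 5, 0]

def attractor(x):
    seen = []
    while x not in seen:
        seen.append(x)
        x = f[x]
    return frozenset(seen[seen.index(x):])  # the repeating cycle reached

basins = {}
for x in range(len(f)):
    basins.setdefault(attractor(x), []).append(x)

for cycle, basin in basins.items():
    sof = len(basin) / len(cycle)  # basin volume / attractor volume
    print(sorted(cycle), "basin size:", len(basin), "SOF:", sof)
```

Every starting state flows into some cycle, so the basin sizes always sum to the full state space; the SOF then measures how much compression each attractor achieves.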

If a system is iterated and moves from state x to state y, then state x is a pre-image of state y. In other words it is on the trajectory that leads into state y. A pre-image that itself has no pre-image is called a Garden of Eden state, and is the starting point for a trajectory. It is usual to exclude states on the attractor itself from the pre-image list, to avoid circularity, since these are all pre-images of each other.
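
Pre-images and Garden of Eden states can be enumerated by brute force for a small system. The transition table below is a hypothetical 8-state rule, made up for illustration (note it does not exclude attractor states from the lists, as the stricter definition above would):

```python
# Hypothetical update rule: f[x] is the successor of state x.
f = {0: 0, 1: 2, 2: 4, 3: 2, 4: 0, 5: 6, 6: 4, 7: 6}

# Pre-images of y: all states x with f(x) == y
preimages = {y: [x for x in f if f[x] == y] for y in f}

# Garden of Eden states: those with no pre-image at all,
# i.e. the starting points of trajectories.
garden_of_eden = [y for y in f if not preimages[y]]
print(garden_of_eden)  # [1, 3, 5, 7]
```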

Any system that moves to a fixed structure can be said to be drawn to an attractor. A complex system can have many attractors and these can alter with changes to the system interconnections (mutations) or parameters. Studying self-organization is equivalent to investigating the attractors of the system, their form and dynamics.

A point at which system properties change suddenly, e.g. where a matrix goes from non-percolating (disconnected) to percolating (connected) or vice versa. This is often regarded as a phase change.

The ability of a system to evolve in such a way as to approach a critical point and then maintain itself at that point. If we assume that a system can mutate, then that mutation may take it either towards a more static configuration or towards a more changeable one (a smaller or larger volume of state space, a new attractor). If a particular dynamic structure is optimum for the system, and the current configuration is too static, then the more changeable configuration will be more successful. If the system is currently too changeable then the more static mutation will be selected. Thus the system can adapt in both directions to converge on the optimum dynamic characteristics.

This is the name given to the critical point of the system, where a small change can either push the system into chaotic behaviour or lock the system into a fixed behaviour. It is regarded as a phase change. It is at this point where all the really interesting behaviour occurs in a 'complex' system, and it is where systems tend to gravitate given the chance to do so. Hence most ALife systems are assumed to operate within this regime.

At this boundary a system has a correlation length (connection between distant parts) that just spans the entire system, with a power law distribution of shorter lengths. Transient perturbations (disturbances) can last for very long times (infinity in the limit) and/or cover the entire system, yet more frequently effects will be local or short lived - the system is dynamically unstable to some perturbations, yet stable to others.

A point at which the appearance of the system changes suddenly. In physical systems the change from solid to liquid is a good example. Non-physical systems can also exhibit phase changes, although this use of the term is more controversial. Generally we regard our system as existing in one of three phases. If the system exhibits a fixed behaviour then we regard it as being in the solid realm, if the behaviour is chaotic then we assign it to the gas realm. For systems on the 'Edge of Chaos' the properties match those seen in liquid systems, a potential for either solid or gaseous behaviour, or both.

Percolation is an arrangement of parts (usually visualised as a matrix) such that a property can arise that connects the opposite sides of the structure. This can be regarded as making a path in a disconnected matrix or making an obstruction in a fully connected one. The boundary at which the system goes from disconnected to connected is a sudden one, a step or phase change in the properties of the system. This is the same boundary that we arrive at in SOC.
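
Percolation is straightforward to test by search. The sketch below checks whether open sites connect the top row of a random grid to the bottom row; the grid size and fill probability are arbitrary illustrative choices:

```python
import random

# Site percolation: does a path of open (True) sites connect
# the top row to the bottom row of the grid?
def percolates(grid):
    n = len(grid)
    frontier = [(0, j) for j in range(n) if grid[0][j]]
    seen = set(frontier)
    while frontier:
        i, j = frontier.pop()
        if i == n - 1:
            return True  # reached the bottom row
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < n and 0 <= nj < n and grid[ni][nj] \
                    and (ni, nj) not in seen:
                seen.add((ni, nj))
                frontier.append((ni, nj))
    return False

random.seed(2)
grid = [[random.random() < 0.7 for _ in range(10)] for _ in range(10)]
print(percolates(grid))
```

Sweeping the fill probability from 0 to 1 shows the sudden step described above: below a threshold almost no grids percolate, above it almost all do.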

If we plot the logarithm of the number of times a certain property value is found against the log of the value itself we get a graph. If the result is a straight line then we have a power law. Essentially what we are saying is that there is a distribution of results such that the larger the effect the less frequently it is seen. A good example is earthquake activity where many small quakes are seen but few large ones, the Richter scale is based upon such a law. A system subject to power law dynamics exhibits the same structure over all scales. This self-similarity or scale independent (fractal) behaviour is typical of self-organizing systems.
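
The log-log test described above can be sketched directly. The data here are synthetic, generated from an assumed exponent of -2:

```python
import math

# A power law shows up as a straight line in log-log space.
# Synthetic frequency data: frequency ~ size ** -2.
sizes = [1, 2, 4, 8, 16]
freqs = [s ** -2 for s in sizes]

logs = [(math.log(s), math.log(fq)) for s, fq in zip(sizes, freqs)]
# Slope between consecutive log-log points; a constant slope
# (here -2, the assumed exponent) indicates a power law.
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(logs, logs[1:])]
print(slopes)  # each value is -2.0 (to floating point precision)
```

Real data (earthquake magnitudes, avalanche sizes) are noisy, so in practice the slope is estimated by a least-squares fit rather than point by point.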

No, selection is a choice between competing options such that one arrangement is preferred over another with reference to some external criteria - this represents a choice between two stable systems in state space. In self-organization there is only one system which internally restricts the area of state space it occupies. In essence the system moves to an attractor that covers only a small area of state space, a dynamic pattern of expression that can persist even in the face of mutation and opposing selective forces. Alternative stable options are each self-organized attractors and selection may then choose between them based upon their emergent phenotypic properties.

Selection is a bias to move through state space in a particular direction, maximising some external fitness function - choosing between mutant neighbours. Self-organization drives the system to an internal attractor, we can call this an internal fitness function. The two concepts are complementary and can either mutually assist or oppose. In the context of self-organizing systems, the attractors are the only stable states the system has, selection pressure is a force on the system attempting to perturb it to a different attractor. It may take many mutations to cause a system to switch to a new attractor, since each simply moves the starting position across the basin of attraction. Only when a boundary between two basins is crossed will an attractor change occur, yet this shift could be highly significant, a metamorphosis in system properties.

In the world of possible systems (the state space for the system) two possibilities are neighbours if a change or mutation to one parameter can change the first system into the second or vice versa. Any two options can then be classified by a chain of possible mutations converting between them (via intermediate states). Note that there can be many ways of doing this, depending on the order the mutations take place. The process of moving from one possibility to another is called an adaptive walk.

A process by which a system changes from one state to another by gradual steps. The system 'walks' across the fitness landscape, each step is assumed to lead to an improvement in the performance of the system against some criteria (adaptation).
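
An adaptive walk is easy to sketch on a bitstring landscape. The fitness function below (the count of 1-bits) is an invented, maximally correlated landscape, so the walk always reaches the single global peak:

```python
import random

# Adaptive walk: flip one bit at a time, keep the change only if
# fitness improves, stop when no single flip helps (a local optimum).
random.seed(0)
N = 10
fitness = sum  # smooth single-peak landscape: count of 1-bits
state = [random.randint(0, 1) for _ in range(N)]

improved = True
while improved:
    improved = False
    for i in range(N):
        neighbour = state.copy()
        neighbour[i] ^= 1  # one-bit mutation = neighbouring state
        if fitness(neighbour) > fitness(state):
            state, improved = neighbour, True
print(state, fitness(state))  # ends at the global peak: all ones
```

On a rugged landscape (e.g. an NK fitness with high K) the same walk would instead stall on one of many local optima.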

If we rate every option in state space by its achievement against some criteria then we can plot that rating as a fitness value on another dimension, a height that gives the appearance of a landscape. The result may be a single smooth hill (a correlated landscape), many smaller peaks (a rugged landscape) or something in between.

As few as two (in magnetic or gravitational attraction) can suffice, but generally we use the term to classify more complex phenomena than point attractors. The richness of possible behaviour increases rapidly with the number of interconnections and the level of feedback. For small systems we are able to analyse the state possibilities and discover the attractor structure. Larger systems however require a more statistical approach where we sample the system by simulation to discover the emergent properties.

A connection between the output of a system and its input, in other words a causality loop - effect is fed back to cause. This feedback can be negative (tending to stabilise the system - order) or positive (leading to instability - chaos). Feedback results in nonlinearities, constraints on the system behaviour leading to unpredictability.
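
The stabilising/destabilising distinction can be shown with a one-variable toy model, where the sign of the feedback gain decides the outcome (the model is illustrative, not from the text):

```python
# Feedback toy model: each step feeds a fraction of the output
# back to the input, x -> x + gain * x.
def iterate(gain, x=1.0, steps=20):
    for _ in range(steps):
        x = x + gain * x
    return x

print(iterate(-0.5))  # negative feedback: decays toward 0 (order)
print(iterate(+0.5))  # positive feedback: grows without bound (instability)
```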

In general terms, for self-organization to occur, the system must be neither too sparsely connected (so most units are independent) nor too richly connected (so that every unit affects every other). Most studies of Boolean Networks suggest that having about two connections for each unit leads to optimum organisational and adaptive properties. If more connections exist then the same effect can be obtained by using canalysing functions or other constraints on the interaction dynamics.

Taking a collection (N) of logic gates (AND, OR, NOT etc.) each with K inputs and interconnecting them gives us a Boolean Network. Depending upon the number of inputs (K) to each gate we can generate a collection of possible logic functions that could be used. By allocating these to the nodes (N) at random we have a Random Boolean Network and this can be used to investigate whether organization appears for different sets of parameters. Some possible logic functions are canalysing and it seems that this type of function is the most likely to generate self-organization. This arrangement is also referred to biologically as a NK model where N is seen as the number of genes (with 2 alleles each - the output states) and K denotes their inter-dependencies.
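
A random Boolean network of this kind takes only a few lines to build and iterate; a sketch, with illustrative parameters (N=8, K=2):

```python
import random

# Random Boolean Network: N nodes, each reading K random inputs
# through a random Boolean function, updated synchronously.
random.seed(1)
N, K = 8, 2
inputs = [random.sample(range(N), K) for _ in range(N)]
# Each node's rule is a lookup table over its 2**K input patterns.
tables = [[random.randint(0, 1) for _ in range(2 ** K)] for _ in range(N)]

def step(state):
    return tuple(
        tables[i][sum(state[j] << b for b, j in enumerate(inputs[i]))]
        for i in range(N)
    )

# Iterate from a random start until a state repeats: the repeating
# segment of the trajectory is an attractor cycle.
state = tuple(random.randint(0, 1) for _ in range(N))
seen, t = {}, 0
while state not in seen:
    seen[state] = t
    state, t = step(state), t + 1
print("attractor length:", t - seen[state])
```

Repeating this over many random networks and starts is exactly the sampling approach described later for exploring attractor statistics at different K.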

A function is canalysing if a single input being in a fixed state is sufficient to force the output to a fixed state, regardless of the state of any other input. For example, for an AND gate if one input is held low then the output is forced low, so this function is canalysing. An XOR gate, in contrast, is not since the state can always change by varying another input. The result of connecting a series of canalysing functions can be to force chunks of the network to a fixed state (an initial fixed input can ripple through and lock up part of the network - a forcing structure). Such fixed divisions (barriers to change) can break up the network into active and passive structures and this can allow complex modular behaviours to develop. Because the structure is canalysing, a single change can switch the structure from passive to active or back again, this allows the network to perform a series of regulatory functions.
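
The canalysing test can be written as a brute-force check over all input combinations; a sketch, using the AND/XOR examples from the text:

```python
from itertools import product

# A function is canalysing if fixing ONE input to one value forces
# the output to a single value regardless of all other inputs.
def is_canalysing(f, k):
    for i in range(k):            # candidate canalysing input
        for v in (0, 1):          # candidate fixed value
            outs = {f(*bits) for bits in product((0, 1), repeat=k)
                    if bits[i] == v}
            if len(outs) == 1:    # output is forced
                return True
    return False

AND = lambda a, b: a & b
XOR = lambda a, b: a ^ b
print(is_canalysing(AND, 2), is_canalysing(XOR, 2))  # True False
```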

In general the higher the connectivity the more rugged the landscape becomes. Simply connected landscapes have a single peak; a change to one parameter has little effect on the others, so a smooth change in fitness is found during adaptive walks. High connectivity means that variables interact and we have to settle for compromise fitnesses; many lower peaks are found and the system can become stuck at local optima or attractors, rather than being able to reach the global optimum.

If we allow each node (N) to be itself a complex arrangement of interlinked parts (K) then we can regard the connections between nodes (C) as a further layer of control. This relates biologically to a genome interacting with other genomes. K is the gene interactions within the organism, C the genes outside the organism that affect it. The overall fitness is derived from the combinations of the interacting gene fitnesses.

An extension of the NKC model to add multiple species. Each species is linked to S other species. This can best be seen by visualising an ecosystem, where the nodes are species (assumed genetically identical) each consisting of a collection of genes, and the interactions between the species form the ecosystem. Thus the local connection K specifies how the genes of one species interact with themselves and the distant connections (C x S ) how the genes interact with each of the other species. This model then allows co-evolutionary development and organization to be studied.

A collection of interacting entities often react in certain ways only, e.g. entity A may be able to affect B but not C. D may only affect E. For a sufficiently large collection of different entities a situation may arise where a complete network of interconnections can be established - the entities become part of one coupled system. This is called an autocatalytic set, after the ability of molecules to catalyse each other's formation in the chemical equivalent of this arrangement.

The smallest parts of a system produce their own emergent properties, these are the lowest 'system' features and form the next level of structure in the system. Those system components then in turn form the building blocks for the next higher level of organization, with different emergent properties, and this process can proceed to higher levels in turn. The various levels can all exhibit their own self-organization (e.g. cell chemistry, organs, societies) or may be manufactured (e.g. piston, engine, car). One measure of complexity is that a complex system comprises multiple levels of description, the more ways of looking at a system then the more complex it is, and more extensive is the description needed to specify it (algorithmic complexity).

Energy considerations are often regarded as an explanation for organization; it is said that minimising energy causes the organization. Yet there are often alternative arrangements that require the same energy. To account for the choice between these requires other factors. Organization still appears in computer simulations that do not use the concept of energy, although other criteria may exist. This suggests that we still have much to learn in this area, as to the effect of resource flows of various types on organizational behaviour.

In nonlinear studies we find much structure for very simple systems, as seen in the self-similar structure of fractals and the bifurcation structure seen in the logistic map. This form of system exhibits complex behaviour from simple rules. In contrast, for self-organizing systems we have complex assemblies generating simple emergent behaviour, so in essence the two concepts are complementary. For our collective systems, we can regard the solid state as equivalent to the predictable behaviour of a formula, the gaseous state as corresponding to the statistical or chaotic realm and the liquid state as being the bifurcation or fractal realm.

Systems that use energy flow to maintain their form are said to be dissipative systems, these would include atmospheric vortices, living systems and similar. The term can also be used more generally for systems that consume energy to keep going e.g. engines or stars. Such systems are generally open to their environment.

A phenomenon that results in a system splitting into two possible behaviours (with a small change in one parameter), further changes then cause further splits at regular intervals until finally the system enters a chaotic phase. This sequence from stability, through increasing complexity, to chaos has much in common with the observed behaviour of complex systems, reflecting changes in attractor structure with variations to parameters.

Several other terms are loosely used with regard to self-organizing systems, many in terms of human behaviour. Autopoiesis is self-reproduction, the maintenance of form through time and flows; Extropy is growing organizational complexity. Homeostasis, Homeokinetics, Synergetics and Cybernetics (integrated control/feedback concepts) are other terms sometimes connected with SOS.

Since we are seeking general properties that apply to topologically equivalent systems, any physical system or model that provides those connections can be used. Much work has been done using Cellular Automata and Boolean Networks, with Alife, Genetic Algorithms, Neural Networks and similar techniques also widely used. In general we start with a set of rules specifying how the interconnections behave, the network is then randomly initiated and iterated (stepped) continually following the ruleset. The stable patterns obtained (if any) are noted and the sequence repeated. After many trials generalisations from the results can be attempted, with some statistical probability.

Some of these results are tentative, and subject to change as more research is undertaken and these systems become better understood. Many of these results are expanded and justified by Stuart Kauffman in his recent lecture notes, see:

- The attractors of a system are uniquely determined by the state transition properties of the nodes (their logic) and the actual system interconnections.
- Attractors result in the merging of historical positions. Thus irreversibility is inherent in the concept. Many scenarios can result in the same outcome, therefore a unique logical reduction that a state arose from a particular predecessor (backward causality) is impossible, even in theory. Merging of world lines in this way invalidates, in general, determination of the specific pre-image of any state.
- The ratio of the basin of attraction size to attractor size (called here SOF) varies from the size of the whole state space (totally ordered, point attractor) down to 1 (totally disordered, ergodic attractor).
- Single connectivity mutations can considerably alter the attractor structure of networks, allowing attractors to merge, split or change sequences. Basins of attraction are also altered and initial points may then flow to different attractors.
- Single state mutations can move a system from one attractor to another within the system. The resultant behaviour can change between fixed, chaotic, periodic and complex in any combination of the available attractors and the effect can be predicted if the system details are fully known.
- The mutation space of a system with 2 alleles at each node is a Boolean Hypercube of dimension N (number of neighbours). The number of adaptive peaks for random systems is 2 ** N /(N+1), exponentially high.
- The chance of reaching a random higher peak halves with each step; after 30 steps it is 1 in a billion. The time required scales in the same way. Mean length of an adaptive walk to a nearby peak is ln N. Branching walks are common initially, but most end on local optima (dead ends). This makes finding a single 'maximum fitness' peak an NP-hard problem. Correlated landscapes are necessary for adaptive improvement.
- Correlation falls exponentially with mutant difference (Hamming distance), becoming fully uncorrelated for K=N-1 landscapes. Searches beyond the correlation length (1/e) sample random landscapes. Hence the number of recombination 'tries' needed to find a higher peak doubles with each success.
- For such systems with high connectivity, the median number of attractors is N/e (linear), the median number of states within an attractor averages 0.5 * root(2 ** N) (exponentially large). These systems are highly sensitive to disturbance, and swap amongst the attractors easily.
- For K=0, there is a smooth landscape with one peak (the global optimum). Length of an adaptive walk is N/2, directions uphill decreasing by one with each step.
- For K=1, median attractor numbers are exponential on N, state lengths increase only as root N, but again are sensitive to disturbance and easily swap between attractors.
- For K=2 we have a phase transition, median number of attractors drops to root N, average length is also root N. The system is stable to disturbance and has few paths between the attractors. Most perturbations return to the same attractor.
- Systems that are able to change their number of connections (by mutation) are found to move from the chaotic (K high) or static (K low) regions spontaneously to that of the phase transition and stability - the self-organizing criticality. The maximum fitness is found to peak at this point.
- Natural genetic systems with high connectivity K>2 have a higher proportion of canalysing functions than would be the case if randomly assigned. This suggests a selective bias towards functions that can support self-organization to the Edge of Chaos.
- To create a relatively smooth landscape requires redundancy, non-optimal systems. Maximal compression (efficiency) gives a rugged landscape, and stagnation on a local peak, preventing improvement. Above suggests that systems alter their redundancy to maximise adaptability.
- The 'No Free Lunch' Theorem states that, averaged over all possible landscapes, no search technique is better than random. This suggests, if the theory of evolution is valid, that the landscape is correlated with the search technique. In other words the organisms create their own smooth landscape - the landscape is 'designed' by the agents...
- If we measure the distance between two close points in phase space, and plot that with time, then for chaotic systems the distance will diverge, for static it will converge onto an attractor. The slope gives a measure of the system stability (+ve is chaotic) and a zero value corresponds to edge of chaos. This goes by the name of the Lyapunov exponent (one for each dimension). Other similar measures are also used (e.g. Derrida plot for discrete systems).
- A network tends to contain an uneven distribution of attractors. Some are large and drain large basins of attraction, others are small with few states in their corresponding basins.
- The basins of attraction of higher fitness peaks tend to be larger than those for lower optima at the critical point. Correlated landscapes occur, containing few peaks and with those clustered together.
- As K increases, the height of the accessible peaks falls, this is the 'Complexity Catastrophe' and limits the performance towards the mean in the limit.
- Mutation pressure grows with system size. Beyond a critical point (dependent upon rate, size and selection pressure) it is no longer possible to achieve adaptive improvement. A 'Selection or Error Catastrophe' sets in and the system inevitably moves down off the fitness peak to a stable lower point, a sub-optimal shell. Limit = 2 * mutation rate * N ** 2 / MOD(selection pressure)
- For co-evolutionary networks, tuning K (local interactions) to match or exceed C (species interactions) brings the system to the optimum fitness, another SOC. This tuning helps optimise both species (symbiotic effects). Reducing the number S of interacting species (breaking dependencies - e.g. new niches) also improves overall fitness. K should be minimised but needs to increase for large S and C to obtain rapid convergence.
- In the phase transition region the system is generally divided into various areas of variable behaviour separated by fixed barriers of static components. Pathways or tendrils between the dynamic regions allow controlled propagation of information across the system. The number of islands is low (less than root N) and comprises about a fifth of the nodes.
- At the critical point, any size of perturbation can potentially cause any size of effect - it is impossible to predict the size of the effect from the size of the perturbation (for large, analytically intractable systems). A power law distribution is found over time, but the timing and size of any particular perturbation is indeterminate.
- Plotting the input entropy of a system gives a high value for chaotic systems, a low value for ordered systems and an intermediate for complex system. Variance of the input entropy is high for complex systems but low for both ordered and chaotic ones. This can be used to identify EOC behaviour.
- For a network of N nodes and E possible edges, then as N grows the number of edge combinations will increase faster than the nodes. Given some probability of meaningful interactions, there will inevitably be a critical size at which the system will go from subcritical to supracritical behaviour, a SOC or autocatalysis. The relevant size is N = Root ( 1 / ( 2 * probability) )
- Since a metabolism is such an autocatalytic set, this implies that life will emerge as a phase transition in any sufficiently complex reaction system - regardless of chemical or other form.
- Given a supracritical set of existing products M and potential products M' (with M' larger than M), equilibrium constant constraints predict a non-zero probability of forming products in the difference set M' - M. There will therefore be a gradient towards more diversity, in other words 'creativity', in any such system.
- Evaluating the above for the diversity we find on this planet shows that we have so far explored only an insignificant fraction of state space in the time the universe has existed. Thus the Universe is not yet in an equilibrium state, and the standard assumptions of equilibrium statistical mechanics (e.g. the ergodic hypothesis) do not apply.
- Protein diversity in the biosphere proves to be widely supracritical, yet the stability of cells requires partitioning to a subcritical but autocatalytic state. This balance suggests a limit to cell biochemical diversity and a self-organizing maintenance below that limit. This is related to the Error Catastrophe: too high a rate of innovation is not controllable by selection and leads to information loss, chaos and breakdown of the system.
- Two or more interacting autocatalytic sets that increase reproduction rates above that of either in isolation will grow preferentially. This is a form of trade or mutual assistance, an ecosystem in miniature.
- Such interacting sets can generate components that are not in either set, giving a higher level of joint operation, emergent novelty.
- If such innovation involves a cost, then the rate of innovation will be constrained by the payback period. This is seen in economic analogues, where risk and profit form a balance, as well as in ecological systems. Interactions must be net positive sum to be sustainable.
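The perturbation result above (any size of disturbance can trigger any size of effect) is classically illustrated by the Bak-Tang-Wiesenfeld sandpile model, which self-organizes to a critical state. The Python sketch below is an illustrative aside rather than code from any of the works discussed; the grid size, threshold and drop count are arbitrary choices.

```python
import random

SIZE, THRESHOLD = 20, 4                    # small lattice, classic BTW threshold
grid = [[0] * SIZE for _ in range(SIZE)]

def drop(y, x):
    """Add one grain at (y, x), relax the pile, return the avalanche size."""
    grid[y][x] += 1
    stack, size = [(y, x)], 0
    while stack:
        y, x = stack.pop()
        if grid[y][x] < THRESHOLD:
            continue
        grid[y][x] -= THRESHOLD           # topple: shed 4 grains
        size += 1
        stack.append((y, x))              # may still be unstable
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < SIZE and 0 <= nx < SIZE:
                grid[ny][nx] += 1         # grains at the edge fall off
                stack.append((ny, nx))
    return size

random.seed(0)
avalanches = [drop(random.randrange(SIZE), random.randrange(SIZE))
              for _ in range(5000)]
print("largest avalanche:", max(avalanches), "from identical single-grain drops")
```

Plotting a histogram of `avalanches` on log-log axes shows the roughly straight line characteristic of a power law: most drops do nothing, while a few identical drops trigger system-wide events.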
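The input-entropy measure mentioned above can be sketched for a 1D binary cellular automaton, broadly following Wuensche's idea: count how often each neighbourhood pattern is looked up at each step, and take the Shannon entropy of that distribution. The choice of rule 110, the ring size and the run length below are illustrative assumptions, not taken from the original work.

```python
import math
import random

def step(cells, rule):
    """One synchronous update of a 1D binary CA; also count lookup usage."""
    n = len(cells)
    counts = [0] * 8                      # usage count per 3-cell neighbourhood
    nxt = []
    for i in range(n):
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        counts[idx] += 1
        nxt.append((rule >> idx) & 1)
    return nxt, counts

def input_entropy(counts):
    """Shannon entropy (bits) of the neighbourhood-usage distribution."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

random.seed(1)
cells = [random.randint(0, 1) for _ in range(200)]
entropies = []
for _ in range(200):
    cells, counts = step(cells, rule=110)  # rule 110 shows complex behaviour
    entropies.append(input_entropy(counts))

mean = sum(entropies) / len(entropies)
var = sum((e - mean) ** 2 for e in entropies) / len(entropies)
print(f"mean input entropy {mean:.2f} bits, variance {var:.4f}")
```

As the bullet states, ordered rules settle to low entropy and chaotic rules to high entropy, both with low variance; a high variance of the input entropy is the signature of the complex, EOC regime.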
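The critical-size estimate for the subcritical/supracritical transition quoted above is simple enough to compute directly; `critical_size` is just a hypothetical helper name wrapping the rule of thumb N = √(1/(2p)) from the text.

```python
import math

def critical_size(p):
    """Node count at which a random interaction network turns supracritical,
    using the rule of thumb N = sqrt(1 / (2 * p)) quoted in the text."""
    return math.sqrt(1.0 / (2.0 * p))

for p in (0.5, 0.05, 0.005):
    print(f"interaction probability {p}: critical N ~ {critical_size(p):.1f}")
```

The rarer a meaningful interaction, the larger the system must be before the supracritical transition, but the growth is only as 1/√p, so even very improbable chemistries eventually cross the threshold.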

The above results seem to indicate that such system properties can be ascribed to all manner of natural systems, from physical, chemical, biological, psychological to cultural. Much work is yet needed to determine to what extent these system properties relate to the actual features of real systems and how they vary with changes to the constraints. Power laws are common in natural systems and an underlying SOC cannot be ruled out as a possible cause of this situation.

Few software packages relate to self-organization as such, but many do show self-organized behaviour in the context of more specialised topics. These include cellular automata (Game of Life), neural networks (artificial learning in self-organizing maps), genetic algorithms (evolution), artificial life (agent behaviour), fractals (mathematical art) and physics (spin glasses). These can be found via the relevant newsgroup FAQs.
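As a first hands-on example of the self-organized behaviour these packages demonstrate, here is a minimal Conway's Game of Life step in Python. This is a sketch using a set of live-cell coordinates, not taken from any of the packages listed.

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life on an unbounded grid of (x, y) cells."""
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell lives next step with exactly 3 neighbours, or 2 if already alive.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: five cells that reassemble themselves one cell diagonally
# along after every four generations, i.e. self-organized motion.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
print(state == {(x + 1, y + 1) for x, y in glider})  # → True
```

The local rule knows nothing about gliders; the moving pattern is an emergent property of the interactions, which is exactly the point of the packages above.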

Some self-organization programs are available from these sites:

CALResCo -

Santa Fe -

Jurgen Schmitz - ftp://ftp.Germany.EU.net/pub/research/ci/Alife/packages/boids/ - Boids for Windows, self-organising birds (Windows).

Rudy Rucker -

- http://www.calresco.org - CALResCo, home of this FAQ, introductions
- http://algodones.unm.edu/~bmilne/bio576/instr/html/SOS/sos.html - introduction
- http://armyant.ee.vt.edu/unsalWWW/cemsthesis.html - self-organisation in mobile robots
- http://arti.vub.ac.be/www/chaos/bib.html - brief EOC bibliography
- http://avs.iephb.ru/resinter.htm - self-organisation in biology
- http://chaos.mur.csu.edu.au/complex/library/0Self-organisation.html - Virtual Library for SOS
- http://foto.hut.fi/~markus/selforg.html - extensive links to SOS online papers/sites
- http://www.hia.com/hia/pcr/Kauffman.htm - self-organization as post-quantum physics
- http://hmt.com/cwr/boids.html - Craig Reynolds' Boids
- http://ishi.lanl.gov/symintel.html - self-organizing knowledge
- http://lslwww.epfl.ch/~moshes/alife_links.html - Complex Adaptive Systems
- http://lumpi.informatik.uni-dortmund.de/alife - Complex Systems & ALife
- http://pespmc1.vu.ac.be - Principia Cybernetica Web Project, philosophical aspects
- http://physserv1.physics.wisc.edu:80/~shalizi/notebooks/self-organisation.html - SOS
- http://physserv1.physics.wisc.edu:80/~shalizi/Self-organization/soup-done/ - Quantification
- http://views.vcu.edu/complex - VCU complexity research group
- http://websom.hut.fi/websom/ - WEBSOM Self-Org Maps
- http://www-personal.engin.umich.edu/~streak/bib - Complex Systems Bibliography
- http://www.acm.org/sigois/auto/Main.html - Self-Org, Autopoiesis & Enterprises
- http://www.alcyone.com/max/links/alife - Artificial Life links
- http://www.astro.cf.ac.uk/pub/Jos.Thijssen/sandexpl.html - Java sandpile
- http://www.brint.com/Systems.html - Complex Systems & Chaos Theory
- http://www.c3.lanl.gov/~rocha/ises.html - Selected Self-Organization
- http://www.ccs.fau.edu - The Center for Complex Systems
- http://www.cogs.susx.ac.uk/users/ezequiel/alife-page/complexity.html - SOS biblio
- http://www.cpm.mmu.ac.uk/~bruce/combib - Measures of Complexity
- http://www.cpm.mmu.ac.uk/~bruce/combib/selforganizing.html - self-org measures
- http://www.ezone.com/sos - SOS on the Web
- http://www.fmb.mmu.ac.uk/~bruce/evolcomp - What is complexity ?
- http://www.krl.caltech.edu/avida - The Avida Group
- http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/research/applications.html - Self-Organizing Networks
- http://www.physics.uiuc.edu/groups/complex.html - Complex & Nonlinear science
- http://www.radix.net/~crbnblu/assoc/oconnor/chapt1.htm - Systems Thinking
- http://www.rwcp.or.jp/people/yk/CCM/HICSS27/paper/CCM-ProblemSolving.html - stochastic problem solving by SO
- http://www.santafe.edu - Santa Fe Institute (especially see the following)
- http://www.santafe.edu/sfi/publications/Bulletins/bulletin-spr95/12debate.html
- http://www.serve.com/~ale/html/cplxsys.html - Complex Adaptive Systems
- http://www.stud.his.no/~onar/Ess/Back_to_Basics.html - Complex Systems Theory
- http://www.trincoll.edu/~psyc/homeokinetics/ - Homeokinetics
- http://www.wolfram.com/s.wolfram/articles/82-cellular/index.html - CAs as SOS
- http://xxx.lanl.gov/archive/adap-org/ - Archive of Adaptation/SOS papers

- Adami, Christoph. Introduction to Artificial Life (1998 Telos/Springer-Verlag). A good introduction with included Avida software, covering the main concepts and maths - see http://www.telospub.com/catalog/PHYSICS/ALife.html
- Ashby, Ross. An Introduction to Cybernetics (1964 Methuen)
- Ashby, Ross. Design for a Brain - The Origin of Adaptive Behaviour (1960 Chapman & Hall).
- Badii and Politi. Complexity: Hierarchical structures and scaling in physics (1997 Cambridge University Press). Technical and detailed review of the scope and limitations of current knowledge - see http://www1.psi.ch/~badii/book.html
- Bak, Per. How Nature Works - The Science of Self-Organized Criticality (1996 Copernicus). Power Laws and widespread applications, approachable.
- Blitz, David. Emergent Evolution: Qualitative Novelty and the Levels of Reality (1992 Kluwer Academic Publishers)
- Boden, Margaret (ed). The Philosophy of Artificial Life (1996 OUP). Essays on the concepts within the field, good background reading.
- Casti, John. Complexification: explaining a paradoxical world through the science of surprise (1994 HarperCollins). Takes a mathematical viewpoint, but not over technical.
- Cameron and Yovits (Eds.). Self-Organizing Systems (1960 Pergamon Press)
- Chaitin, Gregory. Algorithmic Information Theory (? Cambridge University Press) - see http://www.cs.auckland.ac.nz/CDMTCS/chaitin
- Cohen and Stewart. The Collapse of Chaos - Discovering Simplicity in a Complex World (1994 Viking). Excellent and approachable analysis.
- Coveney and Highfield. Frontiers of Complexity (1995 Fawcett Columbine)
- Deboeck and Kohonen. Visual Explorations in Finance with Self Organizing Maps (1998 Springer-Verlag)
- Eigen, Manfred. The Self Organization of Matter (?)
- Eigen and Schuster. The Hypercycle: A principle of natural self-organization (1979 Springer)
- Eigen and Winkler-Oswatitsch. Steps Toward Life: a perspective on evolution (1992 Oxford University Press)
- Emmeche, Claus. The Garden in the Machine: The Emerging Science of Artificial Life (1994 Princeton). A philosophical look at life and the new fields, approachable - see http://alf.nbi.dk/~emmeche/publ.html
- Formby, John. An Introduction to the Mathematical Formulation of Self-organizing Systems (1965 ?)
- Forrest, Stephanie (ed). Emergent Computation: Self-organising, Collective and Cooperative Phenomena in Natural & Artificial Computing Networks (1991 MIT)
- Gell-Mann, Murray. Quark and the Jaguar - Adventures in the simple and the complex (1994 Little, Brown & Company). From a quantum viewpoint, popular.
- Gleick, James. Chaos - Making a New Science (1987 Cardinal). The most popular science book related to the subject, simple but a good start.
- Goldstein, Jacobi & Yovits (Eds.). Self-Organizing Systems (1962 Spartan)
- Goodwin, Brian. How the Leopard Changed Its Spots: The Evolution of Complexity (1994 Weidenfield & Nicholson London). Self-organization in the development of biological form (morphogenesis), an excellent overview.
- Goodwin & Sanders (Eds.). Theoretical Biology: Epigenetic and Evolutionary Order from Complex Systems (1992 John Hopkins University Press)
- Holland, John. Adaptation in Natural and Artificial Systems: An Introductory Analysis with applications to Biology, Control & AI (1992 MIT Press)
- Holland, John. Emergence - From Chaos to Order (1998 Helix Books). Excellent look at emergence and rule-based generating procedures.
- Holland, John. Hidden Order - How adaptation builds complexity (1995 Addison Wesley). Complex Adaptive Systems and Genetic Algorithms, approachable.
- Jantsch, Erich. The Self-Organizing Universe: Scientific and Human Implications of the Emerging Paradigm of Evolution (1979 Oxford)
- Kampis, George. Self-modifying systems in biology and cognitive science: A new framework for dynamics, information, and complexity (1991 Pergamon)
- Kauffman, Stuart. At Home in the Universe - The Search for the Laws of Self-Organization and Complexity (1995 OUP). An approachable summary - see http://www.santafe.edu/sfi/People/kauffman/
- Kauffman, Stuart. The Origins of Order - Self-Organization and Selection in Evolution (1993 OUP). Technical masterpiece - see http://www.santafe.edu/sfi/People/kauffman/
- Kelly, Kevin. Out of Control - The New Biology of Machines (1994 Addison Wesley). General popular overview of the future implications of adaptation - see http://www.absolutvodka.com/5-0.html
- Kelso, Scott. Dynamic Patterns: The Self-Organisation of Brain and Behaviour (? MIT Press) - see http://bambi.ccs.fau.edu/kelso/
- Kelso, Mandell, Shlesinger (eds.). Dynamic Patterns in Complex Systems (1988 World Scientific)
- Klir, George. Facets of Systems Science (1991 Plenum Press)
- Kohonen, Teuvo. Self-Organization and Associative Memory (1984 Springer-Verlag)
- Kohonen, Teuvo. Self-Organizing Maps: Springer Series in Information Sciences, Vol. 30 (1995 Springer) - see http://nucleus.hut.fi/nnrc/new_book.html
- Langton, Christopher (ed.). Artificial Life - Proceedings of the first ALife conference at Santa Fe (1989 Addison Wesley). Technical (several later volumes are available but this is the best introduction).
- Levy, Steven. Artificial Life - The Quest for a New Creation (1992 Jonathan Cape). Excellent popular introduction.
- Lewin, Roger. Complexity - Life at the Edge of Chaos (1993 Macmillan). An excellent introduction to the general field.
- Mandelbrot, Benoit. The Fractal Geometry of Nature (1983 Freeman). A classic covering percolation and self-similarity in many areas.
- Nicolis and Prigogine. Self-Organization in Non-Equilibrium Systems (1977 Wiley)
- Nicolis and Prigogine. Exploring Complexity (1989 Freeman). Within physio-chemical systems, technical.
- Pines, D. (ed). Emerging Syntheses in Science, (1985 Addison-Wesley)
- Pribram, K.H. (ed). Origins: Brain and Self-organization (1994 Lawrence Erlbaum)
- Prigogine & Stengers. Order out of Chaos (1985 Flamingo). Non-equilibrium & dissipative systems, a popular early classic.
- Salthe, Stan. Evolving Hierarchical Systems (1985 New York)
- Schroeder, Manfred. Fractals, Chaos, Power Laws - Minutes from an Infinite Paradise (1991 Freeman & Co.). Self-similarity in all things, technical.
- Schweitzer, Frank (ed.). Self-Organisation of Complex Structures: From Individual to Collective Dynamics (1997 Gordon and Breach) - see http://www.gbhap.com/abi/phy/schweitz.htm
- Sprott, Clint. Strange Attractors: Creating Patterns in Chaos (? M&T Books). Exploring types of attractor with generating programs - see http://sprott.physics.wisc.edu/sa.htm
- Stanley, H.E. Introduction to Phase Transitions and Critical Phenomena (1971 OUP)
- Turchin, Valentin F. The Phenomenon of Science: A Cybernetic Approach to Human Evolution (1977 Columbia University Press). An online book covering similar concepts from an earlier viewpoint - see http://pespmc1.vub.ac.be/PoS/
- von Foerster and Zopf (Eds.). Principles of Self-Organization (1962 Pergamon)
- von Neumann, John. Theory of Self Reproducing Automata (1966 Univ.Illinois)
- Waldrop, Mitchell. Complexity - The Emerging Science at the Edge of Order and Chaos (1992 Viking). Popular scientific introduction.
- Wolfram, Stephen. Cellular Automata and Complexity: Collected Papers (1994 Addison-Wesley). Deep look at mostly 1D CAs and order/complexity/chaos classes - see http://www.wolfram.com/s.wolfram/books/ca-reprint/
- Yates, F. Eugene (ed). Self-Organizing Systems: The Emergence of Order (1987 Plenum Press)

Many studies of complex systems assume that the systems self-organize into emergent states which are not predictable from the parts. Artificial Life, Evolutionary Computation (including Genetic Algorithms), Cellular Automata and Neural Networks are the main fields directly associated with this idea, all of which fall under the general auspices of Complex Systems or Complexity Theory.

- comp.theory.self-org-sys - self organizing systems & sponsor of this FAQ
- comp.ai.alife - artificial life
- comp.ai.genetic - genetic algorithms and evolutionary computation
- comp.ai.neural-nets - neural networks
- comp.theory.cell-automata - cellular automata
- comp.theory.dynamic-sys - dynamic systems
- sci.bio.evolution - natural organization and evolution
- sci.fractals - fractal and self-similar systems
- sci.nonlinear - nonlinear and chaotic systems

This FAQ has been compiled and is maintained by Chris Lucas of the CALResCo Group. Comments, suggestions, requests for additions and particularly criticisms and corrections are warmly welcomed. Please feel free to EMail me anytime at clucas@calresco.org or post relevant messages to the Usenet newsgroup comp.theory.self-org-sys for discussion.

Thanks are due to many people who have contributed to this FAQ either directly, by
discussion and questions, or by influential publications. Especially (in alphabetical
order):

Per Bak, Jack Cohen, Kelle Cruz, Erik Francis, Tim Haug, Francis Heylighen, Josh Howlett,
Stuart Kauffman, David Kirshbaum, Chris Langton, William Latham, Graeme McCaffery, Yuri
Milov, Mike Monkowski, Gary Nelson, Joseph O'Connor, David O'Neal, Craig Reynolds, Zed
Shaw, Clint Sprott, Ian Stewart, Stephen Wolfram, Andy Wuensche, Qi Zeng.

Particular thanks are due to

The usual disclaimers apply: I take no responsibility for any errors contained in the information presented here or for any damages resulting from its use. The information is, however, accurate as far as I can tell.

This FAQ may be posted in any newsgroup, mail list or BBS as long as it remains intact and contains the following copyright notice. This document may not be used for financial gain or included in commercial products without the express permission of the author.

**Copyright 1997/8 Chris Lucas, all rights
reserved.**
