Introduction

There has been a steady interest, intensified over the last decade or two, mostly among cosmologists but also among scientists from other fields,1 in looking at the Universe (U) as an evolutionary entity. By that is meant that the process or processes that have taken place in U, from the initial Big Bang to the present time and beyond into the future, are evolutionary in nature. It also means that any evolutionary phenomenon that takes place anywhere within U, no matter how locally situated, is part (and parcel) of the same, all-encompassing, evolutionary process, whether we refer to the genesis of particles, planet formation, the beginning of life on Earth or human history.

Set in such a context, “evolution” is bound to become, almost inevitably, more abstract and general. If that is going to be the case, then the “evolution” we might end up talking about could be rooted in more general principles, principles that are less dependent on the specific context of biology (but still applicable to it). By extending the reach of evolution to consider U not only as an evolutionary entity but as the ultimate source of evolution, we hope to gain some insight into finding a possible “thread” connecting the totality of the reality of our U.

Lee Smolin, a theoretical physicist, wrote a book where he uses Darwin’s idea of evolution by natural selection to further his research and understanding in relation to cosmological issues.1 He quoted Charles Peirce in a most revealing way:

To suppose universal laws of nature capable of being apprehended by the mind and yet having no reason for their special forms but standing inexplicable and irrational is hardly a justifiable position. Uniformities are precisely the sort of facts that need to be accounted for […] Law is par excellence the thing that wants a reason […] the only possible way of accounting for the laws of nature, and for uniformity in general, is to suppose them the results of evolution.2

Eric Chaisson, a well-known cosmologist, has written several books on cosmic evolution.2,3,4 He is interested in finding where the order, the form and the structure that characterize all material things come from. He defines his field of interest as:

… the modern paradigm of cosmic evolution […] whereby changes, both gradual or episodic and generative or developmental, in the composition and structure of matter have given rise to galaxies, stars, planets and life.4

And then he states his goal:

We seek to know the nature and behavior of radiation, matter, and life on the grandest scale of all.4

In his work on cosmic evolution, Chaisson looks for general patterns and commonalities across what have been termed epochs.3 Chaisson again:

We can reliably trace a thread of knowledge linking the evolution of primal energy into elementary particles, the evolution of those particles into atoms, in turn of those atoms into stars and galaxies, the evolution of stars into heavy elements, and those elements into the molecular building blocks of life, and furthermore the evolution of those molecules into life itself, of advanced life forms into intelligence, and of intelligent life into cultured and technological civilization.4

If there is a thread that links the different phenomena throughout those epochs of U, that is, a particular recurring pattern (or patterns) that repeats itself through these different epochs, it should be explored, traced and made understandable in terms of a theoretical construct, some kind of framework encompassing this whole range of phenomena. If this proposition is feasible, it could very well become a fundamental area of research and also a natural way to bridge many fields of knowledge (a truly interdisciplinary setting). Moreover, it is the suggestion of this paper to place complexity theory as a possibly sound context in which to implement the long bridge across all the fields where evolution is manifested.

Howard Resnikoff5 made the following comment back in 1989:

…the ability of organisms to learn what they do not already know and the striking compatibility across species of the categories they create, suggests that there may be principles of pattern classification and categorization that are very general and perhaps even universal. And, if that be true, then the difficulty in drawing a sharp line between the quick and the dead -between the animate and inanimate matter- leads inexorably to the possibility that categorization and the creation of pattern may be a general and powerful feature of nature whose laws play a silent role in establishing and maintaining at every scale the order that we find in the universe and within us.5

From the perspective of nonequilibrium thermodynamics, Chaisson puts it very bluntly:

When reduced to essentials, life is not much different, apart from its degree of complexity, than galaxies, stars, or planets. All these structured systems, including life itself, increasingly order themselves by absorbing energy from, and emitting entropy into, their surrounding environments.4

In a relatively recent paper, Carlos Gershenson attempts to generalize (“to extend”, using his term) information so that the whole range of reality present within the physical universe becomes the subject of study, from elementary particles all the way to galaxies, including any phenomena within it, in particular life and cognition. He makes it very clear that:

This work only explores the advantages of describing the world as information. In other words, there are no ontological claims, only epistemological.6

There is good reason to place such a caveat. We cannot affirm with any certainty whether there is an ontological thread from the beginning of U all the way to here and now. Many scientists from many fields are inclined or even convinced that there is such a thread, a set of links connecting us to the beginnings of U. The problem is that it is not known what that thread might be. Gershenson assesses:

If atoms, molecules and cells are described as information, there is no need for a qualitative shift (from non-living to living matter) while describing the origin and evolution of life: this is translated into a quantitative shift (from less complex to more complex information).6

His observation points to an important issue: that of qualitative change. How should such a change be understood within a conceptual framework2 as the Universe becomes more complex? Murky and difficult as it is, this subject of inquiry should be explored, given that it is potentially fertile territory.

I want to end this Introduction with a quote from the late Brian Goodwin:

Integrating biology and culture with physical principles will be something of a challenge, but there are already indications of how it can be achieved. For example, the self-similar fractal patterns that arise in physical systems during phase transitions, when new order is coming into being, have the same characteristics as the patterns observed in organismic and cultural networks involved in generating order and meaning. The unified vision of creative and meaningful cosmic process may well replace the meaningless mechanical cosmos that has dominated Western scientific thought and cultural life for the past few hundred years.7

A problem in search of a solution. The genesis of evolutionary systems

My field of inquiry is social systems. A good number of years ago I became motivated by the issue of technological transfer among countries; specifically, between developed and less-developed countries. The idea behind the technological transfer was mostly twofold: to improve living conditions (food and water supplies, health, etc.) and to induce development, the transfer playing the role of a catalyst.8 At the time (1979), the record had not been good, particularly concerning development, success being the exception rather than the norm.

My initial understanding of why the record on this issue had been poor was that, at the core of the failures, there was a lack of understanding of the principles involved in any act of transferring technology, which is an interaction between two different social systems by means of technological artifacts.

The problem had the added component of being a “transfer” between socio-economic systems with substantially different cultural contexts. So there was also a need to understand the impact, on the “transfer” processes, of the stark differences in cultural contexts involved in the transfer.

Based on these initial perceptions of the problem, any framework to be put forward had to address three main systemic subject matters:

  1. proto-systemic: the nature and definition of social systems stemming from their specific cultural contexts.6 Also, and importantly, within it: should those proto-systems base their definition on the assumption that their respective contexts are separate from the systems, that is, contexts taken as a given backcloth (which was the usual way at the time), just as the space-time background was defined in Physics until recently? Or, alternatively, should proto-systems and their contexts be considered as an inseparable whole, where the systems are defined as a set of restrictions or constraints on their own environments (a much more intrinsic and deeper interconnection between the two)?

  2. intra-systemic: the unfolding of the system, i.e., development and developmental processes within the system;

  3. inter-systemic: relations between systems, each one stemming from a different cultural and geographical context, at least when developed and less-developed countries were to be involved.

I worked on a conceptual scheme to be called Evolutionary Systems Framework, ESF for short.9,10,11 This attempt was initially stated as a heuristic set of principles trying to deal with two types of change: evolution and development.7 Later, it became clear that the potential applicability of the framework was much broader than initially intended.8

One of the difficulties often encountered at the time when describing problems of evolution and development was the lack of a fully adequate formal representational language. In many cases, assumptions about the natural systems do not match the definitional boundaries of the mathematical structures chosen for their representation. We cannot blame the mathematician for not figuring out what the biologist or social scientist, attempting to formalize in their own field, is going to need or do with those formal constructs. So biologists and social scientists (let’s not forget physicists and other “hard” scientists too) are often left facing what turn out to be subtle formalization issues that could only be properly stated and resolved at the foundational level.9

An important premise of the framework was to assume a process-oriented approach, asserting the relational nature of social systems. These relations were not a given for the system but had to be socially constructed, i.e., they were the outcome of a dynamic process that had to have taken place in space and time throughout the system and in interaction with the environment. Space and time are not absolute but context dependent10 and therefore, in order to define a social system, its spatio-temporal grid has to be determined (the very subject of proto-systems).

The framework ended up being a set of two hierarchical orderings (or organizing principles): one considering the system’s growth through spatio-temporal development (hierarchy), the other considering the specific evolutionary status of each interacting system (metahierarchy).

A framework and some principles

Let me start this section by quoting Chaisson again:

Emerging now is a unified worldview of the cosmos, including ourselves as sentient beings, based upon the time-honored concept of change. Change -to make different the form, nature, and content of something- has been the hallmark in the origin, evolution, and the fate of all things, animate or inanimate. From galaxies to snowflakes, from stars and planets to life itself, we are beginning to identify an underlying pattern penetrating the fabric of all the natural sciences– a sweepingly encompassing view along the ‘arrow of time’ of the formation, structure, and function of all objects in our multitudinous Universe.4

The intent of this paper, written in a broad, descriptive way, is to reinforce the need for this “underlying pattern” that Chaisson repeatedly refers to in his work.

All changes, whether developmental or evolutionary, are “patterns” (or organizing principles) present throughout cosmic evolution. Because of that, they should be presented as part of a framework. We propose an Evolutionary Systems Framework, where some “principles” are at play. Three principles are involved:

  • Principle of Combinatorial Expansion (PCE);

  • Principle of Generative Condensation (PGC);

  • Principle of Conservation of Information (PCI).

Principle of combinatorial expansion

This principle deals with the developmental process in which an evolutionary system (ES), coming into being, unfolds through a growth process to reach its full potential and, eventually, its senescence and death or some kind of asymptotically stable state (depending on the kind of system).

The developmental or growth process takes the form of a hierarchy of spatio-temporal levels (scales), built from the bottom up. It is implemented through systemic interactions among the system’s components and between those components and their surrounding environment.

A system’s levels or scales of interaction are built sequentially, one onto another; that is, the lowest level in the hierarchy is the first and takes place at the shortest space-time scale of the system. The components involved at this level are the most elementary ones. Also, these first-level interactions are the ones with the strongest binding force. The interactions are a kind of discrimination process that establishes, in the end, an ordering of the initial elementary components into equivalence classes but, importantly, only at the scale of interaction for that level. At one point, at that level, there will be sufficient “discriminated” components so that a reverse process within the level can begin to take place. This is a process of “grouping” together the elements from the different equivalence classes. By doing so, the system initiates the build-up of components for the next level up, each component made out of groups of components of the level below.

Here is the gist of this principle: from a collection of simple, identical components, through interactions among the components and between those components and the environment, the system builds levels ranging from the shortest scale to the largest one, relating components in each level via coupling constants11 which, themselves, are determined by the inverse of the number of states of that level (its state space).
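The following minimal sketch (in Python) is not the framework’s formalization, only one way to make the discriminate-then-group step and the coupling-constant relation concrete; the choices of integers as elementary components, parity as the interaction outcome, and a fixed group size are purely illustrative assumptions.

```python
def build_next_level(components, interact, group_size):
    """One PCE step, schematically:
    1. 'Discriminate': sort components into equivalence classes according
       to the outcome of their interaction at this level's scale.
    2. 'Group': assemble members drawn from the different classes into
       composite units, the components of the next level up.
    Returns the new components and the level's coupling constant, taken
    here as 1 / (number of states, i.e., of equivalence classes)."""
    # Discrimination: equivalence classes keyed by the interaction outcome.
    classes = {}
    for c in components:
        classes.setdefault(interact(c), []).append(c)

    coupling = 1.0 / len(classes)  # fewer states -> stronger coupling

    # Grouping: interleave members of the different classes, then chunk them
    # into tuples that act as single components at the next, larger scale.
    mixed = [c for row in zip(*classes.values()) for c in row]
    next_level = [tuple(chunk) for chunk in zip(*[iter(mixed)] * group_size)]
    return next_level, coupling


# Toy run: 16 "identical" components, discriminated by parity, grouped in pairs.
level0 = list(range(16))
level1, g1 = build_next_level(level0, interact=lambda c: c % 2, group_size=2)
print(len(level1), g1)  # 8 second-level components, coupling 0.5
```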

Principle of generative condensation (PGC)

This principle deals with the process of generating new ESs from previous/older ones.12 This kind of process takes place in already developed ESs, in the clusters or groupings at their highest level. An ES, having reached full development, has in its last level a population of clusters linked by the weakest coupling within the system. With the continued expansion of the system, the links among clusters become weaker, to the point that the coupling force begins to act more strongly within the individual clusters than between them.

Left to themselves (and to their own couplings), the clusters go through a phase of increasing environmental pressure, starting a process of contraction. As the pressure mounts and becomes stronger than the couplings of the lower levels, a break-up of the relational structures of the previous system begins to happen.

In the end, either there is a complete disintegration of the (previous) ES, or the components at the lowest levels are able to withstand the external pressure and, eventually, rebuild again.

At this point the important thing to note is that the set of “new” elementary components is not the same as the elementary components of the previous ES. These new ones have, in fact, been enriched. Some of the relational structure of the previous ES (which includes, of course, the components that carry such structure) was not disintegrated in the contraction/condensation process and, therefore, becomes embedded as part of the elementary components of the “new” ES. So, in the end, that “new” system is an enriched system, with added complexity. If we were to measure the shortest algorithm that could reproduce the entire system, there would be a noticeable difference in length between the algorithm that reproduces the previous ES and the one that reproduces the next system stemming from it, the latter being the longer.
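Read in standard terms, the “shortest algorithm” comparison is a statement about algorithmic (Kolmogorov) complexity; a compact way to phrase the claim, with notation of my own choosing rather than the paper’s, is

\[
K(\mathrm{ES}_{n+1}) > K(\mathrm{ES}_{n}),
\]

where K denotes minimal description length and ES_{n+1} is the system condensed out of ES_{n}: since part of the relational structure of ES_{n} survives as built-in structure of the new elementary components, the new system cannot be described as briefly as its predecessor.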

Ultimately, an ordering could be established among all the ESs (as they come into being from previous ones), yielding a sequence of systems of increasing complexity. This sequence could be named a metahierarchy, since it is, in fact, a hierarchy of hierarchies, and it could be somewhat similar to Chaisson’s “epochs.”

From its very beginnings, the universe has gone through a process that in the general sense is similar to that of the origin of life. A sequence of evolutionary events is structuring the universe where each emergence is the starting point for the next one in the next level.12

Principle of conservation of information

The issue this principle addresses arises whenever a (new) system comes into being from a previous, less complex one. What the principle tries to encapsulate is the need, within the new system, to accumulate, by compressing, all the relevant information present in the previous system. By carrying such accumulated information, the new system can then face new, more complex environments which will demand more complex tasks.

The accumulated information from the “old” system is necessary for the new system to survive in a new environment, to the extent that the old environment is embedded in the new environment. But, fulfilling that necessity is not sufficient because the new system will have to face new situations that will require new abilities or patterns of behavior.

Let’s put it in a slightly different way: the new system would not survive if it didn’t carry with it all the information from the old system that is relevant for the new. But that information, being necessary, is not sufficient.

There is a need to compress the “functional” or “dynamical” information of the old system into the new one by storing (or accumulating) such information as structure. So, the functional information (information that came about through interactions) in the previous system, becomes stored as structure (hardware, so to speak) in the new one.

There is the old saying that “Nature abhors a vacuum.” In this case, it is about the “information” vacuum.

The need to include past functional information from the old system into the “building blocks” (structure) of the new one, reminds us, to some degree, of the construction of a new (higher) level from a previous (lower) level within the same system’s hierarchy. Within the system, it is necessary to find a way to compress the information available in the lower level, before initiating the construction of a new level.

The difference between the generation of a hierarchy and the generation of a level within a hierarchy is, at least, twofold:

  1. In the generation of a hierarchy, random access to information can play a role, sometimes a critical one, as in the case of mutations.

  2. The impact of whatever changes might happen in the generation of any new system is always evolutionary, whereas in the generation of levels within a system it is developmental.

The source of commonality among the differing cosmic systems

We come to an important point. To find the source of commonality among systems in the context of cosmic evolution is, to a considerable extent, to find the “common thread” mentioned by Chaisson at the beginning of the paper.

Here is a succinct answer: every single system, from the physical universe, U, up to us humans, all under the rubric of “cosmic evolution,” faces the same daunting challenge: to model an environment that is bigger (most of the time, much bigger) than the system itself and than the resources the system has to model it. Even in the case of star formation, where the new elements are “cooked” inside the stars, the challenge is also environmental pressure.13

All the characteristics of the cosmic systems, all the principles involved in them, seem to veer towards facing that same challenge, successfully. Successfully indeed if we consider the restrictive implications of the Second Law of thermodynamics and how, in spite of it, there is so much structure and complexity throughout U.

Alternatively, we can think about this issue as the quintessential combinatorial explosion problem,14 a problem that has been overcome every step of the way from the Big Bang to now, as reality stubbornly tells us, by example, of its successes.

There are some common characteristics of cosmic systems (to be presented later in the paper) that ultimately point towards a commonality of structural design. We propose that such commonality of design is a response to a common problem. All systems have to interact with and model their respective environments; environments that are considerably bigger than the systems themselves.

Systems that develop, like open, self-organizing systems, have two ways to minimize the impact of the combinatorial explosion: indistinguishability15 and hierarchies. The first initiates the process of systemic growth at the lowest possible degree of complexity (lowest entropy); the second, through level construction within the system’s hierarchy, slows down the exponential relational growth (and, consequently, the growth of its entropy).

The developmental change of an open system is materialized by its growth. Because the growth process takes place through the continuous interaction of the system’s components, among themselves and with their environment, we need to know how these relational dynamics are manifested.

As previously mentioned, open, natural systems consist of large numbers of relatively elementary, identical (indistinguishable) components that:

…can exchange information through a communication network that binds them together. This is a description of a parallel or concurrent computer…5

This analogy with parallel, concurrent, distributed architecture in computers is useful because, if for no other reason, it points towards an actual physical model, a model that can also be used as a simulation device. Moreover:

Since the elementary components are assumed to be identical as well as simple, it follows that complex behavior can only arise as a consequence of collective properties of the system mediated by the topology of the communication network and the ease with which its links can transmit information.5

Let us present the situation again. We start with a large number of system components that will interact among themselves and with the surrounding environment. If, from the beginning, every component were to interact with any and every other component, plus the interactions with the system’s surroundings, the number of connections would become insurmountably large. We are in the presence of a “combinatorial explosion,” where the system collapses under the sheer number of interconnections.
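The growth rates involved can be stated plainly (this is a standard counting fact, not a formula from the paper): with N components, full pairwise connectivity already requires

\[
\binom{N}{2} = \frac{N(N-1)}{2}
\]

links, and if relations among groupings of every size are admitted, the number of possible multi-component relations grows like 2^N; either way the count quickly dwarfs N itself, which is the “combinatorial explosion” referred to above.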

At this point we could make good use of Ross Ashby’s recommendation:

… when a designer attempts to design […] a system in which all the parts interact fully, complexity at the outputs can often be ignored: it is complexity at the inputs of the system that is to be feared.13

Ashby also made, many years ago,14 two additional points concerning complexity that strengthen the view expressed in the quote above. One is that there is a hidden simplicity within systems that can be exploited to the modeler’s advantage by reducing the order of the relations in the system’s formalization. The second point is the link he made between fully connected systems and gross instability and, therefore, non-viability.

Keeping this scenario as simple as possible for clarity,16 the designer should use parallelism and concurrency as a way to reduce the number of inputs, and not sequentiality, where elementary units are connected to other units from the start, thus increasing the initial complexity of the system.17 In other words, the designer can use architecture to drastically reduce the number of initial inputs. The second avenue by which the designer can reduce inputs is by means of hierarchies. The designer should find a way to substantially compress the state space of each level so that, for the construction or implementation of the next level, the compressed version of the lower-level state space is used to build it.18
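As a rough quantitative sketch of these two avenues, compare a flat, fully connected system with a hierarchically decomposed one; the group size k, the assumption that N is a power of k, and the restriction of links to within-group wiring are simplifying assumptions of mine, not part of the framework.

```python
from math import comb

def flat_links(n):
    """Every component wired to every other component: quadratic growth."""
    return comb(n, 2)

def hierarchical_links(n, k):
    """Components wired only within groups of size k; each group then acts
    as a single component of the next level up.  Assumes n is a power of k."""
    total, units = 0, n
    while units > 1:
        groups = units // k
        total += groups * comb(k, 2)  # links internal to each group
        units = groups
    return total

n, k = 10**6, 10
print(f"flat:         {flat_links(n):,}")             # 499,999,500,000
print(f"hierarchical: {hierarchical_links(n, k):,}")  # 4,999,995
```

Under these toy assumptions the hierarchical wiring grows roughly linearly in N (about Nk/2 links) instead of quadratically, which is the sense in which level construction “slows down” the relational growth.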

Each new level gives, to the system in question, access to a wider scale of environmental phenomena with which to interact and increases its sustainability and, hence, its stability as a system.

It was mentioned that, in developmental processes, there is a conflict or dilemma between an increase in complexity, as the system develops, and the stability of the system. We saw that using “hierarchical decomposition”15 and a parallel, concurrent, distributed architecture in the system was the way to reduce the increase in the number of connections through the growth process. Talking about the former, Nicolis states:

(…) systems possessing strange attractors effectively compress the behavior of a system whose number of degrees of freedom equals the dimensionality of the state-space in which the strange attractor is embedded as a compact subset.15

In relation to the latter option, Resnikoff says:

The use of hierarchical organization to increase efficiency is intimately related to the notion of parallel processing of information.5

Summing up: the big problem or dilemma of any open, self-organizing system, in relation to development and developmental changes, is the incompatibility, in principle, between complexity and stability, between growth and the system’s controllability: two sides of the same coin.

Both sides are essential to the system and to the system’s solution to the challenge of development.

One last thing before closing this section: it is very important to emphasize the difference between the change occurring in building a new level within a hierarchy (developmental change) and generative or evolutionary change:

  • In both cases there is a change from function to structure (or from software to hardware, using the computer analogy).

  • The difference between them is:

    1. In a generative change, the change is radical: it affects the initial building blocks and, therefore, the whole system.

    2. In a developmental change, moving up a level (within a hierarchy) is a relative change: a change in spatio-temporal scale and in components (which are groupings of lower-level components), but one that in no way affects the nature of the initial building blocks of the system in question.

A generative change implies an evolutionary change.

A developmental change implies a spatio-temporal growth (within the hierarchy) of the system.

Indistinguishables

Recalling Ashby’s warnings mentioned above, we could ask ourselves: what would be the optimal design for a system addressing the combinatorial explosion? The elementary answer to this difficult and crucial question is as follows: the one in which the system starts from an initial state of minimal connectivity which, formally speaking, translates into relations of the lowest order (equivalently, lowest entropy). So, taking this position to its logical conclusion, we should aim at designing a system that starts with zero connectivity between system components or, alternatively, for its formalization, exclusively involving relations of the lowest order (i.e., unary relations) which, as Ashby reminded us, are mere properties of the system.14
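To pin down the terminology (this is my gloss on the cited Ashby usage, not a formula taken from his text): for a set S of components, a relation of order n is a subset of the n-fold product S^n, so

\[
R_{\text{unary}} \subseteq S, \qquad R_{\text{binary}} \subseteq S \times S, \qquad \ldots, \qquad R_{N\text{-ary}} \subseteq S^{N}.
\]

A unary relation is thus a property that each component has or lacks on its own, involving no connectivity at all, whereas a fully connected system of N components corresponds to a single relation of maximal order N.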

Given a system made out of many parts, we invoke indistinguishability when no part of such a system can be differentiated from any other.

The standard formal procedure for representing indistinguishability, whether in Physics16 or in more general scientific contexts,17 is in terms of direct sums. By using direct sums we make a very strong reductionistic assertion about the nature of the system itself and, thus, reduce the field of applicability of indistinguishables to a very narrow class of very simple (mechanistic) entities.17 If this were the only option, then it would leave a broad and important range of systems (open systems) outside the realm of indistinguishability.18

Rosen’s synthetic and analytic approaches to modeling

Robert Rosen has written at length on the issues of formalization and modeling of systems. He ranges across a wide variety of systems, from quantum physics all the way to the social sciences. His last book17 is a remarkable work, and the previous one, Anticipatory Systems,19 practically established a whole new field of inquiry.

In Life Itself,17 Rosen divides the area of modeling into two main approaches: synthetic and analytic.

  1. Synthetic models: The very essence of this modeling scheme is the underlying assumption that a system, X, can be comprehensively conveyed by the union of disjoint (non-overlapping) subsystems or fractions of X. Moreover, those “fractions” have to be context-independent so that a particular definition is valid in any environment.20 The mathematical representation of such a system takes the algebraic form of direct sums (in set theory, union of disjoint sets). This is clearly a reductionist view. By accepting fractionation it is implied that none of the connectivity among systemic components has any relevance for the system as a whole.

  2. Analytic models: This approach starts with the whole system as a given and then proceeds to tear it to pieces. The system is defined as a cartesian product or direct product. Any piecewise solution will be implemented by means of quotient sets. Because what is given in this case is the totality of the system, we cannot assume disjointness among subsystems, and any severing of the whole in terms of quotient sets will not permit us to recover the system in full. According to Rosen, the inability to recover all the systemic information from the totality of the pieces into which the system has been cut is a manifestation of the irreducibility of semantics into syntax. (A minimal formal sketch of both constructions follows this list.)
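A minimal set-theoretic sketch of the two schemes just described, with notation of my own choosing rather than Rosen’s, is:

\[
\text{synthetic:}\quad X \;\cong\; \bigsqcup_{i} F_{i}
  \quad (\text{disjoint fractions, } F_{i} \cap F_{j} = \varnothing \text{ for } i \neq j),
\qquad
\text{analytic:}\quad X \;\longrightarrow\; \prod_{i} X/\!\sim_{i}
  \quad (\text{quotients of the given whole}).
\]

In the synthetic case the whole is rebuilt exactly from the parts; in the analytic case the canonical map into the product of quotients is, in general, not invertible, which is the formal face of the connectivity that Rosen says is lost.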

The big dilemma in the synthetic-analytic dichotomy is the following: (1) If we start with a disjoint union of subsystems (“fractions”), we can construct the system, but, by assuming fractionality, we dismiss connectivity between subsystems as irrelevant. This amounts to saying that whatever systemic information there is, is all inside the subsystems (it is from that inner information within the “fractions” alone that a system can be built). (2) If we start with the system as a whole, it cannot be partitioned (analyzed) without losing essential information (the connectivity among parts); so much so that, once severed into pieces of analysis, the system as such is lost. Rosen’s view on fractionality is that only simple systems, i.e., machines, can be fractionated and built from pieces. In this particular class of systems, syntax and semantics coincide. When that is not the case, we are in the presence of complex systems. In Rosen’s own words:

The identity of these two quite different ways of talking about ‘states’ of X is a direct consequence of supposing that ‘analysis’ into fractions, and ‘synthesis’ from these same fractions, are inverse operations20

A third option for modeling

The basic consideration I want to make about what was said in the last section is that the synthetic and analytic approaches given by Rosen, in the formalization of natural systems, do not necessarily cover all the possible options.

In physics16 and elsewhere, direct sums and direct products are defined from an initially given universe. If we maintain this scenario, our formalizations of natural systems will also have an initially given universe: in this case, one representing what we call the environment.19

The fact that direct sums and direct products are defined from the continuum20 is relevant. First of all, concerning direct sums, it means that those entities are distinct in terms of numerosity. In other words, at the very least, we can see or measure how many there are. What does this mean? It means that the formalization that we have chosen allows us to represent the entities so that they can be observed, as a totality, from the outside. How else could we see their numerosity if we weren’t observing those entities from the outside? We can subsequently infer that this is a macro-view; it is “macro” because, even if the relational information that binds the pieces (“fractions”) together is missing, that information turns out to be, in the case of direct sums and as noted by Rosen, trivial. What is not trivial is the fact that we can see those pieces in their entirety, that is, all at once. To see the pieces means that there is a given spatio-temporal context within which those pieces can be seen.

In the case of direct products it is even clearer that the observer is outside of the system and staging a macro-view of it, because the system is formalized as a whole from the outset. The common ground between direct products and direct sums is the implicit assumption that a spatio-temporal background is given. This is, precisely, the problem. From a process-oriented perspective such as ours (a developmental process), spatio-temporal relations are not there as a background beforehand. The system builds those relations: that is the task of development. What I see missing in Rosen’s approach is the process of development itself (within the system). It seems to me that Rosen starts with a fully developed system, and it is that developed system that he studies and to which he applies the synthetic and analytic formalizations. By not taking into consideration the process of development, the synthetic approach loses all its validity in the realm of open systems.21

From model design to system building

Order is a clear challenge to any system coming into being.22 Order is costly, materially costly. At the same time, a system without order cannot exist.

The construction of order by the system has to be cost-efficient. The challenge of any system open to the environment is to accommodate two opposing fundamental necessities: the need to expand over the environment (or, looking from the inside, the need to unfold the system’s potentiality) and the need to control that environment (looking from the inside again: to model that environment). The only way to maintain a balance between those opposing drives is to have a system that, coming into being, starts with just a set of initial properties (lowest order relations) and expands over the environment by means of a minimal set, or hierarchy, of scales (in other words, levels).21

How can we visualize and formalize a system defined by unary relations? The standard formal procedure of direct sums does not go deeply enough into the phenomenon of indistinguishability to allow us to do that. The essential conceptual piece that is missing in the way we presently see (or formalize) indistinguishability can be placed along the lines of the exo-endo dichotomy.22 In fact, exo-indistinguishability is the version portrayed by the direct-sums formalism: the observer can see the (indistinguishable) pieces.

An observer is anything but a neutral constituent; s/he always brings with his/her presence, at the very least, a given spatio-temporal backcloth. The only case in which the observer does not bring anything from the exterior into the system is when s/he is inside the system. But there is more. For developmental systems, or systems that accumulate information or engage in any kind of refinement process with the environment, there are different stages which carry different (endo as well as exo) views. The endo-observer will always be within the contours of the systemic relational web,23 but s/he can be placed at the bottom of that web, that is, at the initial, local, short-ranged, lower-order relational level; at the final, top, global, long-ranged, higher-order relational level; or anywhere in between. In other words, the relational web is a hierarchy of relations among system components and between system components and the environment and, as such, contains several levels. The very lowest level is the one containing unary relations (Ashby’s “properties”): that is where endo-indistinguishability lies. Inside the system, and between consecutive levels, it is possible to see, from the level above, the level below; this is a control relation.24

Endo-indistinguishability is the one which places the observer inside the system and at its lowest possible order of relations; that is where the system must be defined: only in terms of properties applying to each and every one of the system’s components. Formalization by means of Rosen’s direct sums at this lowest, most primitive level of the system’s existence cannot convey the phenomenon at play. We have to realize that, initially, there is no spatio-temporal connectivity between components: each component is only related to itself, which is what the meaning of “unary relation” ultimately is, according to Ashby.25 In that context we cannot claim any sort of distinctiveness, not even numerosity. In natural systems, the counterpart to numerosity implies some kind of distinction in space and in time, although no distinction between components.

In a previous paper,21 I pointed out the need to consider three kinds or aspects of indistinguishability: spatial, temporal and component related. If we were to choose some hardware realization of those concepts, we could say that “component” indistinguishability takes the form of sameness of system objects; “spatial” indistinguishability takes the form of parallelism; “temporal” indistinguishability takes the form of concurrency.

Some characteristics of systems that develop26

  1. They are natural systems and therefore are open: systems that exchange energy and matter with their environment and, in some cases, also information.

  2. They are composed of large numbers of elementary, identical (indistinguishable) components.

  3. Those elementary components are either all present from the beginning or -more commonly- are generated or brought into the system through the matter/energy exchange with their specific environment.

  4. The components interact among themselves and with the surrounding environment.

  5. Through the process of interaction, new systemic elements come into existence by combining components into groupings to form new levels of components.

  6. This process of making new levels of components by grouping them, allows the system to grow and thus, establish what ends up being a hierarchical structure.

  7. The integrity and stability of the levels is maintained by the strength of their binding force (coupling), which is equivalent to the inverse of the number of states in the state space of that particular level.15

  8. In this hierarchy, a space-time relational systemic structure is constructed and, within it, a model of the environment is implemented.

  9. These systems can be defined in terms of global properties. These global properties resemble Planck’s natural units.23 They serve as “filters” or “ontological gatekeepers” of the system in question. Anything “filtered” is systemically apprehended, that is, it can, as it were, be metabolized by the system. Anything below or above the realm of those “filters” cannot, by definition, be dealt with systemically.

  10. The systems are built bottom-up, analogous to a parallel, concurrent, distributed computer architecture.

  11. They are self-organizing systems.

  12. Their topology could be self-similar, self-affine or, in biology, the result of an allometric growth process.27

  13. Their basic mode of interaction is through “discrimination” and “integration.”

Discussion

We need to look at evolution on its grandest scale (remember Chaisson). We cannot afford not to look at one of the most potentially salient characteristics of our broadest of environments, our universe: the possibility of a thread that links, through all the different epochs, the history of our U.

This “thread,” in essence, has to be a certain mechanism of an evolutionary nature, capable of rendering the whole sequence of epochs. ESF is an attempt to portray such an evolutionary mechanism schematically, i.e., in a simple and general form. We have to negotiate between being general enough to be valid through all the different epochs and not so general that we end up harvesting trivialities.

At the beginning of the paper, complexity theory was mentioned as a potential context where we could develop “the long bridge across all the fields where evolution is manifested.” We know that all the systems involved in the cosmic evolution scenario are natural systems, open to the environment, and are therefore, by any standard definition of complexity, complex systems. The emphasis in ESF is on the processes within cosmic evolution and not on complexity per se. Moreover, we are referring to natural systems and not artifacts.28

What can be seen in this 13-plus-billion-year journey of our U (through the ESF prism) is a sequence of cycles (or epochs?), each cycle stemming from a previous one. Throughout this sequence of cycles there is an evolutionary mechanism at play, the one invoked by the principle of generative condensation (PGC). The main factor present in this mechanism is the formation of qualitatively new, more complex systems from older ones, which thus acts as a bridge between cycles from the beginning, to now, and into the future. How this bridge is built is what the PGC is all about.

The sequence of cycles, as a whole, reflects a process of increased complexity within each new system and its collection of elementary components. So each cycle represents a more complex system or set of systems than the previous ones and, a fortiori, the sequence itself (the “thread”) is a progression towards increasing complexity. Also, as complexity increases, the environments from which the more complex systems stem become smaller, to the point that we can notice an inverse relation between environment size and degree of complexity. In a sense, we could say that it is somewhat analogous to the relation between “intension” and “extension” in philosophy, logic or linguistics.24

This inverse relation, as the sequence of cycles progresses, means possibly having many qualitatively new systems within an epoch. This is true in the chemical, biological and cultural epochs, and particularly in the last two.

Now, if we shift our perspective from cycles between systems to changes that occur within a system, the mechanism changes and the principle involved changes too: it is the principle of combinatorial expansion (PCE).

There are two aspects to PCE. First, by defining a system as an initial collection of many simple and similar (indistinguishable) components, we separate semantics from syntax; that is, the attributes that determine the collection of elementary components (semantics) are separated (by definition) from the process dynamics that will take place as the system unfolds and expands. What the (evolutionary) system gains with the choice of a parallel, concurrent, distributed architecture is a dramatic reduction in inputs, which in turn implies a comparable reduction in the system’s resources invested in the flow and storage of information. Evolutionary systems are this way because they could not be viable or sustainable in any other way.

Second, the other major characteristic, aimed also at reducing the excessively fast relational/combinatorial growth in the system, is the space and time compartmentalization that occurs in each construction of a new level within a hierarchical evolutionary system.29 That task is implemented via interactions of the system’s components, which eventually render a minimal set of values capable of generating that particular level; that set is then used to build the components of the next level up (this is a quantitative change). Also, the inverse of the cardinality of the state space is the coupling constant for that level, indicating the strength of the links among components in the level.30

Finally, the role played by the third and last principle, the principle of conservation of information (PCI), resides in making sure that, in the process of “absorbing” functionality into structure (or syntax into semantics) to render a qualitatively new ES, the new system does not lose any prior relevant information whose absence would make it vulnerable to its environment and, therefore, not sustainable.

As is hopefully clear by now, the three principles proposed here are aimed at making manageable the huge amounts of information that U, and the systems that populate U, have to face and endure. We know of the success in fulfilling that challenge by their mere existence; we just need to know how it is done.

The need for a mathematical language based on indistinguishables?

I believe it is important to talk about formalism within the field of Complexity. In the course of this paper we have emphasized the importance of indistinguishables with respect to the difficult task of reducing the initial number of relations among system components.

To have a collection of elements with cardinality N and ordinality 1 raises questions about how well set theory can formalize complex systems.

Evolutionary systems, and complexity in general, are largely about constructing “order” from non-ordered collections. The physical counterpart to that is a process that starts, necessarily, with the lowest possible entropy and ends with the smallest possible increase in entropy.

Also, in the formal setting, we need to be able to define hierarchical structures from the initial collection of indistinguishables. Statistical Mechanics allows us to compare the micro and the macro, but we do not have much control over specific components of the collection beyond that basic two-level distinction. Algebraic methods and combinatorics should give us the ability to better control the elements of the collection. They will allow us to build hierarchical structures with more than two levels.31
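One standard combinatorial illustration of the micro/macro contrast mentioned here (my example, not the paper’s): if N components can each occupy one of k states, treating them as distinguishable gives k^N configurations, while treating them as indistinguishable leaves only the occupation numbers of the k states, of which there are C(N+k-1, k-1); the gap between the two counts is one way to see why a collection of indistinguishables is the lowest-entropy starting point.

```python
from math import comb

def distinguishable_states(n, k):
    """Micro-count: each of the n components individually assigned a state."""
    return k ** n

def indistinguishable_states(n, k):
    """Macro-count: only how many components sit in each of the k states."""
    return comb(n + k - 1, k - 1)

n, k = 100, 5
print(distinguishable_states(n, k))    # 5**100, roughly 7.9e69
print(indistinguishable_states(n, k))  # C(104, 4) = 4,598,126
```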

In more general terms, towards the end of her book on complexity,25 Melanie Mitchell talks about the need for a new formalism:

We need a new vocabulary that not only captures the conceptual building blocks of self-organization and emergence but that can also describe how these come to encompass what we call functionality, purpose, or meaning.25

She also quotes a well-known scientist in the field of complexity, the mathematician Steven Strogatz, on the issue of formalization:

I think we may be missing the conceptual equivalent of calculus, a way of seeing the consequences of myriad interactions that define a complex system.25

Finally, Mitchell mentions how Newton invented calculus in order to advance his ideas on the science of dynamics. She asks herself:

Can we similarly invent the calculus of complexity -a mathematical language that captures the origins and dynamics of self-organization, emergent behavior and adaptation in complex systems?25

Conclusions

It is the author’s hope that the subject of “cosmic evolution,” where the emphasis is on the connecting thread that links all natural evolving systems, will become a more prominent and active field of research in Complexity.

The ESF proposed here could be a starting point for looking for commonalities in each and every field involved in said cosmic evolution. If we were to use this or another, more appropriate, frame in specific fields, we might be able to develop a common language where comparisons and discussions between fields could take place in a more natural and fruitful way.

Footnotes

1 Ellis, 2004; Ellis et al, 2013; Goodwin, 2006; Lineweaver et al, 2013; Meyer-Ortmanns, 2011; Salthe, 2010; Salthe & Fuhrman, 2005; Smolin, 1997, 2013; Vidal, 2008.

2 Smolin’s quote comes from an article that Peirce wrote in The Monist about the “architecture of theories.” The Monist, vol. 1, No. 2 (January 1891), p. 161.

3 Particulate, Galactic, Stellar, Planetary, Chemical, Biological and Cultural epochs.

5 He was director of the Division of Information Science and Technology at the National Science Foundation in 1979-1981 and later founder of Thinking Machines Corp.

2 We will state our understanding of this issue later in the paper.

6 These contexts play a sort of environmental role (cultural environment), along with their physical environment, in their respective social systems.

7 “evolution” was to be concerned with proto-systemic aspects; “development” with intra-systemic phenomena.9,10

8 I owe this broadening of the scope of the framework, in part, to being a member of a UBC Faculty group on “general systems” and, also, to a book by the cosmologist Hubert Reeves, Atoms of Silence,26 which made me realize that the Universe is a historical entity and, therefore, is one among many evolutionary systems: it is the first (from which all others have stemmed).

9 We will address, briefly, this issue towards the end of the paper.

10 This was not well understood or generally accepted in Social Sciences in the early 1980’s.

11 “In physics a coupling constant, usually denoted g, is a number that determines the strength of an interaction […] A coupling constant plays an important role in dynamics. For example, one often sets up hierarchies of approximation based on the importance of various coupling constants” (http://en.wikipedia.org/wiki/Coupling_constant).

12 For more details on this principle, see the articles cited.10,21

13 “[…] a large ensemble of particles in the absence of gravity will tend to disperse, yet in the presence of gravity will tend to clump; either way, the net entropy increases.”3

14 In mathematics a combinatorial explosion describes the effect of functions that grow very rapidly as a result of combinatorial considerations (http://en.wikipedia.org/wiki/Combinatorial_explosion).

15 Indistinguishability will be dealt with later.

16 For more detailed analysis on this issue see Ashby, 1972; Alvarez de Lorenzana, 1998.

17 This is a validation of indistinguishability.

18 It could be attractors. It could also be a group of components locked-in because they interact efficiently with the environment and, therefore, become a sub-structure that will work as a unit in the next level up.

19 There is a big difference between having the environment as a given background and introducing the environment as part of the internal structure of the system components through “global properties.” In the former there is no real link between the two beyond their mutual interactions; in the latter there is an essential connection between them: the system is defined as a set of constraints on the environment (in other words, it is a kind of refinement on the environment).

20 In both operations the underlying set is the real numbers, ℜ.

21 I have great admiration and have benefited greatly from studying Rosen’s work. It is this one issue that I think he overlooked.

22 See Alvarez de Lorenzana, 1998.

23 Before the relational web begins to be constructed, there is the state of indistinguishability, where the system comes into being. In this state there are no relations among components yet.

24 “By self-organization we mean the ability of an open system to simulate its environment or even parts (lower hierarchical levels) of itself”.15

25 We could say, following Ashby, that a system of N components starts with N unary relations and ends up with one N-ary relation. In other words, the system, in its process of development, starts with only individual relations among components (lowest order relations), and ends with one highest-order relation where all components are related. This is, in my view, an expression of finality without overly teleological connotations.

26 What I have been calling Evolutionary systems.

27 “The growth of body parts at different rates, resulting in a change of body proportions”.27

28 Artifact: An object made by a human being, typically an item of cultural or historic interest (NOAD, 2010).

29 “What type of compromise can be worked out between the seemingly irreconcilable tendencies of stability and complexity? An answer seems to be: ‘compartmentalization’ or ‘hierarchical decomposition’”.15

30 There is a considerable body of work, accumulated over more than thirty years, concerning the formalization of the so called “combinatorial hierarchy.” The official launch of this work was a paper published in 1978 in the International Journal of Theoretical Physics.28 See also Pierre Noyes book.29

31 According to Smolin, Einstein was struggling, towards the end of his life, with how to describe physics without things moving in a fixed space-time. What he came up with was that: “Fundamental physics must be discrete, and its description must be in terms of algebra and combinatorics”.30