Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

V. Systems Evolution: A 21st Century Genesis Synthesis

D. An Evolutionary Intelligence Arises

Power, Daniel, et al. What can Ecosystems Learn? Expanding Evolutionary Ecology with Learning Theory. arXiv:1506.06374. This posting is a follow-up to a 2014 paper, The Evolution of Phenotypic Correlations and “Developmental Memory” (Evolution, 68/4, Richard Watson), that introduced affinities between life’s ascent and cerebral cognition. The interdisciplinary authors, from the UK and across Europe and including Watson and Eors Szathmary, continue and expand upon a novel perception of life’s developmental emergence as a dynamic ecological coherence via a connectionist cognition of self-organizing neural networks. By this “deep homology,” prior evolutionary and ecosystem “memories” accrue, with which new experience can then be accommodated. In a synthesis of mind and matter, this essential cerebration goes on prior to any Darwinian selection. A published, peer-reviewed edition appears in Biology Direct (10/69, 2015).

Understanding how the structure of community interactions is modified by coevolution is vital for understanding system responses to change at all scales. However, in absence of a group selection process, collective community behaviours cannot be organised or adapted in a Darwinian sense. An open question thus persists: are there alternative organising principles that enable us to understand how coevolution of component species creates complex collective behaviours exhibited at the community level? We address this issue using principles from connectionist learning, a discipline with well-developed theories of emergent behaviours in simple networks. We identify conditions where selection on ecological interactions is equivalent to 'unsupervised learning' (a simple type of connectionist learning) and observe that this enables communities to self organize without community-level selection. Despite not being a Darwinian unit, ecological communities can behave like connectionist learning systems, creating internal organisation that habituates to past environmental conditions and actively recalling those conditions. (Abstract)

Conclusions This work highlights the potential of connectionist theory to expand our understanding of evo-eco dynamics and collective ecological behaviours. Within this framework we find that, despite not being a Darwinian unit, ecological communities can behave like connectionist learning systems, creating internal conditions that habituate to past environmental conditions and actively recalling those conditions.

In this article we investigate how systems above the Darwinian levels of selection may evolve collective behaviours, and observe a deep homology with emergent properties well understood in connectionist models of machine learning. (2) The main contribution of this paper is the equivalence of individual natural selection acting on inter-species interactions with a simple associative learning rule such as Hebbian learning. Thus ecological networks evolve like neural networks learn. (8) The assembly rules of a community can self-organise to recreate past environmental states. (16) Ecological communities can exhibit organised collective behaviours. Under certain conditions, memories of past ecological states can be stored in a distributed way in the organization of evolved ecological relationships. (19)
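The central equivalence quoted above — natural selection on inter-species interactions behaving like Hebbian learning, so that "ecological networks evolve like neural networks learn" — can be illustrated with a minimal sketch. The five-species community, the learning rate, and the chosen state are hypothetical toy values, not the authors' model; only the Hebbian update rule itself is the technique the paper names.

```python
import numpy as np

n = 5                      # hypothetical community of 5 species
w = np.zeros((n, n))       # inter-species interaction strengths
eta = 0.1                  # small incremental change per "generation"

# One past ecological state: +1 = species abundant, -1 = suppressed.
state = np.array([1, -1, 1, 1, -1])

# Hebbian rule: interactions between co-varying species strengthen.
# Under the paper's equivalence, selection on pairwise interactions
# plays the same role as this weight update in a neural network.
for _ in range(50):
    w += eta * np.outer(state, state)
    np.fill_diagonal(w, 0)     # no self-interaction terms

# "Recall": start from a perturbed version of the stored state and let
# the evolved interaction network pull it back (an ecological memory).
cue = state.copy()
cue[0] = -cue[0]               # perturb one species
recalled = np.sign(w @ cue)
print(recalled)                # recovers the stored state
```

The design point mirrors the abstract: no community-level selection appears anywhere in the sketch; the "memory" emerges purely from incremental changes to pairwise interactions.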

Sole, Ricard, et al. Liquid Brains, Solid Brains: How Distributed Cognitive Architectures Process Information. Philosophical Transactions of the Royal Society B. 374/20190040, 2019. With Melanie Moses and Stephanie Forrest, an introduction to papers from a December 2017 Santa Fe Institute seminar with this title, which represents how many domains across life’s evolution, from universe to human, can actually be perceived as some manner of neural-like intelligent process. We note, for example, Metabolic Basis of Brain-like Electrical Signalling in Bacterial Communities, Plant Behavior in Response to the Environment (Duran-Nebreda & Bassel herein), Statistical Physics of Liquid Brains (Pinero & Sole), The Compositional Stance in Biology, and A Brief History of Liquid Computers. A tacit theme of this work is a further major evolutionary transition in individuality.

Cognitive networks have evolved a broad range of solutions to the problem of gathering, storing and responding to information. Some of these networks are describable as static sets of neurons linked in an adaptive web of connections. These are ‘solid’ networks, with a well-defined and physically persistent architecture. Other systems are formed by sets of agents that exchange, store and process information but without persistent connections or move relative to each other in physical space. We refer to these networks that lack stable connections and static elements as ‘liquid’ brains, a category that includes ant and termite colonies, immune systems and some microbiomes and slime moulds. What are the key differences between solid and liquid brains, particularly in their cognitive potential, ability to solve particular problems and environments, and information-processing strategies? To answer this question requires a new, integrative framework. (Abstract)

Suchow, Jordan, et al. Evolution in Mind: Evolutionary Dynamics, Cognitive Processes, and Bayesian Inference. Trends in Cognitive Sciences. 21/7, 2017. UC Berkeley psychologists, including Thomas Griffiths, find a robust theoretical affinity between organismic complexities and their relative knowledge content by way of iterative algorithmic optimizations. Over life’s evolution, creatures become smarter, learn more, and gain better memories, a course that culminates, it might seem or even be aimed at, in our collaborative retro-recognition.

Evolutionary theory describes the dynamics of population change in settings affected by reproduction, selection, mutation, and drift. In the context of human cognition, evolutionary theory is most often invoked to explain the origins of capacities such as language, metacognition, and spatial reasoning, framing them as functional adaptations to an ancestral environment. However, evolutionary theory is useful for understanding the mind in a second way: as a mathematical framework for describing evolving populations of thoughts, ideas, and memories within a single mind. In fact, deep correspondences exist between the mathematics of evolution and of learning, with perhaps the deepest being an equivalence between certain evolutionary dynamics and Bayesian inference. This equivalence permits reinterpretation of evolutionary processes as algorithms for Bayesian inference and has relevance for understanding diverse cognitive capacities, including memory and creativity. (Abstract)

Correspondences between the mathematics of evolution and learning permit reinterpretation of evolutionary processes as algorithms for Bayesian inference. Cognitive processes such as memory maintenance and creativity can be formally modelled using the framework of evolutionary dynamics. (Trends)
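The "equivalence between certain evolutionary dynamics and Bayesian inference" claimed above is algebraic: one step of discrete replicator dynamics, where types grow in proportion to relative fitness, has exactly the form of Bayes' rule, with fitness in the role of likelihood. A minimal numerical check, using hypothetical toy frequencies and fitnesses:

```python
import numpy as np

# Frequencies of three types, read equally as a prior over hypotheses.
p = np.array([0.5, 0.25, 0.25])
# Fitnesses of the types, read equally as likelihoods P(data | h).
f = np.array([2.0, 4.0, 2.0])

# Replicator step: p_i' = p_i * f_i / (mean population fitness).
replicator = p * f / np.sum(p * f)

# Bayes' rule: posterior(h) = prior(h) * likelihood(h) / evidence.
bayes = p * f / np.dot(p, f)

assert np.allclose(replicator, bayes)
print(replicator)   # both updates give [0.4, 0.4, 0.2]
```

The two expressions are term-by-term identical; the "deep correspondence" the abstract cites is that this identity persists across many generations of selection and many rounds of Bayesian updating.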

Valiant, Leslie. Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World. New York: Basic Books, 2013. Reported more in Natural Algorithms, and cited in the lead Kate Douglas article, a Harvard University mathematician broaches an innovative view of life’s advance as due to interactions between creatures and their environment. This novel evolution is a learning process that proceeds by way of algorithmic computations, dubbed “ecorithms” in this sense as they try to reach a sufficient optimum. In so doing, Valiant joins current Bayesian, rationality (Okasha), Markov, and genetic approaches that not only view Darwinian evolution as a computational learning mechanism, but seem to imply the entire universe is some optimization or maximization process.

Watson, Richard and Eors Szathmary. How Can Evolution Learn? Trends in Ecology & Evolution. 31/2, 2016. A companion survey to the Evolutionary Connectionism paper by Watson, et al, cited herein, that adds exposition to this well-researched conceptual identity between a Darwinian interpretation and complex neural systems phenomena. By this view, an emergent evolution can be seen to proceed in a similar way to how brains gain and employ knowledge. In both cases, new experience (mutation, environmental changes, better information) is referred to and compared with some manner of stored memory to reach an appropriate response, be it organic form or group behavior. Life’s developmental course (evo-devo) is thus akin to, and a mode of, cerebral comprehension. As the glossary quotes convey, the major transitions scale can be seen to affirm an ascendance of individual self-realization.

These contributions, published in primary journals, augur a novel evolutionary synthesis based on algorithmic and connectome dynamics unknown much earlier. But further consideration is in order. The phrase “Darwinian Machine” is still used as an overall description, and the issue of what kind of conducive cosmic milieu must be in place for life to occur, arise, develop, learn, and attain self-individuation is hardly engaged. As the Algorithm definition reflects, while a program-like source is entered, it remains an external agency attributed to Darwinian variation and selection. Other members of this extended group, such as Erick Chastain, et al in Algorithms, Games, and Evolution herein, do allow a prior, internal code at work, self-organizing before subsequent influences. But a historic revision which joins several other 2010s advances so as to confirm Charles Darwin’s actual “universal gestation” remains to be achieved.

The theory of evolution links random variation and selection to incremental adaptation. In a different intellectual domain, learning theory links incremental adaptation (e.g., from positive and/or negative reinforcement) to intelligent behaviour. Specifically, learning theory explains how incremental adaptation can acquire knowledge from past experience and use it to direct future behaviours toward favourable outcomes. Until recently such cognitive learning seemed irrelevant to the ‘uninformed’ process of evolution. In our opinion, however, new results formally linking evolutionary processes to the principles of learning might provide solutions to several evolutionary puzzles – the evolution of evolvability, the evolution of ecological organisation, and evolutionary transitions in individuality. If so, the ability for evolution to learn might explain how it produces such apparently intelligent designs. (Abstract)

Trends: A simple analogy between learning and evolution is common and intuitive. But recently, work demonstrating a deeper unification has been expanding rapidly. Formal equivalences have been shown between learning and evolution in several different scenarios, including: selection in asexual and sexual populations with Bayesian learning, the evolution of genotype – phenotype maps with correlation learning, evolving gene regulation networks with neural network learning, and the evolution of ecological relationships with distributed memory models. This unification suggests that evolution can learn in more sophisticated ways than previously realised and offers new theoretical approaches to tackling evolutionary puzzles such as the evolution of evolvability, the evolution of ecological organisations, and the evolution of Darwinian individuality.

Algorithm: a self-contained step-by-step set of instructions describing a process, mechanism, or function. An algorithmic description of a mechanism is sufficiently abstract to be ‘multiply realisable’ – i.e., it may be instantiated or implemented in different physical substrates (e.g., biological, computational, mechanical) whilst producing the same results. For example, Darwin's account of evolutionary adaptation (via repeated applications of variation, selection, and inheritance) is fundamentally algorithmic and hence encompasses many possible instantiations.

Evolutionary connectionism: a developing theory for the evolution of biological organisation based on the hypothesis that the positive feedback between network topology and behaviour, well understood in neural network models (e.g., Hebbian learning), is common to the evolution of developmental, ecological, and reproductive organizations.

Major evolutionary transitions: evolutionary innovations that have changed the evolutionary unit (the level of biological organisation that exhibits heritable variation in reproductive success): from self-replicating molecules, to chromosomes, to simple cells, to multiorganelle eukaryote cells, to multicellular organisms, to social groups.

Evo-Ego: The Evolution of Individuality and Deep Correlation Learning: In major evolutionary transitions, entities that were capable of independent replication before the transition can replicate only as part of a larger whole after the transition. These transitions in individuality involve the evolution of new mechanisms of inheritance or reproductive codispersal (e.g., vertical genetic transmission, compartmentalisation, reproductive linkage) that create new evolutionary units.

Watson, Richard and Eors Szathmary. How Can Evolution Learn? – A Reply to Responses. Trends in Ecology & Evolution. Online October, 2015. The authors of an article in the journal (31/2, search) with this title review comments, also online October, by Marion Blute, Adi Livnat and Christos Papadimitriou, Ferenc Jordan, and Indre Zliobaite and Nils Stenseth. As evidenced by the New Scientist cover story that leads this section, their innovative view, with colleagues, of a computational, neural network, deep learning evolutionary essence seems to be gaining steady credence. We quote a good capsule from the paper by Zliobaite (University of Helsinki, geoinformatics) and Stenseth (University of Oslo, evolutionary biology).

Watson and Szathmáry have presented an intriguing idea linking evolution through natural selection to algorithmic learning. They argue that algorithmic learning can provide models for evolutionary processes; specifically, that evolution of evolvability is similar to generalization in supervised learning (neural networks as an example), that evolution of ecological organization is analogous to unsupervised learning (clustering), and that evolutionary transitions in individuality are comparable to specific forms of deep learning, namely learning of multilayered model structures.

Watson, Richard, et al. Evolutionary Connectionism: Algorithmic Principles Underlying the Evolution of Biological Organisation in Evo-Devo, Evo-Eco and Evolutionary Transitions. Evolutionary Biology. Online December, 2015. Since 2012 and earlier, a creative team effort based in Watson’s University of Southampton, but widely including Chrisantha Fernando, Eors Szathmary, Gunter Wagner, Kostas Kouvaris, Erick Chastain, (noted herein) and many others, has steadily refined the title concept. In this computational age, a robust correspondence between nuanced Darwinian selection and neural net cognitive learning is increasingly evident. A multi-scale emergence of organisms and associations results, distinguished by iterative, nested correlations between development, environments, retained experience, and appropriate responses. This paper, and its companion How Can Evolution Learn? by Watson and Szathmary, make a strong case for a radical 21st century elaborative synthesis. As the third quote paragraph alludes, a basis for an oriented direction beyond tinker and meander is at last being elucidated.

The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation.

We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. We use the term “evolutionary connectionism” to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary systems and modify the adaptive capabilities of natural selection over time. (Abstract excerpts)

The same underlying problem of reciprocal causation is manifested differently in each domain. Whilst it is clear that the products of the Darwinian machine can modify the parameters of its own operation, it is not clear in what way it changes itself and, in particular, whether it is possible that the Darwinian machine changes systematically ‘for the better’, i.e. in a way that facilitates rather than frustrates subsequent adaptation. This problem arises in domain-specific versions: Evo-devo – implications for modifying variability, and long-term evolvability; Evo-eco: modifying the selective context, and ecosystem organization (niches); and Evo-ego: implications for modifying heritability, and the evolution of new evolutionary units. (4-5)

Together the evolution of developmental, ecological and reproductive organisations modifies the mechanisms of variation, selection and inheritance that drive evolution by natural selection. The evolutionary connectionism framework sheds light on how the Darwinian Machine can thereby be rescaled from one level of biological organisation to another. The results thus far demonstrate that connectionist learning principles provide a productive methodological approach to important biological questions and offer numerous new insights that expand our understanding of evolutionary processes. Regardless of how the exact alignment between the evolutionary and learning models discussed in this paper develops with future research, the algorithmic territory covered by learning algorithms is, we argue, the right conceptual territory for developing our understanding of how evolutionary processes change over evolutionary time. (Conclusion)

Watson, Richard, et al. The Evolution of Phenotypic Correlations and “Developmental Memory.” Evolution. 68/4, 2014. As evolutionary theory proceeds to expand and merge with the complexity and computational sciences, Watson, with Gunter Wagner, Michaela Pavlicev, Dan Weinreich, and Rob Mills, proposes that neural networks can be an exemplary, informative model. In this significant paper, life’s development by way of natural selection can be better appreciated as a long iterative learning experience. Just as a brain compares new inputs to prior knowledge, so life evolves by matching environmental impacts against prior “genetic distributed associative memories.” A novel view of evolutionary emergence is thus entered, beyond random selection, as another sense that something embryonic and cerebral is going on. A prior paper, Global Adaptation in Networks of Selfish Components by Watson, et al (Artificial Life, 17/2, 2011), treats neural nets as complex adaptive systems.

Development introduces structured correlations among traits that may constrain or bias the distribution of phenotypes produced. Moreover, when suitable heritable variation exists, natural selection may alter such constraints and correlations, affecting the phenotypic variation available to subsequent selection. However, exactly how the distribution of phenotypes produced by complex developmental systems can be shaped by past selective environments is poorly understood. Here we investigate the evolution of a network of recurrent nonlinear ontogenetic interactions, such as a gene regulation network, in various selective scenarios. We find that evolved networks of this type can exhibit several phenomena that are familiar in cognitive learning systems. These include formation of a distributed associative memory that can “store” and “recall” multiple phenotypes that have been selected in the past, recreate complete adult phenotypic patterns accurately from partial or corrupted embryonic phenotypes, and “generalize” (by exploiting evolved developmental modules) to produce new combinations of phenotypic features. We show that these surprising behaviors follow from an equivalence between the action of natural selection on phenotypic correlations and associative learning, well-understood in the context of neural networks. This helps to explain how development facilitates the evolution of high-fitness phenotypes and how this ability changes over evolutionary time. (Abstract)

That is, the direction of selective pressures on individual relational loci described above has the same relationship with a selective environment that the direction of changes to synaptic connections in a learning neural network has with a training pattern. In other words, gene networks evolve like neural networks learn. Bringing together these two observations with this new insight explains the memory behaviors we observe in an evolved network of recurrent nonlinear interactions. That is, a gene network can evolve regulatory interactions that “internalize” a model of past selective environments in just the same way that a learning neural network can store, recall, recognize, and generalize a set of training patterns. (1126)

In this article, we have demonstrated a formal equivalence between the direction of selection on phenotypic correlations and associative learning mechanisms. In the context of neural network research and connectionist models of memory and learning, simple associative learning with the ability to produce an associative memory, to store and recall multiple patterns, categorize patterns from partial or corrupted stimuli, and produce generalized patterns from a set of structurally similar training patterns has been well studied. The insight that the selective pressures on developmental correlations are equivalent to associative learning thus provides the opportunity to utilize well-established theoretical and conceptual frameworks from associative learning theory to identify organizational principles involved in the evolution of development. (1135)

The fact that natural selection can alter the distribution of phenotypic variation, and reflexively, that the distribution of phenotypic variation can alter the selective pressures on subsequent evolutionary change, is an example of “reciprocal causation” in evolution. Conceiving evolution as a learning process, rather than a fixed trial and error process, helps to explain how evolution can alter its own mechanisms in this reciprocal sense. Specifically, the equivalence of the selective pressures on ontogenetic interactions with associative learning mechanisms demonstrated here illustrates how evolution can respond to future selection in a manner that is “informed” by past selection in exactly the same sense that cognitive learning systems are informed by past experience. (1136)
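The "distributed associative memory" behaviors the quoted passages describe — storing multiple past-selected phenotypes and recreating a complete adult pattern from a partial embryonic one — are the well-studied behaviors of Hopfield-style networks. A minimal sketch, with hypothetical toy phenotype patterns (the eight-trait vectors and settling loop are illustrative, not the paper's simulations):

```python
import numpy as np

# Two "adult phenotypes" selected in past environments,
# encoded as +/-1 trait vectors (hypothetical toy patterns).
phenotypes = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],   # past phenotype A
    [ 1,  1,  1,  1, -1, -1, -1, -1],   # past phenotype B
])

# Hebbian storage over ontogenetic interactions: the rule the authors
# show is equivalent to selection acting on phenotypic correlations.
w = sum(np.outer(p, p) for p in phenotypes).astype(float)
np.fill_diagonal(w, 0)

# "Embryonic" cue: first half of phenotype A known, rest unknown (0).
state = phenotypes[0].astype(float)
state[4:] = 0

# Recurrent developmental dynamics settle onto a stored adult form,
# recreating the complete pattern from the partial one.
for _ in range(5):
    state = np.sign(w @ state)

print(state)    # completes to phenotype A
```

With the two stored patterns orthogonal, the recalled state converges to phenotype A in a single update and then stays fixed, illustrating "recall from partial or corrupted embryonic phenotypes" in its simplest connectionist form.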

Zhu, Jiafan, et al. In the Light of Deep Coalescence: Revisiting Trees Within Networks. arXiv:1606.07350. Rice University computer scientists show how evolutionary histories can take on reticulate phylogenetic topologies, which are more realistic than simple branching lineages.
