Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

V. Life's Evolutionary Development Organizes Itself: A 2020s Genesis Synthesis

D. Cognitive Intelligence and Relative Knowledge as a Central Advance

Livnat, Adi. Simplification, Innateness, and the Absorption of Meaning from Context. Evolutionary Biology. Online March, 2017. Reviewed more in Systems Evolution, this entry finds the University of Haifa theorist continuing his project (search) to achieve a better explanation of life's evolution by way of algorithmic computations, innate network propensities, genome-language affinities, neural net deep learning, and more.

Ng, Eden Tian Hwa and Akira Kinjo. Computational Modelling of Plasticity-Led Evolution. arXiv:2208.00649. In our late day of a global knowsphere whose composite EarthKinder contents are commonly accessible, Universiti Brunei Darussalam (Borneo) biomathematicians advance and finesse alternative views of how life might have arisen along with an especial educative capacity. By this vista, into the 2020s a natural, necessary affinity between genomic and cerebral networks and functions is becoming newly apparent.

In a plasticity-led evolution, a change in the environment induces novel traits via phenotypic variability, after which such traits are genetically accommodated. While this hypothesis is served by experimental findings, here we propose computational methods to gain insight into underlying mechanisms. These models include the developmental process and gene-environment interactions, along with genetics and selection. Our results apply to those GRNs which consolidate the criteria of plasticity-led evolution. Since gene regulatory networks are mathematically equivalent to artificial recurrent neural networks, we discuss their analogies and discrepancies to help understand the mechanisms underlying plasticity-led evolution.

We remark that the Wagner model (Does Evolutionary Plasticity Evolve? Evolution. 50/3, 1996) is mathematically equivalent to a recurrent neural network (RNN). Numerous authors have also pointed out analogies between evolution and learning. RNNs process input data to give predictions as outputs. In development, GRNs process environmental cues to express phenotypes as outputs. However, the learning process itself differs between RNNs and GRNs. On the one hand, RNNs are often trained by changing their network parameters in order to minimize the prediction error, which bears similarity to Lamarckian evolution. (9)
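The Wagner-model equivalence quoted above can be made concrete in a few lines. In that model a phenotype is the fixed point reached by iterating s -> sign(W s), which is exactly the update rule of a discrete-time recurrent neural network. A minimal sketch, with a randomly drawn regulatory matrix standing in for an evolved one:

```python
import numpy as np

def develop(W, s0, max_steps=100):
    """Iterate the Wagner-model map s -> sign(W s) until a fixed point
    (the adult phenotype) is reached; this is the same update a
    discrete-time recurrent neural network performs."""
    s = np.sign(s0)
    for _ in range(max_steps):
        s_next = np.sign(W @ s)
        if np.array_equal(s_next, s):
            return s_next  # developmentally stable phenotype
        s = s_next
    return None  # no equilibrium reached: developmentally unstable

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 5))        # hypothetical regulatory interactions
s0 = np.sign(rng.normal(size=5))   # embryonic gene expression state
print(develop(W, s0))
```

Evolving W under selection for a target phenotype, versus training an RNN's weights by gradient descent, is the Lamarckian-versus-Darwinian contrast the authors draw.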

Oudeyer, Pierre-Yves and Linda Smith. How Evolution May Work Through Curiosity-Driven Developmental Process. Topics in Cognitive Science. 8/2, 2016. The authors are a French Institute for Research in Computer Science (INRIA) director and the Indiana University neuropsychologist who was a 1990s founder, with the late Esther Thelen, of developmental systems theories for infant and child maturation. We include the contribution in this new section as another example of deep parallels between life's long course to ourselves and intrinsic natural propensities to advance in cognitive learning capabilities. See Oudeyer's publication page for more articles on this self-starter view.

Infants' own activities create and actively select their learning experiences. Here we review recent models of embodied information seeking and curiosity-driven learning and show that these mechanisms have deep implications for development and evolution. We discuss how these mechanisms yield self-organized epigenesis with emergent ordered behavioral and cognitive developmental stages. We describe a robotic experiment that explored the hypothesis that progress in learning, in and for itself, generates intrinsic rewards: The robot learners probabilistically selected experiences according to their potential for reducing uncertainty. In these experiments, curiosity-driven learning led the robot learner to successively discover object affordances and vocal interaction with its peers. We explain how a learning curriculum adapted to the current constraints of the learning system automatically formed, constraining learning and shaping the developmental trajectory. The observed trajectories in the robot experiment share many properties with those in infant development, including a mixture of regularities and diversities in the developmental patterns. Finally, we argue that such emergent developmental structures can guide and constrain evolution, in particular with regard to the origins of language. (Abstract)
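The learning-progress mechanism described in the abstract can be sketched as a toy simulation. The activities, learnability rates, and exploration constant below are hypothetical illustrations, not the authors' robot setup: the agent samples activities in proportion to its recent error reduction, so unlearnable tasks are soon abandoned while learnable ones are practiced until mastered:

```python
import random

class CuriousLearner:
    """Toy sketch of curiosity-driven learning: the agent prefers activities
    where its prediction error has recently dropped the most, treating that
    learning progress as an intrinsic reward."""
    def __init__(self, learn_rates):
        self.errors = [1.0] * len(learn_rates)    # current prediction error
        self.rates = learn_rates                  # how learnable each activity is
        self.progress = [0.0] * len(learn_rates)  # recent error reduction

    def step(self):
        # sample an activity in proportion to progress, plus a small
        # exploration bonus so nothing is ignored forever
        weights = [p + 0.01 for p in self.progress]
        i = random.choices(range(len(weights)), weights=weights)[0]
        old = self.errors[i]
        self.errors[i] *= (1 - self.rates[i])     # practicing reduces error
        self.progress[i] = old - self.errors[i]   # progress = intrinsic reward
        return i

random.seed(1)
agent = CuriousLearner([0.0, 0.05, 0.3])  # unlearnable, slow, fast activities
choices = [agent.step() for _ in range(500)]
print(choices.count(0), choices.count(1), choices.count(2))
```

The unlearnable activity yields no progress and is visited only through the exploration bonus; the resulting shift of practice from easy to harder but learnable activities is the self-forming curriculum the authors describe.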

Pinero, Jordi and Ricard Sole. Statistical Physics of Liquid Brains. Philosophical Transactions of the Royal Society B. 374/20180376, 2018. In a special Liquid Brains, Solid Brains issue (search Forrest), Institut de Biologia Evolutiva, Universitat Pompeu Fabra, Barcelona theorists consider a universal recurrence of the same generic complex network system across natural and social domains. While akin to genomes and ecosystems, an apt model is cerebral cognition, broadly conceived, by way of agent-like neurons and synaptic links in multiplex arrays. A prime attribute is a cross-conveyance of intelligence and information, aka biological computation, which is how animal groupings from invertebrates to mammals to people achieve a collective decision-making.

Liquid neural networks (or ‘liquid brains’) are a widespread class of cognitive living networks characterized by a common feature: the agents move in space. Thus, no fixed, long-term agent-agent connections are maintained, in contrast with standard neural systems. How is this class of systems capable of displaying cognitive abilities, from learning to decision-making? In this paper, the collective dynamics, memory and learning properties of liquid brains are explored under the perspective of statistical physics. We review the generic properties of three large classes of systems, namely: standard neural networks (solid brains), ant colonies and the immune system. It is shown that, despite their intrinsic differences, these systems share key properties with neural systems in terms of formal descriptions, but depart in other ways. (Abstract excerpt)

Powell, Russell, et al. Convergent Minds: The Evolution of Cognitive Complexity in Nature. Interface Focus. 7/3, 2017. Powell, Boston University philosophy of science, Irina Mikhalevich, Humboldt University cognitive philosophy, Corina Logan, Cambridge University zoology, and Nicola Clayton, Cambridge University animal psychology, introduce a special issue on this title subject. The paper opens with a recount of Stephen Jay Gould's 1990 claim that because, as he held, no innate drive or direction exists, if the tape of life's course were played over, sapient human beings would not appear. It then goes on to note that many research findings into the 2010s strongly attest to the opposite conclusion. Typical entries are Evolutionary Convergence and Biologically Embodied Cognition by Fred Keijzer, The Foundations of Plant Intelligence by Anthony Trewavas, and Is Behavioral Flexibility Evidence of Cognitive Complexity? by Irina Mikhalevich, et al. As this site documents (e.g. Conway Morris, McGhee), across genomic and metabolic phases to especially cerebral qualities, a constant repetition of forms and capabilities does occur, which traces a constrained emergence. While a prior bias against any path or axis causes a hesitancy, since acceptance would require a new theory, these contributions and more suggest that life and mind appear to know where they are going, and how to get there.

Power, Daniel, et al. What can Ecosystems Learn? Expanding Evolutionary Ecology with Learning Theory. arXiv:1506.06374. This posting is a follow-up to a 2014 paper, The Evolution of Phenotypic Correlations and “Developmental Memory,” in Evolution (68/4, Richard Watson), that introduced affinities between life's ascent and cerebral cognition. The interdisciplinary authors from the UK and across Europe, including Watson and Eors Szathmary, continue and expand upon a novel perception of life's developmental emergence as a dynamic ecological coherence via a connectionist cognition of self-organizing neural networks. By this “deep homology,” prior evolutionary and ecosystem “memories” accrue with which new experience can then be accommodated. In a synthesis of mind and matter, this essential cerebration goes on prior to any Darwinian selection. A published, peer-reviewed edition appears in Biology Direct (10/69, 2015).

Understanding how the structure of community interactions is modified by coevolution is vital for understanding system responses to change at all scales. However, in absence of a group selection process, collective community behaviours cannot be organised or adapted in a Darwinian sense. An open question thus persists: are there alternative organising principles that enable us to understand how coevolution of component species creates complex collective behaviours exhibited at the community level? We address this issue using principles from connectionist learning, a discipline with well-developed theories of emergent behaviours in simple networks. We identify conditions where selection on ecological interactions is equivalent to 'unsupervised learning' (a simple type of connectionist learning) and observe that this enables communities to self organize without community-level selection. Despite not being a Darwinian unit, ecological communities can behave like connectionist learning systems, creating internal organisation that habituates to past environmental conditions and actively recalling those conditions. (Abstract)

Conclusions: This work highlights the potential of connectionist theory to expand our understanding of evo-eco dynamics and collective ecological behaviours. Within this framework we find that, despite not being a Darwinian unit, ecological communities can behave like connectionist learning systems, creating internal conditions that habituate to past environmental conditions and actively recalling those conditions.

In this article we investigate how systems above the Darwinian levels of selection may evolve collective behaviours, and observe a deep homology with emergent properties well understood in connectionist models of machine learning. (2) The main contribution of this paper is the equivalence of individual natural selection acting on inter-species interactions with a simple associative learning rule such as Hebbian learning. Thus ecological networks evolve like neural networks learn. (8) The assembly rules of a community can self-organise to recreate past environmental states. (16) Ecological communities can exhibit organised collective behaviours. Under certain conditions, memories of past ecological states can be stored in a distributed way in the organization of evolved ecological relationships. (19)
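The claimed equivalence between selection on inter-species interactions and Hebbian learning can be illustrated with a toy Hopfield-style model, in which interaction coefficients play the role of synaptic weights. The community states below are hypothetical (+1 species present, -1 absent):

```python
import numpy as np

def hebbian_weights(patterns):
    """Store past community states via the Hebbian rule, the associative
    mechanism that Power et al. equate with selection acting on
    inter-species interaction coefficients."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)  # no self-interaction
    return W / n

def recall(W, state, steps=20):
    """Ecological dynamics as attractor recall: repeated updates pull a
    perturbed community back toward a stored past configuration."""
    s = state.astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
    return s

# two hypothetical past community states (+1 species present, -1 absent)
past = np.array([[1, 1, -1, -1, 1, -1],
                 [-1, 1, 1, -1, -1, 1]])
W = hebbian_weights(past)
cue = np.array([1, 1, -1, -1, -1, -1])  # degraded version of past[0]
print(recall(W, cue))  # settles back to the stored state past[0]
```

This is the distributed "ecological memory" of the abstract: the stored states live in the pattern of evolved interactions, not in any single species.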

Sole, Ricard, et al. Liquid Brains, Solid Brains: How Distributed Cognitive Architectures Process Information. Philosophical Transactions of the Royal Society B. 374/20190040, 2019. With Melanie Moses and Stephanie Forrest, an introduction to papers from a December 2017 Santa Fe Institute seminar with this title, which conveys how many domains across life's evolution, universe to human, can actually be perceived as some manner of neural-like intelligent process. We note, for example, Metabolic Basis of Brain-like Electrical Signalling in Bacterial Communities, Plant Behavior in Response to the Environment (Duran-Nebreda & Bassel herein), Statistical Physics of Liquid Brains (Pinero & Sole), The Compositional Stance in Biology, and A Brief History of Liquid Computers. A tacit theme for this work is a further major evolutionary transition in individuality.

Cognitive networks have evolved a broad range of solutions to the problem of gathering, storing and responding to information. Some of these networks are describable as static sets of neurons linked in an adaptive web of connections. These are ‘solid’ networks, with a well-defined and physically persistent architecture. Other systems are formed by sets of agents that exchange, store and process information but move relative to each other in physical space, without persistent connections. We refer to these networks that lack stable connections and static elements as ‘liquid’ brains, a category that includes ant and termite colonies, immune systems and some microbiomes and slime moulds. What are the key differences between solid and liquid brains, particularly in their cognitive potential, ability to solve particular problems and environments, and information-processing strategies? To answer this question requires a new, integrative framework. (Abstract)

Suchow, Jordan, et al. Evolution in Mind: Evolutionary Dynamics, Cognitive Processes, and Bayesian Inference. Trends in Cognitive Sciences. 21/7, 2017. UC Berkeley psychologists including Thomas Griffiths find a robust theoretical affinity between organismic complexities and their relative knowledge content by way of iterative algorithmic optimizations. Over life's evolution, creatures become smarter, learn more, and gain better memories, which culminates, it might seem or even be aimed, in our collaborative retro-recognition.

Evolutionary theory describes the dynamics of population change in settings affected by reproduction, selection, mutation, and drift. In the context of human cognition, evolutionary theory is most often invoked to explain the origins of capacities such as language, metacognition, and spatial reasoning, framing them as functional adaptations to an ancestral environment. However, evolutionary theory is useful for understanding the mind in a second way: as a mathematical framework for describing evolving populations of thoughts, ideas, and memories within a single mind. In fact, deep correspondences exist between the mathematics of evolution and of learning, with perhaps the deepest being an equivalence between certain evolutionary dynamics and Bayesian inference. This equivalence permits reinterpretation of evolutionary processes as algorithms for Bayesian inference and has relevance for understanding diverse cognitive capacities, including memory and creativity. (Abstract)

Correspondences between the mathematics of evolution and learning permit reinterpretation of evolutionary processes as algorithms for Bayesian inference. Cognitive processes such as memory maintenance and creativity can be formally modelled using the framework of evolutionary dynamics. (Trends)
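The equivalence the authors describe is easy to exhibit: the discrete replicator update x_i' ∝ x_i f_i and Bayes' rule P(h|d) ∝ P(h) P(d|h) are the same normalization of a prior-times-weight product. A minimal sketch with illustrative numbers:

```python
import numpy as np

def replicator_step(freqs, fitness):
    """Discrete replicator dynamics: each type grows in proportion
    to its fitness, then frequencies are renormalized."""
    new = freqs * fitness
    return new / new.sum()

def bayes_update(prior, likelihood):
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    post = prior * likelihood
    return post / post.sum()

# Identifying type frequencies with a prior distribution over hypotheses,
# and fitnesses with likelihoods, makes the two updates identical.
x = np.array([0.5, 0.3, 0.2])   # frequencies / prior (illustrative)
f = np.array([1.0, 2.0, 0.5])   # fitnesses / likelihoods (illustrative)
print(replicator_step(x, f))
print(bayes_update(x, f))       # same numbers: selection is inference
```

Under this reading, a generation of selection performs one step of Bayesian updating over the "hypotheses" present in the population, which is the reinterpretation the paper applies to memory and creativity.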

Szilagyi, Andras, et al. Phenotypes to Remember: Evolutionary Developmental Memory Capacity and Robustness. PLoS Computational Biology. November 30, 2020. Centre for Ecological Research, Tihany, Hungary researchers including Eors Szathmary post a latest implication that life's course has a main orientation toward intelligence and cognition. As they state, the analogy between neural and genetic regulatory networks is not superficial, in that it allows knowledge transfer between fields that used to be developed separately from each other. In regard, genotype-phenotype mappings are a “highly non-linear process” which is seen to advance by reference to prior represented experiences. See also The World as a Neural Network by the cosmologist Vitaly Vanchurin (2020).

The development of individual organisms from embryo to adult state is under the control of many genes. During development the initially active genes activate other genes, which in turn change the composition of regulatory elements. The behavior of genetic regulatory systems shows similarities to neural networks, such as developmental memory, the ability to quickly adapt to environments that have occurred in the past. We investigated the properties of this system; the developmental pathways that can be “memorized”, and its robustness against disturbances affecting either the embryo state or the gene interaction networks. (Summary excerpt)

Valiant, Leslie. Probably Approximately Correct: Nature's Algorithms for Learning and Prospering in a Complex World. New York: Basic Books, 2013. Reported more in Natural Algorithms, and cited in the lead Kate Douglas article, a Harvard University mathematician broaches an innovative view of life's advance as due to interactions between creatures and their environment. This novel evolution is a learning process that proceeds by way of algorithmic computations, dubbed “ecorithms” in this sense because they try to reach a sufficient optimum. In so doing, Valiant joins current Bayesian, rationality (Okasha), Markov, and genetic approaches that not only view Darwinian evolution as a computational learning mechanism, but seem to imply the entire universe is some optimization or maximization process.

Vanchurin, Vitaly, et al. Towards a Theory of Evolution as Multilevel Learning. arXiv:2110.14602. Vanchurin, Yuri Wolf, and Eugene Koonin, US National Library of Medicine, along with Mikhail Katsnelson, Radboud University (search each), continue their expansive, multi-faceted project, which broadly reflects their Russian heritage and holistic worldview. Amongst other contributions, this entry takes the growing sense that life's sensory development is distinguished by an ascendant cerebral intelligence and cognitive knowledge to its fullest implications. In regard, the major transitions scale can be further tracked by information-processing abilities by way of deep neural network methods. An overall vista proceeds from a physical basis by sequential stages to our human phase as some manner of a natural self-discovery. See also a companion paper, Thermodynamics of Evolution and the Origin of Life, by this team herein and at arXiv:2110.15066.

We apply the theory of learning to physically renormalizable systems as a way to develop a theory of biological evolution, and the origin of life, as an oriented process of multilevel learning. We formulate seven vital evolutionary principles so as to show how they entail replication and natural selection and follow naturally from the theory of learning. A further approach uses the mathematical framework of neural networks to model evolutionary phenomena and indicate their presence in life's cognitive advance. More complex phenomena, such as major transitions in evolution, are studied at their thermodynamic limit, which is described in the accompanying paper (Vanchurin, et al, arXiv:2110.15066, herein). (Abstract excerpt)

Modern evolutionary theory gives a detailed quantitative description of processes that occur within evolving populations of organisms, but its transitional scale and the emergence of multiple complex levels remain less understood. Here we establish a correspondence between the key features of evolution, renormalized physical theories and learning dynamics. In this way life's development can gain a unified mathematical framework which emphasizes intelligence and knowledge content. Under this theory the same learning phenomena occur on multiple levels or on different scales, similar to the case of renormalizable physical theories. (Significance excerpt)

Watson, Richard and Eors Szathmary. How Can Evolution Learn? Trends in Ecology & Evolution. 31/2, 2016. A companion survey to the Evolutionary Connectionism paper by Watson, et al, cited herein, that adds exposition to this well researched conceptual identity between a Darwinian interpretation and complex neural systems phenomena. By this view, an emergent evolution can be seen to proceed in a similar way to how brains gain and employ knowledge. In both cases, new experience (mutation, environmental changes, better information) is referred to and compared with some manner of stored memory to reach an appropriate response, be it organic form or group behavior. Life’s developmental course (evo-devo) is thus akin to and a mode of cerebral comprehension. As the glossary quotes convey, the major transitions scale can be seen to affirm an ascendance of individual self-realization.

These contributions, published in primary journals, augur a novel evolutionary synthesis based on algorithmic and connectome dynamics unknown much earlier. But further consideration is in order. While the phrase “Darwinian Machine” is still used as an overall description, the issue of what kind of conducive cosmic milieu must be in place for life to occur, arise, develop, learn, and attain self-individuation is hardly engaged. As the Algorithm definition reflects, while a program-like source is entered, it remains an external agency attributed to Darwinian variation and selection. Other members of this extended group, such as Erick Chastain, et al in Algorithms, Games, and Evolution herein, do allow a prior, internal code at work self-organizing before subsequent influences. But a historic revision which joins several other 2010s advances so as to confirm Charles Darwin's actual “universal gestation” remains to be achieved.

The theory of evolution links random variation and selection to incremental adaptation. In a different intellectual domain, learning theory links incremental adaptation (e.g., from positive and/or negative reinforcement) to intelligent behaviour. Specifically, learning theory explains how incremental adaptation can acquire knowledge from past experience and use it to direct future behaviours toward favourable outcomes. Until recently such cognitive learning seemed irrelevant to the ‘uninformed’ process of evolution. In our opinion, however, new results formally linking evolutionary processes to the principles of learning might provide solutions to several evolutionary puzzles – the evolution of evolvability, the evolution of ecological organisation, and evolutionary transitions in individuality. If so, the ability for evolution to learn might explain how it produces such apparently intelligent designs. (Abstract)

Trends: A simple analogy between learning and evolution is common and intuitive. But recently, work demonstrating a deeper unification has been expanding rapidly. Formal equivalences have been shown between learning and evolution in several different scenarios, including: selection in asexual and sexual populations with Bayesian learning, the evolution of genotype – phenotype maps with correlation learning, evolving gene regulation networks with neural network learning, and the evolution of ecological relationships with distributed memory models. This unification suggests that evolution can learn in more sophisticated ways than previously realised and offers new theoretical approaches to tackling evolutionary puzzles such as the evolution of evolvability, the evolution of ecological organisations, and the evolution of Darwinian individuality.

Algorithm: a self-contained step-by-step set of instructions describing a process, mechanism, or function. An algorithmic description of a mechanism is sufficiently abstract to be ‘multiply realisable’, i.e., it may be instantiated or implemented in different physical substrates (e.g., biological, computational, mechanical) whilst producing the same results. For example, Darwin's account of evolutionary adaptation (via repeated applications of variation, selection, and inheritance) is fundamentally algorithmic and hence encompasses many possible instantiations.
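The glossary's point that the Darwinian account is fundamentally algorithmic can be instantiated in a few lines; the bitstring genome and all-ones fitness target below are hypothetical stand-ins, chosen only to show the variation-selection-inheritance loop as an abstract, substrate-independent procedure:

```python
import random

def evolve(fitness, length=20, pop_size=50, generations=100, mu=0.02):
    """A minimal instance of the Darwinian algorithm: repeated variation
    (per-bit mutation), selection (fitness-biased parenthood), and
    inheritance (offspring copy their parent's genome)."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        weights = [fitness(g) + 1e-9 for g in pop]          # selection
        parents = random.choices(pop, weights=weights, k=pop_size)
        pop = [[(1 - b) if random.random() < mu else b      # variation
                for b in g]                                 # inheritance
               for g in parents]
    return max(pop, key=fitness)

random.seed(42)
best = evolve(fitness=sum)  # hypothetical target: the all-ones string
print(sum(best))
```

The same loop could be "implemented in different physical substrates"; nothing in it refers to chemistry or biology, which is exactly the multiple-realisability the definition names.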

Evolutionary connectionism: a developing theory for the evolution of biological organisation based on the hypothesis that the positive feedback between network topology and behaviour, well understood in neural network models (e.g., Hebbian learning), is common to the evolution of developmental, ecological, and reproductive organisations.

Major evolutionary transitions: evolutionary innovations that have changed the evolutionary unit (the level of biological organisation that exhibits heritable variation in reproductive success): from self-replicating molecules, to chromosomes, to simple cells, to multiorganelle eukaryote cells, to multicellular organisms, to social groups.

Evo-Ego: The Evolution of Individuality and Deep Correlation Learning: In major evolutionary transitions, entities that were capable of independent replication before the transition can replicate only as part of a larger whole after the transition. These transitions in individuality involve the evolution of new mechanisms of inheritance or reproductive codispersal (e.g., vertical genetic transmission, compartmentalisation, reproductive linkage) that create new evolutionary units.
