VI. Life’s Cerebral Cognizance Becomes More Complex, Smarter, Informed, Proactive, Self-Aware

1. Intelligence Evolution and Knowledge Gain as a Central Course

Vanchurin, Vitaly, et al. Towards a Theory of Evolution as Multilevel Learning. arXiv:2110.14602. Vitaly Vanchurin, Yuri Wolf, and Eugene Koonin, US National Library of Medicine, along with Mikhail Katsnelson, Radboud University (search each), continue their expansive, multi-faceted project, which broadly reflects their Russian heritage and holistic worldview. Among other contributions, this entry takes the growing sense that life’s sensory development is distinguished by an ascendant cerebral intelligence and cognitive knowledge to its fullest implications. In this regard, the major transitions scale can be further tracked by information-processing abilities by way of deep neural network methods. An overall vista proceeds from a physical basis by sequential stages to our human phase as some manner of a natural self-discovery. See also a companion paper, Thermodynamics of Evolution and the Origin of Life, by this team herein and at (2110.15066). We apply the theory of learning to physically renormalizable systems in order to develop a theory of biological evolution, and of the origin of life, as an oriented process of multilevel learning. We formulate seven vital evolutionary principles so as to show how they entail replication, natural selection and more, and follow naturally from the theory of learning. A further approach uses the mathematical framework of neural networks so as to indicate their presence in evolutionary phenomena. More complex phenomena, such as major transitions in evolution, are studied at their thermodynamic limit, which is described in the accompanying paper (V. Vanchurin, 2110.15066, herein). (Abstract excerpt)

Watson, Richard. Agency, Goal-Directed Behavior, and Part-Whole Relationships in Biological Systems. Biological Theory. November, 2023.
The University of Southampton, UK computer scientist continues his decadal endeavor (search) to define, quantify and express the evident turn and realization that all manner of beings, as they evolve, gather, learn and emerge, are most distinguished by their ascendant motivation and intelligence. It is also alluded that this vectorial quality may extend to a considerate physical universe. This essay will go on to consider a minimal but concrete notion of agency and goal-directed behavior that is useful for characterizing biological systems at different scales. By so doing, we will join concepts from dynamical systems, combinatorial problem-solving, and connectionist learning with an emphasis on the relationship between parts and wholes. Our focus is on how the agency of a system can be more than the sum of the agency of its parts in terms of a problem-solving competency with respect to resolutions between its parts. This conception of agency helps us think about the ways in which cells, organisms, and perhaps other biological scales, can be more agential than their parts in a quantifiable sense. (Excerpt)
Watson, Richard and Eors Szathmary. How Can Evolution Learn? Trends in Ecology & Evolution. 31/2, 2016.
A companion survey to the Evolutionary Connectionism paper by Watson, et al, cited herein, that adds exposition to this well-researched conceptual identity between a Darwinian interpretation and complex neural systems phenomena. By this view, an emergent evolution can be seen to proceed in a similar way to how brains gain and employ knowledge. In both cases, new experience (mutation, environmental changes, better information) is referred to and compared with some manner of stored memory so as to reach an appropriate response, be it organic form or group behavior. Life’s developmental course (evo-devo) is thus akin to, and a mode of, cerebral comprehension. As the glossary quotes convey, the major transitions scale can be seen to affirm an ascendance of individual self-realization. The theory of evolution links random variation and selection to incremental adaptation. In a different intellectual domain, learning theory links incremental adaptation (e.g., from positive and/or negative reinforcement) to intelligent behaviour. Specifically, learning theory explains how incremental adaptation can acquire knowledge from past experience and use it to direct future behaviours toward favourable outcomes. Until recently such cognitive learning seemed irrelevant to the ‘uninformed’ process of evolution. In our opinion, however, new results formally linking evolutionary processes to the principles of learning might provide solutions to several evolutionary puzzles – the evolution of evolvability, the evolution of ecological organisation, and evolutionary transitions in individuality. If so, the ability for evolution to learn might explain how it produces such apparently intelligent designs. (Abstract)

Watson, Richard and Eors Szathmary. How Can Evolution Learn? – A Reply to Responses. Trends in Ecology and Evolution. Online October, 2015.
The authors of an article in the journal (31/2, search) with this title review comments, also online in October, by Marion Blute, Adi Livnat and Christos Papadimitriou, Ferenc Jordan, and Indre Zliobaite and Nils Stenseth. As evident from the New Scientist cover story that leads this section, their innovative view, with colleagues, of a computational, neural-network, deep-learning evolutionary essence seems to be gaining steady credence. We quote a good capsule from the paper by Zliobaite, University of Helsinki geoinformatics, and Stenseth, University of Oslo evolutionary biology. Watson and Szathmáry have presented an intriguing idea linking evolution through natural selection to algorithmic learning. They argue that algorithmic learning can provide models for evolutionary processes; specifically, that evolution of evolvability is similar to generalization in supervised learning (neural networks as an example), that evolution of ecological organization is analogous to unsupervised learning (clustering), and that evolutionary transitions in individuality are comparable to specific forms of deep learning, namely learning of multilayered model structures.

Watson, Richard, et al. Evolutionary Connectionism: Algorithmic Principles Underlying the Evolution of Biological Organisation in Evo-Devo, Evo-Eco and Evolutionary Transitions. Evolutionary Biology. Online December, 2015. Since 2012 and earlier, a creative team effort based at Watson’s University of Southampton, but widely including Chrisantha Fernando, Eors Szathmary, Gunter Wagner, Kostas Kouvaris, Eric Chastain (noted herein) and many others, has steadily refined the title concept. In this computational age, a robust correspondence between nuanced Darwinian selection and neural-net cognitive learning is increasingly evident. A multi-scale emergence results in organisms and associations distinguished by iterative, nested correlations among development, environments, retained experience and appropriate responses.
This paper, and its companion How Can Evolution Learn? by Watson and Szathmary, make a strong case for a radical 21st century elaborative synthesis. As the third quoted paragraph alludes, a basis for an oriented direction beyond tinker and meander is at last being elucidated. The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation.

Watson, Richard, et al. The Evolution of Phenotypic Correlations and “Developmental Memory.” Evolution. 68/4, 2014. As evolutionary theory proceeds to expand and merge with the complexity and computational sciences, Watson, with Gunter Wagner, Michaela Pavlicev, Dan Weinreich and Rob Mills, proposes that neural networks can be an exemplary, informative model. In this significant paper, life’s development by way of natural selection can be better appreciated as a long, iterative learning experience. Just as a brain compares new inputs to prior knowledge, so life evolves by reconciling environmental impacts with prior “genetic distributed associative memories.” A novel view of evolutionary emergence thus opens, beyond random selection, as another sense that something embryonic and cerebral is going on. A prior paper, Global Adaptation in Networks of Selfish Components by Watson, et al (Artificial Life, 17/2, 2011), treats neural nets as complex adaptive systems.
Development introduces structured correlations among traits that may constrain or bias the distribution of phenotypes produced. Moreover, when suitable heritable variation exists, natural selection may alter such constraints and correlations, affecting the phenotypic variation available to subsequent selection. However, exactly how the distribution of phenotypes produced by complex developmental systems can be shaped by past selective environments is poorly understood. Here we investigate the evolution of a network of recurrent nonlinear ontogenetic interactions, such as a gene regulation network, in various selective scenarios. We find that evolved networks of this type can exhibit several phenomena that are familiar in cognitive learning systems. These include formation of a distributed associative memory that can “store” and “recall” multiple phenotypes that have been selected in the past, recreate complete adult phenotypic patterns accurately from partial or corrupted embryonic phenotypes, and “generalize” (by exploiting evolved developmental modules) to produce new combinations of phenotypic features. We show that these surprising behaviors follow from an equivalence between the action of natural selection on phenotypic correlations and associative learning, well-understood in the context of neural networks. This helps to explain how development facilitates the evolution of high-fitness phenotypes and how this ability changes over evolutionary time. (Abstract)

Zhu, Jiafan, et al. In the Light of Deep Coalescence: Revisiting Trees Within Networks. arXiv:1606.07350. Rice University computer scientists show how evolutionary histories can take on reticulate phylogenetic topologies which are more realistic than just branching lineages.
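The “store and recall” behaviour that Watson et al. describe above is the signature of a Hopfield-style associative memory. The following minimal sketch, offered only as an illustrative analogy and not as the paper’s actual gene-network model, shows that behaviour: a few arbitrary “phenotype” patterns (sizes and seed are assumptions) are stored via Hebbian correlations, and the network dynamics then recreate a complete pattern from a corrupted one, much as development is said to recreate an adult phenotype from a partial embryonic state.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                       # number of "traits" (units)
patterns = rng.choice([-1, 1], size=(3, n))  # three stored "phenotypes"

# Hebbian storage: weights record correlations between co-selected traits,
# analogous to selection shaping phenotypic correlations.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def recall(state, sweeps=5):
    """Asynchronous updates; each unit aligns with its local field.
    Analogous to development restoring a full adult pattern."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt 10 of 64 traits, then let the dynamics restore the pattern.
noisy = patterns[0].copy()
flip = rng.choice(n, size=10, replace=False)
noisy[flip] *= -1
restored = recall(noisy)
print(float((restored == patterns[0]).mean()))  # fraction of traits recovered
```

With only three patterns in 64 units the network is far below the Hopfield capacity limit, so recall from this level of corruption is reliable; the analogy in the paper is that past selection plays the role of the Hebbian storage step.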