Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

V. Systems Evolution: A 21st Century Genesis Synthesis

D. An Evolutionary Intelligence Arises

Fernando, Chrisantha, et al. The Neuronal Replicator Hypothesis. Neural Computation. 22/2809, 2010. As this decadal review expresses, in the 21st century many cross-fertilizations between natural and social fields are underway by our collaborative humankind. Here neuroscientists Chrisantha Fernando, Richard Goldstein and Eors Szathmary propose the presence of evolutionary algorithms in cerebral functions, which search, improve, and select as we learn and think. See also Evolvable Neuronal Paths: A Novel Basis for Information and Search in the Brain by CF, et al, in PLoS One (6/8, 2011).

We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cognitive architectures. Replicators overcome some limitations of selectionist models of neuronal search. (Abstract)
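
The proposal is, in effect, the standard evolutionary-algorithm loop run on patterns of neuronal activity: replicate with occasional mutation, then select. A minimal sketch of that generic loop, assuming a toy bit-string "pattern" and an arbitrary fitness function rather than the authors' neurophysiological implementation:

    import random

    def evolve(fitness, length=20, pop_size=50, generations=100, mu=0.02):
        # Minimal evolutionary algorithm: selection plus copy-with-mutation.
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)      # selection
            survivors = pop[:pop_size // 2]          # fitter half persists
            children = [[1 - b if random.random() < mu else b for b in p]
                        for p in survivors]          # replication with mutation
            pop = survivors + children
        return max(pop, key=fitness)

    # Toy fitness: the count of 1s stands in for how well a pattern performs.
    print(evolve(fitness=sum))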

Hasson, Uri, et al. Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks. Neuron. 105/3, 2020. In another example of cerebral cognition methods being applied widely, Princeton University neuroscientists point out how brain-based topologies and operations can have analytic utility in many other areas. Section headings include Interpolation and Extrapolation, Generalization Based on Partial and Big Data, and The Power of Adaptive Fit in Evolution. As the quotes allude, parallels can then be drawn between human cerebration and life's neo-Darwinian course, whereby organisms via sensory apparatus and activities must find a good enough way to survive and evolve. See also A Critique of Pure Learning and What Artificial Neural Networks can Learn from Animal Brains by Anthony Zador in Nature Communications (10/3770, 2019).

Evolution is a blind fitting process by which organisms become adapted to their environment. Does the brain use similar brute-force fitting processes to learn how to perceive and act upon the world? Recent advances in artificial neural networks have exposed the power of optimizing millions of synaptic weights over millions of observations to operate robustly in real-world contexts. These models do not learn simple, human-interpretable rules or representations of the world; rather, they use local computations to interpolate over task-relevant manifolds in a high-dimensional parameter space. Similar to evolutionary processes, over-parameterized models can be simple and parsimonious, as they provide a versatile, robust solution for learning a diverse set of functions. This new family of direct-fit models poses a radical challenge to many of the theoretical assumptions in psychology and neuroscience. (Abstract)
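
The interpolation versus extrapolation contrast can be made concrete in a few lines of numpy. A hedged illustration, with an invented sine target and an arbitrary polynomial degree standing in for an over-parameterized model:

    import numpy as np

    rng = np.random.default_rng(0)
    x_train = rng.uniform(-1, 1, 50)
    y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=50)

    # Over-parameterized fit: a degree-20 polynomial for 50 noisy points.
    coeffs = np.polyfit(x_train, y_train, deg=20)

    x_inside = np.linspace(-0.9, 0.9, 5)    # within the sampled manifold
    x_outside = np.linspace(1.5, 2.5, 5)    # beyond any training data
    print(np.polyval(coeffs, x_inside))     # tracks sin(3x) reasonably well
    print(np.polyval(coeffs, x_outside))    # diverges wildly off-manifold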

Evolution Is an Iterative Optimization Process over Many Generations: Evolution by natural selection is a mindless optimization process by which organisms are adapted over many generations according to environmental constraints. This artistic rendition of the phylogenetic tree highlights how all living organisms on Earth can be traced back to the same ancestral organisms. Humans and other mammals descend from shrew-like mammals that lived 150 million years ago; mammals, birds, reptiles, amphibians, and fish share a common ancestor; and all plants and animals derive from bacteria-like microorganisms that originated more than 3 billion years ago. (Figure 3, evogeneao.com)

Hochberg, Michael, et al. Innovation: An Emerging Focus from Cells to Societies. Philosophical Transactions of the Royal Society B. 372/1736, 2017. Hochberg, University of Montpellier, Pablo Marquet, Santa Fe Institute, Robert Boyd, Arizona State University, and Andreas Wagner, University of Zurich, introduce a focus issue with this title. We place its 16 papers by leading researchers and theorists in this section because they attempt to specify an important tendency of life's developmental evolution to seek behavioral, communal, and artificial novelties for survival and thrival. With a notice of the major transitions scale, an intensifying cumulative culture and intelligent knowledge can be traced from microbe colonies to civilizations. A. Wagner has long been an advocate of this view (search), which he reviews here in Information Theory, Evolutionary Innovations and Evolvability. Douglas Erwin follows with The Topology of Evolutionary Novelty. Some other entries are The Origin of Heredity in Protocells, Nascent Life Cycles and the Emergence of Higher-Level Individuality, and Innovation and the Growth of Human Population.

This insight into life's personal and communal cleverness, as the episodic tandem of complexity and sentience arises to our global retrospect, alludes to a quickening individuality. Although not noted, a companion effort is the Open-Ended Creativity school of Wolfgang Banzhaf, Hector Zenil, Sara Walker, Ricard Sole and company, search each. While the ratio of male to female authors in the issue remains 10 to 1, a luminous entry is Innovation and Social Transmission in Experimental Micro-Societies: Exploring the Scope of Cumulative Culture in Young Children by Nicola McGuigan, Emily Burdett, Vanessa Burgess, Lewis Dean, Amanda Lucas, Gillian Vale, and Andrew Whiten. An Abstract for this iconic microcosm is the third quote.

Innovations are generally unexpected, often spectacular changes in phenotypes and ecological functions. The contributions to this theme issue are the latest conceptual, theoretical and experimental developments, addressing how ecology, environment, ontogeny and evolution are central to understanding the complexity of the processes underlying innovations. Here, we set the stage by introducing and defining key terms relating to innovation and discuss their relevance to biological, cultural and technological change. Discovering how the generation and transmission of novel biological information, environmental interactions and selective evolutionary processes contribute to innovation as an ecosystem will shed light on how the dominant features across life come to be, generalize to social, cultural and technological evolution, and have applications in the health sciences and sustainability. (Main Abstract)

How difficult is it to ‘discover’ an evolutionary adaptation or innovation? I here suggest that information theory, in combination with high-throughput DNA sequencing, can help answer this question by quantifying a new phenotype's information content. I apply this framework to compute the phenotypic information associated with novel gene regulation and with the ability to use novel carbon sources. The framework can also help quantify how DNA duplications affect evolvability, estimate the complexity of phenotypes and clarify the meaning of ‘progress’ in Darwinian evolution. (Wagner Abstract)
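
In the spirit of Wagner's framework (our paraphrase for orientation, not his exact notation), a phenotype's information content can be taken as the rarity of the genotypes that produce it:

    I(P) = -\log_2 \frac{|G_P|}{|G|}

where G_P is the set of genotypes yielding phenotype P and G is the whole genotype space. If, say, one genotype in 2^20 (about a million) produces a novel phenotype, discovering it amounts to acquiring roughly 20 bits of phenotypic information.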

The experimental study of cumulative culture and the innovations essential to it is a young science, with child studies so rare that the scope of cumulative cultural capacities in childhood remains largely unknown. Here we report a new experimental approach to the inherent complexity of these phenomena. Groups of 3–4-year-old children were presented with an elaborate array of challenges affording the potential cumulative development of a variety of techniques to gain increasingly attractive rewards. We found evidence for elementary forms of cumulative cultural progress, with inventions of solutions at lower levels spreading to become shared innovations, and some children then building on these to create more advanced but more rewarding innovations. This contrasted with markedly more constrained progress when children worked only by themselves, or if groups faced only the highest-level challenges from the start. Our results show children are not merely ‘cultural sponges’, but when acting in groups, display the beginnings of cycles of innovation and observational learning that sustain cumulative progress in problem solving. (McGuigan Abstract)

Khajehabdollahi, Sina and Olaf Witkowski. Critical Learning vs. Evolution. Ikegami, Takashi, et al, eds. ALIFE 2018 Conference Proceedings. Cambridge: MIT Press, 2018. A select paper from this online volume (Ikegami) by University of Western Ontario and Earth-Life Science Institute, Tokyo biophysicists who seek better insights into life's quickening sentience by way of inherent complexity principles. By current turns, these dynamic phenomena appear to be increasingly cerebral in kind and function. In an extension of this view, just as brains are found to prefer and reside in a critically poised optimum state, so it seems does evolutionary developmental emergence.

Criticality is thought to be crucial for complex systems to adapt, at the boundary between regimes with different dynamics, where the system may transition from one phase to another. Numerous systems, from sandpiles to gene regulatory networks, to swarms and human brains, seem to work towards preserving a precarious balance right at their critical point. Understanding criticality therefore seems strongly related to a broad, fundamental theory for the physics of life as it could be, which still lacks a clear description of how it can arise and maintain itself in complex systems. (Abstract excerpt)

Understanding the utility of criticality in artificial life systems is important for understanding how complexity can self-organize into predictable but adaptive systems. This project applied the methods of critical learning to a community of Ising-embodied organisms subject to evolutionary selection pressures in order to understand how criticality affects the behavior and genotypes of the organisms and how these changes in turn affect the fitness and adaptability of the community. (53)
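
The Ising lattice in which the organisms are embodied can be simulated in a few lines; near the critical temperature, fluctuations span all scales. A minimal Metropolis sketch of the generic two-dimensional model (lattice size and sweep count are arbitrary, and this is the textbook model, not the paper's evolutionary community):

    import numpy as np

    def metropolis_sweep(spins, T, rng):
        # One Metropolis sweep of a 2D Ising lattice, periodic boundaries.
        n = spins.shape[0]
        for _ in range(n * n):
            i, j = rng.integers(0, n, 2)
            nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                  + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
            dE = 2 * spins[i, j] * nb       # energy cost of flipping (i, j)
            if dE <= 0 or rng.random() < np.exp(-dE / T):
                spins[i, j] *= -1

    rng = np.random.default_rng(1)
    spins = rng.choice([-1, 1], size=(32, 32))
    Tc = 2 / np.log(1 + np.sqrt(2))         # 2D critical temperature, ~2.269
    for _ in range(200):
        metropolis_sweep(spins, Tc, rng)
    print(abs(spins.mean()))                # magnetization fluctuates at Tc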

Olaf Witkowski’s research tackles distributed intelligence in living systems and societies, employing the tools of artificial life, connectionist learning, and information theory, to reach a better understanding of the following triptych of complex phenomena: the emergence of information flows that led to the origins of life, the evolution of intelligence in the major evolutionary transitions, the expansion of communication and cooperation in the future of the bio- and technosphere. (OW website)

Kounios, Loizos, et al. How Evolution Learns to Improve Evolvability on Rugged Fitness Landscapes. arXiv:1612.05955. A contribution to the increasingly explanatory synthesis of cognitive learning theories with life’s developmental emergence. Coauthors include Richard Watson, Gunter Wagner, Jeff Clune and Mihaela Pavlicev.

Kouvaris, Kostas, et al. How Evolution Learns to Generalise. arXiv:1508.06854. A University of Southampton group including Richard Watson, along with Jeff Clune, University of Wyoming, continue the innovative realization that much more than random mutation and selection must be at work for life to become increasingly aware, smart, and cognizant. By this novel insight, a progression of self-organizing neural, connectionist networks are seen to engender a quickening emergence. Due to a “deep homology,” creatures compare and assimilate new experience with a prior corpus of representations, much as a human brain does. A parallel chart of Learning and Evolutionary Theory matches up well, so as to reveal a genesis synthesis of universal gestation. A beneficial inference would be the further advent of a worldwise sapiensphere coming to her/his own bicameral knowledge, which indeed is the premise of this website.

One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments which is crucial for evolvability. Recent work showed that when selective environments vary in a systematic manner, it is possible that development can constrain the phenotypic space in regions that are evolutionarily more advantageous. Yet, the underlying mechanism that enables the spontaneous emergence of such adaptive developmental constraints is poorly understood. How can natural selection, given its myopic and conservative nature, favour developmental organisations that facilitate adaptive evolution in future previously unseen environments? Such capacity suggests a form of foresight facilitated by the ability of evolution to accumulate and exploit information not only about the particular phenotypes selected in the past, but regularities in the environment that are also relevant to future environments.

Here we argue that the ability of evolution to discover such regularities is analogous to the ability of learning systems to generalise from past experience. Conversely, the canalisation of evolved developmental processes to past selective environments and failure of natural selection to enhance evolvability in future selective environments is directly analogous to the problem of over-fitting and failure to generalise in machine learning. We show that this analogy arises from an underlying mechanistic equivalence by showing that conditions corresponding to those that alleviate over-fitting in machine learning enhance the evolution of generalised developmental organisations under natural selection. This equivalence provides access to a well-developed theoretical framework that enables us to characterise the conditions where natural selection will find general rather than particular solutions to environmental conditions. (Abstract)

To achieve this we follow previous work on the evolution of development through computer simulations of the evolution of phenotypic correlations based in gene-regulatory network (GRN) models. Such GRN models bear many resemblances to artificial neural networks in machine learning regarding their functionality and structure. Watson et al. demonstrated though that the way regulatory interactions evolve under natural selection is, in fact, equivalent to the way neural networks learn. Accordingly, a GRN evolves a memory of its past selective environments by internalising their statistical correlation structure into its ontogenetic interactions, in the same way that learning neural networks store and recall training patterns. (2)
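
That equivalence can be glimpsed in a toy associative memory: a network whose weights accumulate the correlation structure of past selected phenotypes will later relax noisy states back toward those phenotypes, just as the quoted GRN internalizes its selective history. A minimal Hebbian sketch, with two invented target phenotypes standing in for past environments (the papers evolve gene-regulatory networks under selection rather than writing the weights directly):

    import numpy as np

    # Two 'past selective environments', encoded as target phenotypes.
    phenotypes = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                           [ 1,  1, -1, -1,  1,  1, -1, -1]])

    # Hebbian accumulation: weights internalize the correlation structure
    # of selected phenotypes, as regulatory interactions do under selection.
    W = sum(np.outer(p, p) for p in phenotypes).astype(float)
    np.fill_diagonal(W, 0)

    # Recall: a noisy state relaxes to the nearest stored phenotype.
    state = np.array([-1, 1, 1, 1, -1, -1, -1, -1])  # first target, one flip
    for _ in range(5):
        state = np.sign(W @ state)
    print(state)   # recovers the first stored phenotype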

Lehman, Joel, et al. The Surprising Creativity of Digital Evolution. arXiv:1803.03453. As its subtitle, A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, cites, some 50 coauthors who have engaged this project, such as Chris Adami, Peter Bentley, Stephanie Forrest, Laurent Keller, Carole Knibbe, Richard Lenski, Hod Lipson, Robert Pennock, Thomas Ray, and Richard Watson, offer their personal takes. One could then observe that parallel versions of life's long emergence now exist side by side – an older 19th and 20th century view of random natural selection, and this 21st century model with some manner of a generative source program at prior work. While the question of chance and/or law remains in abeyance, the entry scopes out an algorithmic, self-organizing, quickening genesis synthesis in ascendance. As a general conclusion, rather than aimless accident, a temporal, oriented course of open procreativity is traced going forward.

Biological evolution provides a creative fount of complex and subtle adaptations. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, its creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. We also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may be a universal property of all complex evolving systems. (Abstract excerpt)

Lin, Henry and Max Tegmark. Why does Deep and Cheap Learning Work so Well? arXiv:1608.08225. The Harvard and MIT polymaths review the recent successes of these neural net, multiscale, algorithmic operations (definitions vary) from a statistical physics context such as renormalization groups and symmetric topologies. The authors collaborated with Tomaso Poggio of the MIT Center for Brains, Minds, and Machines (Google), and others, in a study which could be seen to imply a self-educating genesis cosmos that is trying to decipher, describe, recognize and affirm itself.

We show how the success of deep learning depends not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can be approximated through "cheap learning" with exponentially fewer parameters than generic ones, because they have simplifying properties tracing back to the laws of physics. The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to renormalization group procedures. (Abstract)
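
One of the simplifying properties named here, polynomial log-probability, can be stated in a line. For the simplest physics-generated case, a Gaussian process,

    \ln p(x) = -\frac{(x - \mu)^2}{2\sigma^2} - \ln\left(\sigma\sqrt{2\pi}\right)

the log-probability is a polynomial of degree two in x, so a network approximating ln p needs only a handful of parameters rather than the exponentially many a generic function would demand. (The Gaussian is our illustration of the claim; the paper extends the argument to the broader class of distributions generated by physical laws.)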

Throughout this paper, we will adopt a physics perspective on the problem, to prevent application-specific details from obscuring simple general results related to dynamics, symmetries, renormalization, etc., and to exploit useful similarities between deep learning and statistical mechanics. (1)

Livnat, Adi. Simplification, Innateness, and the Absorption of Meaning from Context. Evolutionary Biology. Online March, 2017. Reviewed more in Systems Evolution, the University of Haifa theorist continues his project (search) to achieve a better explanation of life’s evolution by way of algorithmic computations, innate network propensities, genome – language affinities, neural net deep learning, and more.

Oudeyer, Pierre-Yves and Linda Smith. How Evolution May Work Through Curiosity-Driven Developmental Process. Topics in Cognitive Science. 8/2, 2016. The authors are a French Institute for Research in Computer Science (INRIA) director and the Indiana University neuropsychologist who was a 1990s founder, with the late Esther Thelen, of developmental systems theories for infant and child maturation. We include the contribution in this new section as another example of deep parallels between life's long course to ourselves and intrinsic natural propensities to advance in cognitive learning capabilities. See Oudeyer's publication page for more articles on this self-starter view.

Infants' own activities create and actively select their learning experiences. Here we review recent models of embodied information seeking and curiosity-driven learning and show that these mechanisms have deep implications for development and evolution. We discuss how these mechanisms yield self-organized epigenesis with emergent ordered behavioral and cognitive developmental stages. We describe a robotic experiment that explored the hypothesis that progress in learning, in and for itself, generates intrinsic rewards: The robot learners probabilistically selected experiences according to their potential for reducing uncertainty. In these experiments, curiosity-driven learning led the robot learner to successively discover object affordances and vocal interaction with its peers. We explain how a learning curriculum adapted to the current constraints of the learning system automatically formed, constraining learning and shaping the developmental trajectory. The observed trajectories in the robot experiment share many properties with those in infant development, including a mixture of regularities and diversities in the developmental patterns. Finally, we argue that such emergent developmental structures can guide and constrain evolution, in particular with regard to the origins of language. (Abstract)
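
The selection rule in the robotic experiment, picking experiences by their potential to reduce uncertainty, is the learning-progress form of intrinsic reward. A minimal sketch with two invented activities whose error-decay rates differ (the actual experiments used embodied sensorimotor learners, not these toy curves):

    import math
    import random

    # Each toy 'activity' has a prediction error that shrinks when practiced;
    # learning progress (the recent drop in error) is the intrinsic reward.
    errors = {"objects": 1.0, "vocal": 1.0}
    decay = {"objects": 0.10, "vocal": 0.02}   # how learnable each one is
    progress = {"objects": 0.0, "vocal": 0.0}

    for step in range(300):
        # Softmax choice: more recent progress, more likely to be practiced.
        weights = {a: math.exp(10 * p) for a, p in progress.items()}
        r = random.random() * sum(weights.values())
        for activity, w in weights.items():
            r -= w
            if r <= 0:
                break
        # Practice the chosen activity; record the resulting error reduction.
        old = errors[activity]
        errors[activity] *= 1 - decay[activity]
        progress[activity] = old - errors[activity]

    print(errors)   # the quickly learnable skill is mastered first

The curriculum emerges on its own: the fast-learnable activity dominates early, then attention shifts to the slower one once its progress signal wins out, a mixture of regularity and diversity akin to the developmental trajectories described above.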

Pinero, Jordi and Ricard Sole. Statistical Physics of Liquid Brains. Philosophical Transactions of the Royal Society B. 374/20180376, 2018. In a special Liquid Brains, Solid Brains issue (search Forrest), Institut de Biologia Evolutiva, Universitat Pompeu Fabra, Barcelona theorists consider how the same generic complex network system recurs across natural and social domains. While akin to genomes and ecosystems, an apt model is cerebral cognition, broadly conceived, by way of agental neurons and synaptic links in multiplex arrays. A prime attribute is a cross-conveyance of intelligence and information, aka biological computation, which is how animal groupings from invertebrates to mammals to people achieve collective decision-making.

Liquid neural networks (or ‘liquid brains’) are a widespread class of cognitive living networks characterized by a common feature: the agents move in space. Thus, no fixed, long-term agent-agent connections are maintained, in contrast with standard neural systems. How is this class of systems capable of displaying cognitive abilities, from learning to decision-making? In this paper, the collective dynamics, memory and learning properties of liquid brains are explored under the perspective of statistical physics. We review the generic properties of three large classes of systems, namely: standard neural networks (solid brains), ant colonies and the immune system. It is shown that, despite their intrinsic differences, these systems share key properties with neural systems in terms of formal descriptions, but depart in other ways. (Abstract excerpt)
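
The defining feature, cognition without fixed wiring, can be caricatured in a few lines: agents random-walk on a ring and each consults whoever happens to be nearby. A minimal sketch (the ring world, neighborhood radius, and majority rule are invented for illustration, not drawn from the paper's formalism):

    import random

    def ring_dist(x, y, length):
        d = abs(x - y) % length
        return min(d, length - d)

    # Mobile agents: connections are transient, set only by current position.
    N, L = 40, 100
    pos = [random.randrange(L) for _ in range(N)]
    opinion = [random.choice([0, 1]) for _ in range(N)]

    for step in range(2000):
        pos = [(p + random.choice([-1, 1])) % L for p in pos]  # random walk
        i = random.randrange(N)
        nearby = [opinion[j] for j in range(N)
                  if j != i and ring_dist(pos[i], pos[j], L) <= 3]
        if nearby:   # adopt the current local majority
            opinion[i] = 1 if 2 * sum(nearby) > len(nearby) else 0

    print(sum(opinion) / N)   # the collective drifts toward a shared choice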

Powell, Russell, et al. Convergent Minds: The Evolution of Cognitive Complexity in Nature. Interface Focus. 7/3, 2017. Powell, Boston University philosophy of science, Irina Mikhalevich, Humboldt University cognitive philosophy, Corina Logan, Cambridge University zoology, and Nicola Clayton, Cambridge University animal psychology, introduce a special issue on this title subject. The paper opens with a recounting of Stephen Jay Gould's 1990 claim that, because, as he held, no innate drive or direction exists, if the tape of life's course were played over, sapient human beings would not appear. It then goes on to note that many research findings into the 2010s strongly attest to the opposite conclusion. Typical entries are Evolutionary Convergence and Biologically Embodied Cognition by Fred Keijzer, The Foundations of Plant Intelligence by Anthony Trewavas, and Is Behavioral Flexibility Evidence of Cognitive Complexity? by Irina Mikhalevich, et al. As this site documents (e.g. Conway Morris, McGhee), across genomic and metabolic phases to especially cerebral qualities, a constant repetition of forms and capabilities does occur, which traces a constrained emergence. While a prior bias against any path or axis causes a hesitancy, since acceptance would require a new theory, these contributions and more suggest that life and mind appear to know where they are going, and how to get there.
