Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

VI. Life’s Cerebral Cognizance Becomes More Complex, Smarter, Informed, Proactive, Self-Aware

1. Intelligence Evolution and Knowledge Gain as a Central Course

Dodig-Crnkovic, Gordana. Morphological Computation and Learning to Learn in Natural Intelligent Systems and AI. arXiv:2004.02304. We note this entry because the Chalmers University of Technology (Sweden) computer theorist (search) proceeds to identify and enhance an apparently self-educating (autodidactic) natural evolutionary development. In this regard, a bioinformatic agency can be seen in effect, which is “a system able to act on its own behalf.” A defining quality of living organisms is to engage in cognitive interactions, which altogether results in nature’s way of composing itself. See also Natural Computational Architectures for Cognitive Info-Communication by G. D-C. at arXiv:2110.06339 for further insights.

At present, artificial intelligence in the form of machine learning is making impressive progress, especially in the field of deep learning (DL). Deep learning algorithms have been inspired from the beginning by nature, and specifically by the human brain. Learning from nature is a two-way process whereby computing is learning from neuroscience, while neuroscience is quickly adopting information processing models. The question is what the inspiration from computational nature at this stage of development can contribute to deep learning, and how much models and experiments in machine learning can motivate, justify and lead research in neuroscience and cognitive science toward practical applications of artificial intelligence. (Abstract)

Human intelligence has two distinct mechanisms of learning – quick, bottom-up, from data to patterns (System 1) and slow, top-down, from language to objects (System 2) – which have been recognized earlier. The starting point of old AI was System 2: symbolic, language- and logic-based reasoning, planning and decision making. However, it was without System 1. Now deep learning has grounding for its symbols in the data, but it lacks the System 2 capabilities needed to reach human-level intelligence and the ability to learn and meta-learn, that is, learning to learn. (1)

In this article we take primitive cognition to be the totality of processes of self-generation/self-organization, self-regulation and self-maintenance that enables organisms to survive using information from the environment. The understanding of cognition as it appears in degrees of complexity in living nature can help us better understand the step between inanimate and animate matter from the first autocatalytic chemical reactions to the first autopoietic proto-cells. (2)

Duran-Nebreda, Salva and George Bassel. Plant Behavior in Response to the Environment. Philosophical Transactions of the Royal Society B. 374/20190370, 2019. In a special Liquid Brains, Solid Brains issue (search Forrest), University of Birmingham, UK botanists describe how even floral vegetation can be seen to embody and avail a faculty of cognitive intelligence for its benefit.

Information processing and storage underpins many biological processes of vital importance to organism survival. Like animals, plants also acquire, store and process environmental information relevant to their fitness, and this is particularly evident in their decision-making. The control of plant organ growth and timing of their developmental transitions are carefully orchestrated by the collective action of many connected computing agents, the cells, in what could be addressed as distributed computation. Here, we discuss some examples of biological information processing in plants, with special interest in the connection to formal computational models drawn from theoretical frameworks. (Abstract)
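
To make the entry’s notion of distributed computation concrete, here is a minimal sketch, our own illustration rather than the authors’ model, in which connected cell-like agents reach an organ-level decision by iterated local majority voting against a quorum threshold. All names, wiring, and thresholds are assumptions for demonstration.

```python
import random

def quorum_decision(n_cells=100, neighbors=4, threshold=0.6, steps=50):
    """Each 'cell' holds a binary vote (transition / wait) and repeatedly
    adopts the majority state of a few randomly wired neighbors. The
    organ-level decision emerges with no central controller."""
    state = [random.random() < 0.55 for _ in range(n_cells)]   # noisy input
    wiring = [random.sample(range(n_cells), neighbors) for _ in range(n_cells)]
    for _ in range(steps):
        state = [sum(state[j] for j in wiring[i]) * 2 > neighbors
                 for i in range(n_cells)]
    return sum(state) / n_cells >= threshold                   # quorum reached?

print("organ transitions:", quorum_decision())
```

No single cell holds the answer; the commitment arises from repeated local exchanges, which is the sense in which the authors speak of cells as connected computing agents.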

Fernando, Chrisantha. New Research Program: Evolutionary Neurodynamics. www.simons.berkeley.edu/workshops/abstracts/326. The Queen Mary University of London neuroscientist describes the frontiers of brain and cognitive science.

I will give a broad overview of the research program that Prof. Eors Szathmary (Parmenides Foundation, Munich) and I have been carrying out since 2008 on Evolutionary Neurodynamics. Since 2013 this has been a FP-7 FET OPEN Project in collaboration with Luc Steels (UVB), Dario Floreano (EPFL), and Phil Husbands (Sussex). The hypothesis we explore is that some kind of natural selection algorithm is implemented in the brain, with entities that undergo multiplication, with variation and heredity. We have reason to believe that language learning is an evolutionary process occurring during development, in which populations of constructions compete for communicative success. We have reason to believe that during human problem solving, multiple solutions are entertained, recombined, and mutated in the brain. We have reason to believe that evolutionary methods provide a powerful ensemble approach to combine populations of decomposed and segmented predictive models of the world, policies, and value functions.

Fernando, Chrisantha, et al. Selectionist and Evolutionary Approaches to Brain Function. Frontiers in Computational Neuroscience. 6/Art. 24, 2012. With Eors Szathmary and Phil Husbands, another contribution that articulates the deep affinity of neural activities with life’s long iterative development. As Richard Watson, Hava Siegelmann, John Mayfield, Steven Frank, and an increasing number contend, this achieves a 21st century appreciation of how “natural selection” actually applies. While a winnowing optimization toward “good enough to survive” goes on, the discovery of dynamic, learning-like algorithms can now provide a prior genetic-like guidance.

We consider approaches to brain dynamics and function that have been claimed to be Darwinian. These include Edelman’s theory of neuronal group selection, Changeux’s theory of synaptic selection and selective stabilization of pre-representations, Seung’s Darwinian synapse, Loewenstein’s synaptic melioration, Adam’s selfish synapse, and Calvin’s replicating activity patterns. Except for the last two, the proposed mechanisms are selectionist but not truly Darwinian, because no replicators with information transfer to copies and hereditary variation can be identified in them. Bayesian models and reinforcement learning are formally in agreement with selection dynamics. A classification of search algorithms is shown to include Darwinian replicators (evolutionary units with multiplication, heredity, and variability) as the most powerful mechanism for search in a sparsely occupied search space. Finally, we review our recent attempts to construct and analyze simple models of true Darwinian evolutionary units in the brain in terms of connectivity and activity copying of neuronal groups. (Abstract)

The production of functional molecules is critical for life and also for an increasing proportion of industry. It is also important that genes represent what in cognitive science has been called a “physical symbol system.” Today, the genetic code is an arguably symbolic mapping between nucleotide triplets and amino acids. Moreover, enzymes “know” how to transform a substrate into a product, much like a linguistic rule “knows” how to act on some linguistic constructions to produce others. How can such functionality arise? Combinatorial chemistry is one of the possible approaches. The aim is to generate-and-test a complete library of molecules up to a certain length. (9)

In summary we have distinguished between selectionist and truly Darwinian theories, and have proposed a truly Darwinian theory of Darwinian Neurodynamics. The suggestion that true Darwinian evolution can happen in the brain during, say, complex thinking, or the development of language in children, is ultimately an empirical issue. Three outcomes are possible: (i) nothing beyond the synapse level undergoes Darwinian evolution in the brain; (ii) units of evolution will be identified that are very different from our “toy model” suggestions in this paper (and elsewhere); and (iii) some of the units correspond, with more complex details, to our suggested neuronal replicators. (17)

Fernando, Chrisantha, et al. The Neuronal Replicator Hypothesis. Neural Computation. 22/2809, 2010. As this decadal review expresses, many cross-fertilizations between topical natural and social fields are underway in the 21st century by our collaborative humankind. Here neuroscientists Fernando, Richard Goldstein and Eors Szathmary propose the presence of evolutionary algorithms in cerebral functions which search, improve, and select as we learn and think. See also Evolvable Neuronal Paths: A Novel Basis for Information and Search in the Brain by CF, et al, in PLoS One (6/8, 2011).

We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cognitive architectures. Replicators overcome some limitations of selectionist models of neuronal search. (Abstract)
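
As a hedged sketch of the hypothesis, not the authors’ neurophysiological model, one can let binary activity patterns be copied with occasional mutation while copies closer to a favored pattern are preferentially retained. The target pattern, population size, and tournament rule below are illustrative assumptions.

```python
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]            # pattern favored by 'selection'

def fitness(pattern):                          # similarity to the target
    return sum(a == b for a, b in zip(pattern, TARGET))

def copy_with_mutation(pattern, rate=0.05):    # noisy activity copying
    return [bit ^ (random.random() < rate) for bit in pattern]

population = [[random.randint(0, 1) for _ in range(8)] for _ in range(20)]
for generation in range(200):
    parent = max(random.sample(population, 3), key=fitness)   # tournament
    population[random.randrange(len(population))] = copy_with_mutation(parent)

print(max(map(fitness, population)), "of", len(TARGET), "bits matched")
```

The three Darwinian requirements named in the abstract, multiplication, heredity, and variability, appear here respectively as copying, retained pattern bits, and mutation.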

Hasson, Uri, et al. Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks. Neuron. 105/3, 2020. In another example of cerebral cognition methods being readily applied everywhere, Princeton University neuroscientists point out how brain-based topologies and operations can have analytic utility in many other areas. Section headings include Interpolation and Extrapolation, Generalization Based on Partial and Big Data, and The Power of Adaptive Fit in Evolution. As the quotes allude, parallels can then be drawn between human cerebration and life’s neo-Darwinian course, whereby organisms, via sensory apparatus and activities, must find a good enough way to survive and evolve. See also A Critique of Pure Learning and What Artificial Neural Networks can Learn from Animal Brains by Anthony Zador in Nature Communications (10/3770, 2019).

Evolution is a blind fitting process by which organisms become adapted to their environment. Does the brain use similar brute-force fitting processes to learn how to perceive and act upon the world? Recent advances in artificial neural networks have exposed the power of optimizing millions of synaptic weights over millions of observations to operate robustly in real-world contexts. These models do not learn simple, human-interpretable rules or representations of the world; rather, they use local computations to interpolate over task-relevant manifolds in a high-dimensional parameter space. Similar to evolutionary processes, over-parameterized models can be simple and parsimonious, as they provide a versatile, robust solution for learning a diverse set of functions. This new family of direct-fit models are a radical challenge to many of the theoretical assumptions in psychology and neuroscience. (Abstract)

Evolution Is an Iterative Optimization Process over Many Generations: Evolution by natural selection is a mindless optimization process by which organisms are adapted over many generations according to environmental constraints. This artistic rendition of the phylogenetic tree highlights how all living organisms on Earth can be traced back to the same ancestral organisms. Humans and other mammals descend from shrew-like mammals that lived 150 million years ago; mammals, birds, reptiles, amphibians, and fish share a common ancestor; and all plants and animals derive from bacteria-like microorganisms that originated more than 3 billion years ago. (Figure 3, evogeneao.com)
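
A minimal sketch of “direct fit,” assuming a random-feature stand-in for a neural network: a model with far more weights than observations interpolates noisy data without extracting any human-readable rule. The target function, feature count, and frequency scale are our assumptions, not the paper’s experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=40)                    # 40 noisy observations
y = np.sin(3 * x) + 0.1 * rng.normal(size=40)      # the 'world' being fit

n_features = 2000                                  # vastly over-parameterized
w = 3.0 * rng.normal(size=n_features)              # random frequencies
b = rng.uniform(0, np.pi, n_features)              # random phases
phi = lambda t: np.cos(np.outer(t, w) + b)         # fixed random features

beta, *_ = np.linalg.lstsq(phi(x), y, rcond=None)  # minimum-norm direct fit
x_test = np.linspace(-0.9, 0.9, 5)                 # interpolation regime
print(np.round(phi(x_test) @ beta - np.sin(3 * x_test), 2))  # small residuals
```

The 2,000 weights in beta say nothing interpretable about sine waves, yet the fit generalizes within the sampled range, which is the paper’s point about interpolating over task-relevant manifolds rather than learning rules.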

Hochberg, Michael, et al. Innovation: An Emerging Focus from Cells to Societies. Philosophical Transactions of the Royal Society B. 372/1736, 2017. Hochberg, University of Montpellier, Pablo Marquet, Santa Fe Institute, Robert Boyd, Arizona State University, and Andreas Wagner, University of Zurich introduce a focus issue with this title. We place its 16 papers by leading researchers and theorists in this section because they attempt to specify an important tendency of life’s developmental evolution to seek behavioral, communal, and artificial novelties for survival and thrival. With a notice of the major transitions scale, an intensifying cumulative culture and intelligent knowledge can be traced from microbe colonies to civilizations. A. Wagner has been an advocate of this view (search), which he reviews in Information Theory, Evolutionary Innovations and Evolvability. Douglas Erwin follows with The Topology of Evolutionary Novelty. Other entries include The Origin of Heredity in Protocells, Nascent Life Cycles and the Emergence of Higher-Level Individuality, and Innovation and the Growth of Human Population.

This insight into life’s personal and communal cleverness, as the episodic tandem of complexity and sentience arises to our global retrospect, alludes to a quickening individuality. Although not noted, a companion effort is the Open-Ended Creativity school of Wolfgang Banzhaf, Hector Zenil, Sara Walker, Ricard Sole and company, search each. While the ratio of male to female authors remains 10 to 1, a luminous entry is Innovation and Social Transmission in Experimental Micro-Societies: Exploring the Scope of Cumulative Culture in Young Children by Nicola McGuigan, Emily Burdet, Vanessa Burgess, Lewis Dean, Amanda Lucas, Gillian Vale, and Andrew Whiten. An Abstract for this iconic microcosm is the third quote.

Innovations are generally unexpected, often spectacular changes in phenotypes and ecological functions. The contributions to this theme issue are the latest conceptual, theoretical and experimental developments, addressing how ecology, environment, ontogeny and evolution are central to understanding the complexity of the processes underlying innovations. Here, we set the stage by introducing and defining key terms relating to innovation and discuss their relevance to biological, cultural and technological change. Discovering how the generation and transmission of novel biological information, environmental interactions and selective evolutionary processes contribute to innovation as an ecosystem will shed light on how the dominant features across life come to be, generalize to social, cultural and technological evolution, and have applications in the health sciences and sustainability. (Main Abstract)

How difficult is it to ‘discover’ an evolutionary adaptation or innovation? I here suggest that information theory, in combination with high-throughput DNA sequencing, can help answer this question by quantifying a new phenotype's information content. I apply this framework to compute the phenotypic information associated with novel gene regulation and with the ability to use novel carbon sources. The framework can also help quantify how DNA duplications affect evolvability, estimate the complexity of phenotypes and clarify the meaning of ‘progress’ in Darwinian evolution. (Wagner Abstract)
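
Wagner’s measure invites a back-of-envelope computation: the information content of a phenotype is -log2 of the fraction of genotypes that produce it, so rarer phenotypes carry more bits. The toy genotype-phenotype map below is a stand-in assumption, not his regulatory-circuit or carbon-use data.

```python
from itertools import product
from math import log2

def phenotype(genotype):                  # toy map: parity of each half
    half = len(genotype) // 2
    return (sum(genotype[:half]) % 2, sum(genotype[half:]) % 2)

genotypes = list(product([0, 1], repeat=10))          # all 1024 genotypes
target = (1, 0)
fraction = sum(phenotype(g) == target for g in genotypes) / len(genotypes)
print(f"I(P) = {-log2(fraction):.2f} bits")           # here: 2.00 bits
```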

The experimental study of cumulative culture and the innovations essential to it is a young science, with child studies so rare that the scope of cumulative cultural capacities in childhood remains largely unknown. Here we report a new experimental approach to the inherent complexity of these phenomena. Groups of 3–4-year-old children were presented with an elaborate array of challenges affording the potential cumulative development of a variety of techniques to gain increasingly attractive rewards. We found evidence for elementary forms of cumulative cultural progress, with inventions of solutions at lower levels spreading to become shared innovations, and some children then building on these to create more advanced but more rewarding innovations. This contrasted with markedly more constrained progress when children worked only by themselves, or if groups faced only the highest-level challenges from the start. Our results show children are not merely ‘cultural sponges’, but when acting in groups, display the beginnings of cycles of innovation and observational learning that sustain cumulative progress in problem solving. (McGuigan Abstract)

Khajehabdollahi, Sina and Olaf Witkowski. Critical Learning vs. Evolution. Ikegami, Takashi, et al, eds. ALIFE 2018 Conference Proceedings. Cambridge: MIT Press, 2018. A select paper from this online volume (Ikegami) by University of Western Ontario and Earth-Life Science Institute, Tokyo biophysicists who seek better insights into life’s quickening sentience by way of inherent complexity principles. By current turns, these dynamic phenomena appear to be increasingly cerebral in kind and function. In an extension of this view, just as brains are found to prefer and reside in a critically poised optimum state, so it seems does evolutionary developmental emergence.

Criticality is thought to be crucial for complex systems to adapt, at the boundary between regimes with different dynamics, where the system may transition from one phase to another. Numerous systems, from sandpiles to gene regulatory networks, to swarms and human brains, seem to work towards preserving a precarious balance right at their critical point. Understanding criticality therefore seems strongly related to a broad, fundamental theory for the physics of life as it could be, which still lacks a clear description of how it can arise and maintain itself in complex systems. (Abstract excerpt)

Understanding the utility of criticality in artificial life systems is important for understanding how complexity can self-organize into predictable but adaptive systems. This project applied the methods of critical learning to a community of Ising-embodied organisms subject to evolutionary selection pressures in order to understand how criticality affects the behavior and genotypes of the organisms and how these changes in turn affect the fitness and adaptability of the community. (53)
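
For readers who want to see the precarious balance numerically, a compact textbook demonstration follows, not the authors’ Ising-embodied organisms: a 2D Ising model under Metropolis dynamics, whose magnetization fluctuations (susceptibility) peak near the critical temperature Tc ≈ 2.269 in these units. Lattice size and sweep counts are illustrative.

```python
import math
import random

def susceptibility(T, L=16, sweeps=400, burn_in=200):
    """Estimate the magnetic susceptibility of a 2D Ising lattice at
    temperature T via single-spin Metropolis updates."""
    s = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    mags = []
    for sweep in range(sweeps):
        for _ in range(L * L):
            i, j = random.randrange(L), random.randrange(L)
            nb = (s[(i+1) % L][j] + s[(i-1) % L][j]
                  + s[i][(j+1) % L] + s[i][(j-1) % L])
            dE = 2 * s[i][j] * nb
            if dE <= 0 or random.random() < math.exp(-dE / T):
                s[i][j] *= -1
        if sweep >= burn_in:
            mags.append(abs(sum(map(sum, s))) / (L * L))
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    return var * L * L / T

for T in (1.5, 2.27, 3.5):                 # below, near, above criticality
    print(f"T = {T}: chi ~ {susceptibility(T):.2f}")
```

The fluctuation peak at the middle temperature is the quantitative signature of the “precarious balance right at their critical point” that the abstract describes.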

Olaf Witkowski’s research tackles distributed intelligence in living systems and societies, employing the tools of artificial life, connectionist learning, and information theory, to reach a better understanding of the following triptych of complex phenomena: the emergence of information flows that led to the origins of life, the evolution of intelligence in the major evolutionary transitions, the expansion of communication and cooperation in the future of the bio- and technosphere. (OW website)

Kounios, Loizos, et al. How Evolution Learns to Improve Evolvability on Rugged Fitness Landscapes. arXiv:1612.05955. A contribution to the increasingly explanatory synthesis of cognitive learning theories with life’s developmental emergence. Coauthors include Richard Watson, Gunter Wagner, Jeff Clune and Mihaela Pavlicev.

Kouvaris, Kostas, et al. How Evolution Learns to Generalise. arXiv:1508.06854. A University of Southampton group including Richard Watson, along with Jeff Clune, University of Wyoming, continues the innovative realization that much more than random mutation and selection must be at work for life to become increasingly aware, smart, and cognizant. By this novel insight, a progression of self-organizing neural, connectionist networks is seen to engender a quickening emergence. Due to a “deep homology,” creatures compare and assimilate new experience with a prior corpus of representations, similar to a human brain. A parallel chart of Learning and Evolutionary Theory matches up well, so as to reveal a genesis synthesis of universal gestation. A beneficial inference would be the further advent of a worldwise sapiensphere coming to her/his own bicameral knowledge, which indeed is the premise of this website.

One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments which is crucial for evolvability. Recent work showed that when selective environments vary in a systematic manner, it is possible that development can constrain the phenotypic space in regions that are evolutionarily more advantageous. Yet, the underlying mechanism that enables the spontaneous emergence of such adaptive developmental constraints is poorly understood. How can natural selection, given its myopic and conservative nature, favour developmental organisations that facilitate adaptive evolution in future previously unseen environments? Such capacity suggests a form of foresight facilitated by the ability of evolution to accumulate and exploit information not only about the particular phenotypes selected in the past, but regularities in the environment that are also relevant to future environments.

Here we argue that the ability of evolution to discover such regularities is analogous to the ability of learning systems to generalise from past experience. Conversely, the canalisation of evolved developmental processes to past selective environments and failure of natural selection to enhance evolvability in future selective environments is directly analogous to the problem of over-fitting and failure to generalise in machine learning. We show that this analogy arises from an underlying mechanistic equivalence by showing that conditions corresponding to those that alleviate over-fitting in machine learning enhance the evolution of generalised developmental organisations under natural selection. This equivalence provides access to a well-developed theoretical framework that enables us to characterise the conditions where natural selection will find general rather than particular solutions to environmental conditions. (Abstract)

To achieve this we follow previous work on the evolution of development through computer simulations of the evolution of phenotypic correlations based in gene-regulatory network (GRN) models. Such GRN models bear many resemblances to artificial neural networks in machine learning regarding their functionality and structure. Watson et al. demonstrated though that the way regulatory interactions evolve under natural selection is, in fact, equivalent to the way neural networks learn. Accordingly, a GRN evolves a memory of its past selective environments by internalising their statistical correlation structure into its ontogenetic interactions, in the same way that learning neural networks store and recall training patterns. (2)
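
The claimed equivalence can be caricatured in a few lines: Hebbian updates stand in for selection shaping regulatory interactions, past environments stand in for training patterns, and a small decay on connections, our substitute for the paper’s parsimony pressure, supplies the regularization that curbs over-fitting. Everything below is an illustrative assumption, not the authors’ GRN simulations.

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = np.array([[1, 1, -1, -1],        # past selective environments
                     [-1, -1, 1, 1]])

W = np.zeros((4, 4))                        # regulatory interaction matrix
for _ in range(200):
    p = patterns[rng.integers(len(patterns))]
    W += 0.01 * np.outer(p, p)              # Hebbian step ~ selection
    W *= 0.999                              # connection cost ~ regularization

probe = np.sign(W @ np.array([1, 1, -1, 1]))  # corrupted, unseen environment
print(probe)                                  # -> [ 1.  1. -1. -1.]
```

Because the matrix has internalized the correlation structure of past environments rather than memorizing them one by one, “development” completes the corrupted probe to the learned regularity, a small-scale analogue of generalising rather than over-fitting.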

Lehman, Joel, et al. The Surprising Creativity of Digital Evolution. arXiv:1803.03453. As its subtitle, A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, cites, some 50 coauthors who have engaged this project, such as Chris Adami, Peter Bentley, Stephanie Forrest, Laurent Keller, Carole Knibbe, Richard Lenski, Hod Lipson, Robert Pennock, Thomas Ray, and Richard Watson, offer their personal takes. One could then observe that parallel versions of life’s long emergence now exist side by side – an older 19th and 20th century view of random natural selection, and this 21st century model with some manner of generative source program previously at work. While the old dichotomy of chance and/or law remains in abeyance, the entry scopes out an algorithmic, self-organizing, quickening genesis synthesis in ascendance. As a general conclusion, rather than aimless accident, a temporal, oriented course of open procreativity is traced going forward.

Biological evolution provides a creative fount of complex and subtle adaptations. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, its creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. We also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may be a universal property of all complex evolving systems. (Abstract excerpt)

Lin, Henry and Max Tegmark. Why does Deep and Cheap Learning Work so Well? arXiv:1608.08225. The Harvard and MIT polymaths review the recent successes of these neural net, multiscale, algorithmic operations (definitions vary) from a statistical physics context such as renormalization groups and symmetric topologies. The authors collaborated with Tomaso Poggio of the MIT Center for Brains, Minds, and Machines (Google), and others, in their study, which could be seen to imply a self-educating genesis cosmos that is trying to decipher, describe, recognize and affirm itself.

We show how the success of deep learning depends not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can be approximated through "cheap learning" with exponentially fewer parameters than generic ones, because they have simplifying properties tracing back to the laws of physics. The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to renormalization group procedures. (Abstract)

Throughout this paper, we will adopt a physics perspective on the problem, to prevent application-specific details from obscuring simple general results related to dynamics, symmetries, renormalization, etc., and to exploit useful similarities between deep learning and statistical mechanics. (1)
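
In the paper’s spirit, though with our own toy construction, the advantage of depth for hierarchical generative processes can be shown by counting: a function composed from one small, reused pairwise module along a binary tree costs n - 1 module applications, while a flat lookup over n binary inputs costs 2^n table entries.

```python
def pairwise(a, b):                  # one cheap, reused computational module
    return (a + b) % 2               # e.g., parity of two bits

def deep_eval(xs):                   # binary tree of depth log2(n)
    while len(xs) > 1:
        xs = [pairwise(xs[i], xs[i + 1]) for i in range(0, len(xs), 2)]
    return xs[0]

n = 16
print("deep cost:", n - 1, "module calls")       # 15
print("flat cost:", 2 ** n, "table entries")     # 65536
print("check    :", deep_eval([1] * n))          # parity of sixteen 1s -> 0
```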
