Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

V. Systems Evolution: A 21st Century Genesis Synthesis

D. An Evolutionary Intelligence Arises

    The image is the cover of the March 26, 2016 issue of New Scientist, wherein features editor Kate Douglas reports on a novel approach to evolutionary theory, which we have noted for some time, whereby life seems to develop much akin to a brain as it learns relative knowledge. The concept arose variously in Richard Watson’s computer and living systems group at the University of Southampton and has gained advocates such as Eors Szathmary, Gunter Wagner, and others. In this regard, these cerebral qualities are broadly similar to algorithmic operations as they emerge and proceed. While some randomness is alluded to, a temporal procession is traced by virtue of referring to and building upon prior, remembered experience. The insightful perception suggests, within a systems evolution, that aware cognitive features in some way precede and define a central, quickening ascent.

 
     

2020: Since 2015 we have reported a growing sense that as life evolves it does so with an emphasis on cerebral and cognitive qualities more so than somatic anatomy. The present message is that a developmental gestation seems to know what it is doing and where it is going by becoming smarter, better informed, and communally coherent, finally reaching our collaborative, self-confirming recognition.

Chastain, Erick, et al. Algorithms, Games, and Evolution. Proceedings of the National Academy of Sciences. 111/10620, 2014.

de Vladar, Harold and Eors Szathmary. Neuronal Boost to Evolutionary Dynamics. Interface Focus. 5/6, 2015.

Hasson, Uri, et al. Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks. Neuron. 105/3, 2020.

Hochberg, Michael, et al. Innovation: An Emerging Focus from Cells to Societies. Philosophical Transactions of the Royal Society B. 372/1736, 2017.

Kouvaris, Kostas, et al. How Evolution Learns to Generalize. arXiv:1508.06854.

Oudeyer, Pierre-Yves and Linda Smith. How Evolution May Work Through Curiosity-Driven Developmental Process. Topics in Cognitive Science. 8/2, 2016.

Watson, Richard and Eors Szathmary. How Can Evolution Learn? Trends in Ecology & Evolution. 31/2, 2016.

Watson, Richard, et al. Evolutionary Connectionism. Evolutionary Biology. Online December, 2015.

Evolutionary Biology and the Theory of Computing. simons.berkeley.edu/programs/evolution2014. Reported more in Natural Algorithms, this is a spring 2014 program at the Simons Institute for the Theory of Computing, UC Berkeley, whose organizers included Christos Papadimitriou and Leslie Valiant. Seminars included an Evolutionary Biology Boot Camp (January tutorial) with Eugene Koonin; Computational Theories of Evolution, organized by Nick Barton, with speakers such as Richard Watson, Chrisantha Fernando, Adi Livnat, and Steven Frank; and New Directions in Probabilistic Models of Evolution.

Azulay, Aharon, et al. The C. elegans Connectome Consists of Homogenous Circuits with Defined Functional Roles. PLoS Computational Biology. Online September, 2016. At the frontier of integral syntheses across cerebral, social, and cognitive evolution, Hebrew University, Jerusalem, geneticists and neuroscientists distill from neural network complexities a common, independent principle which is at work from Internet media and human brains to this multicellular roundworm. To reflect and appreciate, a designated “connectome” can be seen in ramifying effect across life’s episodic emergence. See also in this section The Multilayer Connectome of Caenorhabditis elegans by Bentley, et al for a similar study.

A major goal of systems neuroscience is to decipher the structure-function relationship in neural networks. Here we study network functionality in light of the common-neighbor-rule (CNR) in which a pair of neurons is more likely to be connected the more common neighbors it shares. Focusing on the fully-mapped neural network of C. elegans worms, we establish that the CNR is an emerging property in this connectome. Moreover, sets of common neighbors form homogenous structures that appear in defined layers of the network. Simulations of signal propagation reveal their potential functional roles: signal amplification and short-term memory at the sensory/inter-neuron layer, and synchronized activity at the motoneuron layer supporting coordinated movement. A coarse-grained view of the neural network based on homogenous connected sets alone reveals a simple modular network architecture that is intuitive to understand. These findings provide a novel framework for analyzing larger, more complex, connectomes once these become available. (Abstract)

While the emergence of the CNR in social networks might be intuitive to understand, the benefit of such a design in neural networks is not trivially apparent. In the rat cortex the observed CNR is thought to be an organizing principle that clusters neurons into elementary building blocks of cortical computations and memory. Our study provides several novel insights to this phenomenon: we established that the CNR is indeed an emerging organizing principle in the C. elegans neural network, and that sets of common neighbor neurons can be viewed as building blocks found in defined layers of the network exerting valuable functional roles (e.g., signal amplification, synchronization, and robust information processing). These novel findings may explain the emergence of the CNR in mammalian neural networks as well. For example, signal amplification and robust information processing are essential for efficient cortical computations. Thus, it will be fascinating to study these cortical building blocks in light of the observed CNR once such connectomes become available. (10)
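
As a concrete aside, the common-neighbor-rule is simple to state in code. The following Python sketch scores neuron pairs by shared partners on a toy graph; the names and wiring here are our own illustrative placeholders, not the mapped C. elegans connectome.

from itertools import combinations

# Toy undirected network: neuron -> set of synaptic partners
# (illustrative wiring, not the actual C. elegans data).
adjacency = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "E"},
    "D": {"A", "B"},
    "E": {"C"},
}

def common_neighbors(u, v):
    """Count the partners shared by neurons u and v."""
    return len(adjacency[u] & adjacency[v])

# The CNR: the more common neighbors a pair shares, the more
# likely the two neurons are to be directly connected.
for u, v in combinations(sorted(adjacency), 2):
    linked = v in adjacency[u]
    print(f"{u}-{v}: shared={common_neighbors(u, v)}, connected={linked}")

On this toy graph the pair with the most shared neighbors (A-B) is indeed connected, the pattern the authors establish statistically across the worm’s real network.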

Bentley, Barry, et al. The Multilayer Connectome of Caenorhabditis elegans. arXiv:1608.08793. A six-person team from the MRC Laboratory of Molecular Biology, Cambridge, UK and Cambridge University, including neuroscientist Edward Bullmore, explores some ways in which neural networks, as a prototypical nonlinear system, can equally apply to the rudimentary sensory circuit of this roundworm phylum. And these many 2016 correlations across natural scales seem to find a generic dynamic complexity in the guise of a cerebral icon with an “omics” suffix, which is in effect from a quickening uniVerse to our human epitome.

Connectomics represents an effort to map brain structure at the level of individual neurons and their synaptic connections. However, neural circuits also depend on other types of interneuronal signalling, such as extrasynaptic modulation by monoamines and peptides. Here we present a draft monoamine connectome, along with a partial neuropeptide connectome, for the nematode C. elegans, based on new and published expression data for biosynthetic genes and receptors. We describe the structural properties of these "wireless" networks, including their topological features and modes of interaction with the wired synaptic and gap-junction connectomes. This multilayer connectome of C. elegans can serve as a prototype for understanding the multiplex networks comprising larger nervous systems, including the human brain. (Summary)
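
By way of a simple picture, a multilayer connectome can be held in code as one set of neurons joined by several edge sets. This minimal Python sketch, with invented names and links, mirrors the paper’s wired (synapse, gap junction) versus wireless (monoamine, neuropeptide) division and measures how two layers overlap.

# The same nodes linked through different interaction layers.
# All neurons and edges below are illustrative placeholders.
layers = {
    "synaptic":     {("N1", "N2"), ("N2", "N3")},
    "gap_junction": {("N1", "N3")},
    "monoamine":    {("N1", "N2"), ("N3", "N4")},
    "neuropeptide": {("N2", "N4")},
}

def layer_overlap(a, b):
    """Fraction of layer-a edges that also occur in layer b."""
    return len(layers[a] & layers[b]) / len(layers[a])

print(layer_overlap("synaptic", "monoamine"))  # 0.5 in this toy case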

Betti, Alessandro and Marco Gori. The Principle of Least Cognitive Action. Theoretical Computer Science. 633/83, 2016. In a special issue on Biologically Inspired Processes in Neural Computation, edited by UM Amherst computer scientist Hava Siegelmann, a University of Pisa physicist and a University of Siena mathematician consider how nature’s evolutionary ascent of cerebral cognizance can be traced to and rooted in an intrinsic physical lawfulness. We even wonder if a new field such as “physioinformatics” (suggest a better term) could be added? See also in this issue The Grammar of Mammalian Brain Capacity, Topological Evolution for Embodied Cellular Automata, and A Circuit Basis for Morphogenesis.

By and large, the interpretation of learning as a computational process taking place in both humans and machines is primarily provided in the framework of statistics. In this paper, we propose a radically different perspective in which the emergence of learning is regarded as the outcome of laws of nature that govern the interactions of intelligent agents with their own environment. We introduce a natural learning theory based on the principle of least cognitive action, which is inspired to the related mechanical principle, and to the Hamiltonian framework for modeling the motion of particles. The introduction of the kinetic and of the potential energy leads to a surprisingly natural interpretation of learning as a dissipative process. The kinetic energy reflects the temporal variation of the synaptic connections, while the potential energy is a penalty that describes the degree of satisfaction of the environmental constraints. The theory gives a picture of learning in terms of the energy balancing mechanisms, where the novel notions of boundary and bartering energies are introduced. (Abstract)

In this paper, we investigate the emergence of learning as the outcome of laws of nature that govern the interactions of intelligent agents with their own environment, regardless of their nature. The underlying principle is that the acquisition of cognitive skills by learning obeys information-based laws on these interactions, which hold regardless of biology. In this new perspective, in particular, we introduce a natural learning theory aimed at discovering the fundamental temporally-embedded mechanisms of environmental interaction. (83-84) More than looking for smart algorithmic solutions to deal with temporal coherence, in this paper we rely on the idea of regarding learning as a process which is deeply interwound with time, just like in most laws of nature. (84)
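
In rough outline, and only as our schematic reading of the abstract (the paper’s full treatment adds the dissipative, boundary, and bartering terms noted above), the idea can be written as a mechanics-style action over the synaptic weights w(t):

\[
\mathcal{A}[w] \;=\; \int_{t_0}^{t_1} \Big( \tfrac{1}{2}\,\lVert \dot{w}(t) \rVert^{2} \;-\; V\big(w(t),\,t\big) \Big)\, dt,
\qquad
\delta \mathcal{A} = 0 \;\Longrightarrow\; \ddot{w}(t) = -\,\nabla_{w} V\big(w(t),\,t\big).
\]

Here the kinetic term charges for rapid synaptic change while the potential V penalizes unmet environmental constraints, so stationary trajectories portray learning as motion through a constraint landscape.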

Brun-Usan, Miguel, et al. How to Fit In: The Learning Principles of Cell Differentiation. PLoS Computational Biology. April, 2020. University of Southampton, UK, computer scientists including Richard Watson continue their revisionary studies of biological metabolisms by viewing them through a learning lens. A cerebral perspective, as this section reports, can provide better insights into cellular processes because both evolution and learning are explorations in search of solutions. A further step is to integrate this view with gene regulatory networks so these common models can reinforce each other. Altogether this approach implies that life’s oriented emergence is trying to achieve some manner of its own self-description and comprehension.

Cell differentiation in multicellular organisms requires cells to respond to complex combinations of extracellular cues, such as morphogen concentrations. But a general theory describing how cells integrate multi-dimensional signals is still lacking. In this work, we propose a framework from learning theory to understand the relationships between environmental cues (inputs) and phenotypic responses (outputs) underlying cell plasticity. Altogether, these results illustrate the functional parallelisms between learning in neural networks and the action of natural selection on environmentally sensitive gene regulatory networks. This offers a theoretical framework for responses that integrate information from multiple cues, a phenomenon that underpins the evolution of multicellularity and developmental robustness. (Abstract excerpt)
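
The learning-theory framing invites a toy rendering. The perceptron sketch below, which is our analogy and not the paper’s model, lets a “cell” integrate two morphogen cues into a fate decision while an error-correcting update stands in for selection tuning a gene regulatory network; all cue values and targets are invented.

# Cue pairs (morphogen A, morphogen B) and the fate each should trigger.
cues = [((0.9, 0.1), 1), ((0.8, 0.3), 1),   # high cue A -> fate 1
        ((0.2, 0.7), 0), ((0.1, 0.9), 0)]   # high cue B -> fate 0

w = [0.0, 0.0]
bias, rate = 0.0, 0.1
for _ in range(50):                          # rounds of "selection"
    for (a, b), target in cues:
        out = 1 if w[0]*a + w[1]*b + bias > 0 else 0
        err = target - out
        w[0] += rate * err * a               # strengthen useful sensitivities
        w[1] += rate * err * b
        bias += rate * err

print(w, bias)  # the weights come to encode the cue-to-fate mapping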

Cabessa, Jeremie and Hava Siegelmann. The Computational Power of Interactive Recurrent Neural Networks. Neural Computation. 24/4, 2012. Reviewed more in Universal Darwinism, University of Massachusetts, Amherst, computational neuroscientists take these cerebral complexities to exemplify how nature evolves, develops and learns. We are then invited to realize that the same dynamical trial and error, feedback to move forward, iterative process is in effect everywhere. See also Turing on Super-Turing and Adaptivity by Hava Siegelmann in Progress in Biophysics and Molecular Biology (113/117, 2013).

Chastain, Erick, et al. Algorithms, Games, and Evolution. Proceedings of the National Academy of Sciences. 111/10620, 2014. This entry is cited in the Kate Douglas report that headlines the section. Reported more in Natural Algorithms, with Adi Livnat, Christos Papadimitriou, and Umesh Vazirani, theoretical biologists pose a working comparison between natural selection seen as a search, iterate, and optimize process, and a machine learning procedure called the “multiplicative weights update algorithm” (MWUA). Both modes involve explorations and samplings of diverse populations, subject to trials, errors, and retests, so as to reach a “good enough” state or solution. A Commentary in the same issue, Diverse Forms of Selection in Evolution and Computer Science by Nicholas Barton, et al, supports the finding.
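
Since MWUA itself is compact, a minimal Python sketch may help; the experts, payoff values, and epsilon below are arbitrary illustrative choices, not drawn from the paper.

import random

experts = ["x", "y", "z"]
weights = {e: 1.0 for e in experts}
epsilon = 0.1

def payoff(expert):
    # Stand-in for a fitness/gain in [0, 1]; "y" is best on average.
    return random.random() * (0.9 if expert == "y" else 0.5)

for _ in range(200):
    for e in experts:
        # Core MWUA step: reweight each option by its observed payoff,
        # as a genotype's frequency grows with its relative fitness.
        weights[e] *= 1.0 + epsilon * payoff(e)

total = sum(weights.values())
print({e: round(v / total, 3) for e, v in weights.items()})  # "y" dominates

The parallel the authors draw is that gene frequencies under selection update in just this multiplicative, payoff-weighted manner.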

Churchill, Alexander, et al. Learning to Generate Genotypes with Neural Networks. arXiv:1604.04153. Reviewed more in Natural Algorithms, Queen Mary University of London researchers enter another way to understand how genomes and connectomes (neuromes) are basically similar in form and function.

Csermely, Peter, et al. Learning of Signaling Networks. arXiv:2001.11679. We cite this paper by five Semmelweis University, Budapest system scientists as an example of how cerebral facilities can be readily grafted onto, and found evident in, all manner of genetic and metabolic anatomy and physiology, because they naturally spring from and manifest one and the same source. By this perception, life’s long evolutionary development can increasingly appear as an oriented encephalization and cosmic education.

Molecular processes of neuronal learning have been well-described. However, learning mechanisms of non-neuronal cells have not been fully understood. Here, we discuss molecular mechanisms of cellular learning, including conformational memory of intrinsically disordered proteins and prions, signaling cascades, protein translocation, RNAs, and chromatin memory. We hypothesize that these processes constitute the learning of signaling networks and correspond to a generalized Hebbian learning process of single, non-neuronal cells. We then discuss how cellular learning may open novel directions in drug design and inspire new artificial intelligence methods. (Abstract)
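
To make the Hebbian analogy tangible, here is a toy update in which pathway couplings strengthen whenever their endpoints are co-active; the node names, episodes, and learning rate are our own inventions.

# A generalized-Hebbian toy for a non-neuronal signaling network.
nodes = ["receptor", "kinase", "factor"]
strength = {(i, j): 0.1 for i in nodes for j in nodes if i != j}
eta = 0.05

episodes = [{"receptor": 1, "kinase": 1, "factor": 0},   # co-activation
            {"receptor": 1, "kinase": 1, "factor": 1},
            {"receptor": 0, "kinase": 0, "factor": 1}]

for act in episodes * 20:
    for (i, j) in strength:
        # Hebbian rule: links between co-active components grow.
        strength[(i, j)] += eta * act[i] * act[j]

print(round(strength[("receptor", "kinase")], 2))  # the strongest coupling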

de Vladar, Harold and Eors Szathmary. Neuronal Boost to Evolutionary Dynamics. Interface Focus. 5/6, 2015. In an issue on Are There Limits to Evolution?, Parmenides Foundation, Germany, and Eotvos University, Hungary, biologists, who have been collaborators on a novel view of life’s gestation akin to neural networks (search Watson), here view brain development via Darwinian genetic inheritance as a replicative neurodynamics. In this sense, analogous rugged landscapes serve a “fitness climbing and learning” function. A notable precursor cited is Hebb and Darwin by Paul Adams in the Journal of Theoretical Biology (195/419, 1998). A current companion would be Understanding Emergent Dynamics by John Hopfield, the 1980s founder of neural net theory (Neural Computation 27/10, 2015).

Standard evolutionary dynamics is limited by the constraints of the genetic system. A central message of evolutionary neurodynamics is that evolutionary dynamics in the brain can happen in a neuronal niche in real time, despite the fact that neurons do not reproduce. We show that Hebbian learning and structural synaptic plasticity broaden the capacity for informational replication and guided variability provided a neuronally plausible mechanism of replication is in place. The synergy between learning and selection is more efficient than the equivalent search by mutation selection. We also consider asymmetric landscapes and show that the learning weights become correlated with the fitness gradient. That is, the neuronal complexes learn the local properties of the fitness landscape, resulting in the generation of variability directed towards the direction of fitness increase, as if mutations in a genetic pool were drawn such that they would increase reproductive success. (Abstract)

In higher animals, complex and robust behaviors are produced by the microscopic details of large structured ensembles of neurons. I describe how the emergent computational dynamics of a biologically based neural network generates a robust natural solution to the problem of categorizing time-varying stimulus patterns such as spoken words or animal stereotypical behaviors. (Hopfield Abstract) Robust emergent behaviors of large systems (whether physical or biological) are a logical consequence of the behaviors and interactions of the more microscopic elements making up the large system. (2030).
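
The claimed synergy between learning and selection can be felt in a few lines of Python. In this rough sketch, which is ours and not the authors’ model, variation biased along a locally “learned” fitness slope outpaces blind mutation on a one-dimensional landscape; the landscape, step sizes, and step counts are arbitrary.

import random

def fitness(x):
    return -(x - 3.0) ** 2                  # a single peak at x = 3

def search(biased, steps=60):
    x = 0.0
    for _ in range(steps):
        if biased:
            grad = -2.0 * (x - 3.0)         # "learned" local slope
            step = 0.05 * grad + random.gauss(0, 0.05)
        else:
            step = random.gauss(0, 0.05)    # blind mutation
        if fitness(x + step) > fitness(x):  # selection keeps improvements
            x += step
    return fitness(x)

random.seed(1)
print(search(biased=False), search(biased=True))  # biased ends far higher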

Duran-Nebreda, Salva and George Bassel. Plant Behavior in Response to the Environment. Philosophical Transactions of the Royal Society B. 374/20190370, 2019. In a special Liquid Brains, Solid Brains issue (search Forrest), University of Birmingham, UK botanists describe how even floral vegetation can be seen to embody and avail a faculty of cognitive intelligence for their benefit.

Information processing and storage underpins many biological processes of vital importance to organism survival. Like animals, plants also acquire, store and process environmental information relevant to their fitness, and this is particularly evident in their decision-making. The control of plant organ growth and timing of their developmental transitions are carefully orchestrated by the collective action of many connected computing agents, the cells, in what could be addressed as distributed computation. Here, we discuss some examples of biological information processing in plants, with special interest in the connection to formal computational models drawn from theoretical frameworks. (Abstract)
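
A flavor of such distributed computation fits in a few lines. In this toy, assuming only a ring of cells that repeatedly average their neighbors’ readings, the tissue converges on a collective estimate without any central controller; the topology and initial values are illustrative.

# Each "cell" holds a noisy local cue (say, a hormone level).
readings = [0.2, 0.9, 0.4, 0.7, 0.1, 0.6]

for _ in range(50):
    readings = [
        (readings[i - 1] + readings[i] + readings[(i + 1) % len(readings)]) / 3
        for i in range(len(readings))
    ]

print([round(r, 3) for r in readings])  # all cells agree on one estimate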

Fernando, Chrisantha, et al. Selectionist and Evolutionary Approaches to Brain Function. Frontiers in Computational Neuroscience. 6/Art. 24, 2012. With Eors Szathmary and Phil Husbands, another contribution that articulates the deep affinity of neural activities with life’s long iterative development. As Richard Watson, Hava Siegelmann, John Mayfield, Steven Frank, and an increasing number contend, this achieves a 21st century appreciation of how “natural selection” actually applies. While a winnowing optimization toward “good enough to survive” goes on, the discovery of dynamic, learning-like algorithms can now provide a prior genetic-like guidance.

We consider approaches to brain dynamics and function that have been claimed to be Darwinian. These include Edelman’s theory of neuronal group selection, Changeux’s theory of synaptic selection and selective stabilization of pre-representations, Seung’s Darwinian synapse, Loewenstein’s synaptic melioration, Adam’s selfish synapse, and Calvin’s replicating activity patterns. Except for the last two, the proposed mechanisms are selectionist but not truly Darwinian, because no replicators with information transfer to copies and hereditary variation can be identified in them. Bayesian models and reinforcement learning are formally in agreement with selection dynamics. A classification of search algorithms is shown to include Darwinian replicators (evolutionary units with multiplication, heredity, and variability) as the most powerful mechanism for search in a sparsely occupied search space. Finally, we review our recent attempts to construct and analyze simple models of true Darwinian evolutionary units in the brain in terms of connectivity and activity copying of neuronal groups. (Abstract)

The production of functional molecules is critical for life and also for an increasing proportion of industry. It is also important that genes represent what in cognitive science has been called a “physical symbol system.” Today, the genetic code is an arguably symbolic mapping between nucleotide triplets and amino acids. Moreover, enzymes “know” how to transform a substrate into a product, much like a linguistic rule “knows” how to act on some linguistic constructions to produce others. How can such functionality arise? Combinatorial chemistry is one of the possible approaches. The aim is to generate-and-test a complete library of molecules up to a certain length. (9)

In summary we have distinguished between selectionist and truly Darwinian theories, and have proposed a truly Darwinian theory of Darwinian Neurodynamics. The suggestion that true Darwinian evolution can happen in the brain during, say, complex thinking, or the development of language in children, is ultimately an empirical issue. Three possible outcomes are possible: (i) nothing beyond the synapse level undergoes Darwinian evolution in the brain; (ii) units of evolution will be identified that are very different from our “toy model” suggestions in this paper (and elsewhere); and (iii) some of the units correspond, with more complex details, to our suggested neuronal replicators. (17)
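
As a closing aside, the trio of multiplication, heredity, and variability that defines a Darwinian unit takes only a short Python sketch to exercise; the bit-string target, population size, and rates below are arbitrary choices of ours.

import random

TARGET = [1, 1, 1, 1, 1, 1, 1, 1]

def fitness(g):
    return sum(a == b for a, b in zip(g, TARGET))

pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                       # selection
    pop = []
    for _ in range(30):                      # multiplication with heredity
        child = list(random.choice(parents))
        for i in range(len(child)):          # variability (mutation)
            if random.random() < 0.02:
                child[i] ^= 1
        pop.append(child)

print(max(fitness(g) for g in pop))          # approaches a perfect match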
