V. Systems Evolution: A 21st Century Genesis Synthesis
D. An Evolutionary Intelligence Arises
2020: Since 2015 we have reported a growing sense that as life evolves it places a directional emphasis on cerebral and cognitive qualities more than on somatic anatomy. An implication is that developmental gestation seems skewed toward becoming smarter and more informed, whereby one can become a self-aware member of a coherent community.
Evolutionary Biology and the Theory of Computing. simons.berkeley.edu/programs/evolution2014. Reported more in Natural Algorithms, this is a Simons Foundation program in the spring of 2014 whose organizers include Christos Papadimitriou and Leslie Valiant. Various seminars were Evolutionary Biology Boot Camp (January tutorial) with Eugene Koonin; Computational Theories of Evolution, Nick Barton organizer, with speakers such as Richard Watson, Chrisantha Fernando, Adi Livnat, and Steven Frank; and New Directions in Probabilistic Models of Evolution.
Azulay, Aharon, et al. The C. elegans Connectome Consists of Homogenous Circuits with Defined Functional Roles. PLoS Computational Biology. Online September 2016. At the frontier of integral syntheses across cerebral, social, and cognitive evolution, Hebrew University, Jerusalem, geneticists and neuroscientists distill from neural network complexities a common, independent principle at work from Internet media and human brains to this multicellular roundworm. In reflection, a designated “connectome” can be seen ramifying across life’s episodic emergence. See also in this section The Multilayer Connectome of Caenorhabditis elegans by Bentley, et al., for a similar study.
A major goal of systems neuroscience is to decipher the structure-function relationship in neural networks. Here we study network functionality in light of the common-neighbor-rule (CNR) in which a pair of neurons is more likely to be connected the more common neighbors it shares. Focusing on the fully-mapped neural network of C. elegans worms, we establish that the CNR is an emerging property in this connectome. Moreover, sets of common neighbors form homogenous structures that appear in defined layers of the network. Simulations of signal propagation reveal their potential functional roles: signal amplification and short-term memory at the sensory/inter-neuron layer, and synchronized activity at the motoneuron layer supporting coordinated movement. A coarse-grained view of the neural network based on homogenous connected sets alone reveals a simple modular network architecture that is intuitive to understand. These findings provide a novel framework for analyzing larger, more complex, connectomes once these become available. (Abstract)
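The common-neighbor-rule the abstract describes can be illustrated with a toy calculation: group node pairs by how many neighbors they share and compare connection rates. The small graph below is invented for demonstration and is not drawn from the C. elegans connectome.

```python
# Toy illustration of the common-neighbor-rule (CNR): pairs of nodes
# that share more neighbors are more likely to be connected themselves.
# The adjacency below is a made-up undirected graph, not worm data.
from itertools import combinations

adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C", "E"},
    "E": {"D"},
}

def common_neighbors(u, v):
    """Count neighbors shared by u and v."""
    return len(adj[u] & adj[v])

def connected(u, v):
    return v in adj[u] or u in adj[v]

pairs = list(combinations(sorted(adj), 2))
conn = [common_neighbors(u, v) for u, v in pairs if connected(u, v)]
unconn = [common_neighbors(u, v) for u, v in pairs if not connected(u, v)]

# Connected pairs share more neighbors on average, the CNR signature.
mean_connected = sum(conn) / len(conn)
mean_unconnected = sum(unconn) / len(unconn)
print(mean_connected, mean_unconnected)
```

In the paper this comparison is run over the full mapped connectome; here it only shows the shape of the statistic.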
Bentley, Barry, et al. The Multilayer Connectome of Caenorhabditis elegans. arXiv:1608.08793. A six-person team from the MRC Laboratory of Molecular Biology, Cambridge, UK, and Cambridge University, including neuroscientist Edward Bullmore, explores ways in which neural networks, as a prototypical nonlinear system, can equally apply to the rudimentary sensory circuit of this roundworm phylum. And these many 2016 correlations across natural scales seem to find a generic dynamic complexity in the guise of a cerebral icon with an “omics” suffix, which is in effect from a quickening uniVerse to our human epitome.
Connectomics represents an effort to map brain structure at the level of individual neurons and their synaptic connections. However, neural circuits also depend on other types of interneuronal signalling, such as extrasynaptic modulation by monoamines and peptides. Here we present a draft monoamine connectome, along with a partial neuropeptide connectome, for the nematode C. elegans, based on new and published expression data for biosynthetic genes and receptors. We describe the structural properties of these "wireless" networks, including their topological features and modes of interaction with the wired synaptic and gap-junction connectomes. This multilayer connectome of C. elegans can serve as a prototype for understanding the multiplex networks comprising larger nervous systems, including the human brain. (Summary)
Betti, Alessandro and Marco Gori. The Principle of Least Cognitive Action. Theoretical Computer Science. 633/83, 2016. In a special issue on Biologically Inspired Processes in Neural Computation, edited by UM Amherst computer scientist Hava Siegelmann, a University of Pisa physicist and a University of Siena mathematician consider how nature’s evolutionary ascent of cerebral cognizance can be traced to and rooted in an intrinsic physical lawfulness. We even wonder if a new field such as “physioinformatics” (suggest a better term) could be added? See also in this issue The Grammar of Mammalian Brain Capacity, Topological Evolution for Embodied Cellular Automata, and A Circuit Basis for Morphogenesis.
By and large, the interpretation of learning as a computational process taking place in both humans and machines is primarily provided in the framework of statistics. In this paper, we propose a radically different perspective in which the emergence of learning is regarded as the outcome of laws of nature that govern the interactions of intelligent agents with their own environment. We introduce a natural learning theory based on the principle of least cognitive action, which is inspired to the related mechanical principle, and to the Hamiltonian framework for modeling the motion of particles. The introduction of the kinetic and of the potential energy leads to a surprisingly natural interpretation of learning as a dissipative process. The kinetic energy reflects the temporal variation of the synaptic connections, while the potential energy is a penalty that describes the degree of satisfaction of the environmental constraints. The theory gives a picture of learning in terms of the energy balancing mechanisms, where the novel notions of boundary and bartering energies are introduced. (Abstract)
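The mechanical analogy in the abstract can be rendered schematically. The symbols below (weights w, mass-like term m, damping θ, potential V) are illustrative choices meant to convey the idea, not the authors' exact notation.

```latex
% Schematic "cognitive action" over synaptic weights w(t),
% by analogy with classical mechanics:
S[w] \;=\; \int_0^T \Big( \tfrac{1}{2}\, m\, \|\dot{w}(t)\|^2 \;-\; V\big(w(t), t\big) \Big)\, dt

% Stationarity (Euler--Lagrange) gives Newton-like weight dynamics:
m\, \ddot{w} \;+\; \nabla_w V(w, t) \;=\; 0

% A damping term \theta\,\dot{w} makes the process dissipative; in the
% small-mass limit this reduces to plain gradient descent on V:
m\, \ddot{w} \;+\; \theta\, \dot{w} \;+\; \nabla_w V(w, t) \;=\; 0
\quad\xrightarrow{\; m \to 0 \;}\quad
\dot{w} \;=\; -\tfrac{1}{\theta}\, \nabla_w V(w, t)
```

Read this way, the kinetic term penalizes rapid synaptic change while the potential scores how well environmental constraints are satisfied, matching the abstract's energy-balance picture of learning.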
Brun-Usan, Miguel, et al. How to Fit In: The Learning Principles of Cell Differentiation. PLoS Computational Biology.
University of Southampton, UK, computer scientists including Richard Watson continue their revisionary studies of biological metabolisms by viewing them through a learning lens. A cerebral perspective, as this section reports, can provide better insights into cellular processes because both evolution and learning are explorations in search of solutions. A further step is to integrate this view with gene regulatory networks so these common models can reinforce each other. Altogether this approach implies that life’s oriented emergence is trying to achieve some manner of its own self-description and comprehension.
Cell differentiation in multicellular organisms requires cells to respond to complex combinations of extracellular cues, such as morphogen concentrations. But a general theory describing how cells integrate multi-dimensional signals is still lacking. In this work, we propose a framework from learning theory to understand the relationships between environmental cues (inputs) and phenotypic responses (outputs) underlying cell plasticity. Altogether, these results illustrate the functional parallelisms between learning in neural networks and the action of natural selection on environmentally sensitive gene regulatory networks. This offers a theoretical framework for responses that integrate information from multiple cues, a phenomenon that underpins the evolution of multicellularity and developmental robustness. (Abstract excerpt)
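The input-output framing in the abstract can be caricatured as a one-layer perceptron: a cell "reads" morphogen concentrations through weighted regulatory interactions and produces a graded phenotypic response. All weights and concentrations below are invented for illustration; this is not the authors' model.

```python
# A minimal sketch of the learning-theory view of cell differentiation:
# environmental cues (morphogen levels) map through regulatory weights
# to a phenotypic output, exactly as in a single-layer perceptron.
# Numbers are hypothetical.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def response(cues, weights, bias):
    """Map environmental cues to a graded phenotypic output in (0, 1)."""
    return sigmoid(sum(w * c for w, c in zip(weights, cues)) + bias)

# Hypothetical regulatory weights: one activating cue, one repressing cue.
weights, bias = [2.0, -1.5], -0.5
high_signal = response([1.0, 0.1], weights, bias)  # strong activator present
low_signal = response([0.1, 1.0], weights, bias)   # strong repressor present
print(high_signal, low_signal)
```

In the paper's terms, natural selection tuning such weights in a gene regulatory network plays the role that gradient training plays in a neural network.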
Cabessa, Jeremie and Hava Siegelmann. The Computational Power of Interactive Recurrent Neural Networks. Neural Computation. 24/4, 2012. Reviewed more in Universal Darwinism, University of Massachusetts, Amherst, computational neuroscientists take these cerebral complexities to exemplify how nature evolves, develops and learns. We are then invited to realize that the same dynamical trial and error, feedback to move forward, iterative process is in effect everywhere. See also Turing on Super-Turing and Adaptivity by Hava Siegelmann in Progress in Biophysics and Molecular Biology (113/117, 2013).
Chastain, Erick, et al. Algorithms, Games, and Evolution. Proceedings of the National Academy of Sciences. 111/10620, 2014. This entry is cited in the Kate Douglas report that headlines the section. Reported more in Natural Algorithms, with Adi Livnat, Christos Papadimitriou, and Umesh Vazirani, theoretical biologists pose a working comparison between natural selection seen as a search, iterate, and optimize process, and a machine learning procedure called the “multiplicative weight updates algorithm” (MWUA). Both modes involve explorations and samplings of diverse populations, subject to trials, errors, then retests, so as to reach a “good enough” state or solution. A Commentary in the same issue, Diverse Forms of Selection in Evolution and Computer Science by Nicholas Barton, et al, supports the finding.
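The MWUA parallel the authors draw can be sketched in a few lines: each "expert" (read: allele) has its weight multiplied by a factor that grows with its payoff, then the weights are renormalized, formally mirroring how allele frequencies shift in proportion to fitness. The payoff values below are invented.

```python
# Minimal multiplicative weight updates (MWUA) sketch: weights are
# scaled by (1 + eps * payoff) each round and renormalized, the same
# arithmetic that drives allele frequencies under selection.
# Payoffs are hypothetical, chosen only to show convergence.

def mwua_round(weights, payoffs, eps=0.1):
    """One multiplicative update step; payoffs assumed in [-1, 1]."""
    updated = [w * (1.0 + eps * p) for w, p in zip(weights, payoffs)]
    total = sum(updated)
    return [w / total for w in updated]

freqs = [0.25, 0.25, 0.25, 0.25]   # uniform starting "allele" frequencies
payoffs = [0.9, 0.2, -0.3, 0.1]    # invented per-round fitness signals
for _ in range(50):
    freqs = mwua_round(freqs, payoffs)
print(freqs)  # the highest-payoff option comes to dominate
```

The "good enough" flavor of both processes shows up in the slow, multiplicative drift toward better options rather than any single decisive jump.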
Churchill, Alexander, et al. Learning to Generate Genotypes with Neural Networks. arXiv:1604.04153. Reviewed more in Natural Algorithms, Queen Mary University of London researchers enter another way to understand how genomes and connectomes (neuromes) are basically similar in form and function.
Csermely, Peter, et al. Learning of Signaling Networks. arXiv:2001.11679. We cite this paper by five Semmelweis University, Budapest system scientists as an example of how cerebral facilities can be readily grafted onto, and found evident in, all manner of genetic and metabolic anatomy and physiology, because they naturally spring from and manifest one and the same source. By this perception, life’s long evolutionary development can increasingly appear as an oriented encephalization and cosmic education.
Molecular processes of neuronal learning have been well-described. However, learning mechanisms of non-neuronal cells have not been fully understood. Here, we discuss molecular mechanisms of cellular learning, including conformational memory of intrinsically disordered proteins and prions, signaling cascades, protein translocation, RNAs, and chromatin memory. We hypothesize that these processes constitute the learning of signaling networks and correspond to a generalized Hebbian learning process of single, non-neuronal cells. We then discuss how cellular learning may open novel directions in drug design and inspire new artificial intelligence methods. (Abstract)
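The "generalized Hebbian learning" the authors hypothesize for non-neuronal cells can be caricatured with the textbook Hebb rule: a link strengthens in proportion to coincident activity (Δw = η·x·y). The activity patterns below are invented for illustration and do not come from the paper.

```python
# Toy generalized Hebbian update: the coupling between two signaling
# components strengthens whenever they are active together, and is
# untouched when activity is uncorrelated. Data are made up.

def hebb_update(w, x, y, eta=0.1):
    """Strengthen w in proportion to coincident activity x * y."""
    return w + eta * x * y

# Two hypothetical pathway links: one sees correlated activity, one not.
correlated = [(1, 1), (1, 1), (0, 0), (1, 1)]
uncorrelated = [(1, 0), (0, 1), (1, 0), (0, 1)]

w_corr = 0.0
w_unc = 0.0
for x, y in correlated:
    w_corr = hebb_update(w_corr, x, y)
for x, y in uncorrelated:
    w_unc = hebb_update(w_unc, x, y)
print(w_corr, w_unc)  # only the correlated link is reinforced
```

The paper's claim is that conformational memory, signaling cascades, and chromatin marks can realize this same coincidence-driven strengthening without any neurons involved.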
de Vladar, Harold and Eors Szathmary. Neuronal Boost to Evolutionary Dynamics. Interface Focus. 5/6, 2015. In an issue on Are There Limits to Evolution?, Parmenides Foundation, Germany, and Eotvos University, Hungary, biologists, who have been collaborators on a novel view of life’s gestation akin to neural networks (search Watson), here view brain development via Darwinian genetic inheritance as a replicative neurodynamics. In this sense, analogous rugged landscapes serve a “fitness climbing and learning” function. A notable precursor cited is Hebb and Darwin by Paul Adams in the Journal of Theoretical Biology (195/419, 1998). A current companion would be Understanding Emergent Dynamics by John Hopfield, the 1980s founder of neural net theory (Neural Computation 27/10, 2015).
Standard evolutionary dynamics is limited by the constraints of the genetic system. A central message of evolutionary neurodynamics is that evolutionary dynamics in the brain can happen in a neuronal niche in real time, despite the fact that neurons do not reproduce. We show that Hebbian learning and structural synaptic plasticity broaden the capacity for informational replication and guided variability provided a neuronally plausible mechanism of replication is in place. The synergy between learning and selection is more efficient than the equivalent search by mutation selection. We also consider asymmetric landscapes and show that the learning weights become correlated with the fitness gradient. That is, the neuronal complexes learn the local properties of the fitness landscape, resulting in the generation of variability directed towards the direction of fitness increase, as if mutations in a genetic pool were drawn such that they would increase reproductive success. (Abstract)
Duran-Nebreda, Salva and George Bassel. Plant Behavior in Response to the Environment. Philosophical Transactions of the Royal Society B. 374/20190370, 2019. In a special Liquid Brains, Solid Brains issue (search Forrest), University of Birmingham, UK botanists describe how even floral vegetation can be seen to embody and avail a faculty of cognitive intelligence for their benefit.
Information processing and storage underpins many biological processes of vital importance to organism survival. Like animals, plants also acquire, store and process environmental information relevant to their fitness, and this is particularly evident in their decision-making. The control of plant organ growth and timing of their developmental transitions are carefully orchestrated by the collective action of many connected computing agents, the cells, in what could be addressed as distributed computation. Here, we discuss some examples of biological information processing in plants, with special interest in the connection to formal computational models drawn from theoretical frameworks. (Abstract)
Fernando, Chrisantha, et al. Selectionist and Evolutionary Approaches to Brain Function. Frontiers in Computational Neuroscience. 6/Art. 24, 2012. With Eors Szathmary and Phil Husbands, another contribution that articulates the deep affinity of neural activities with life’s long iterative development. As Richard Watson, Hava Siegelmann, John Mayfield, Steven Frank, and an increasing number contend, this achieves a 21st century appreciation of how “natural selection” actually applies. While a winnowing optimization toward “good enough to survive” goes on, the discovery of dynamic, learning-like algorithms can now provide a prior genetic-like guidance.
We consider approaches to brain dynamics and function that have been claimed to be Darwinian. These include Edelman’s theory of neuronal group selection, Changeux’s theory of synaptic selection and selective stabilization of pre-representations, Seung’s Darwinian synapse, Loewenstein’s synaptic melioration, Adam’s selfish synapse, and Calvin’s replicating activity patterns. Except for the last two, the proposed mechanisms are selectionist but not truly Darwinian, because no replicators with information transfer to copies and hereditary variation can be identified in them. Bayesian models and reinforcement learning are formally in agreement with selection dynamics. A classification of search algorithms is shown to include Darwinian replicators (evolutionary units with multiplication, heredity, and variability) as the most powerful mechanism for search in a sparsely occupied search space. Finally, we review our recent attempts to construct and analyze simple models of true Darwinian evolutionary units in the brain in terms of connectivity and activity copying of neuronal groups. (Abstract)