VI. Life’s Cerebral Cognizance Becomes More Complex, Smarter, Informed, Proactive, Self-Aware

1. Intelligence Evolution and Knowledge Gain as a Central Course

Dodig-Crnkovic, Gordana. Morphological Computation and Learning to Learn in Natural Intelligent Systems and AI. arXiv:2004.02304. We note this entry because the Chalmers University of Technology, Sweden computer theorist (search) proceeds to identify an apparently self-educating (autodidactic) natural evolutionary development. In this regard, a bioinformatic agency can be seen in effect, which is “a system able to act on its own behalf.” A defining quality of living organisms is to engage in cognitive interactions, which altogether results in nature’s way of composing itself. See also Natural Computational Architectures for Cognitive Info-Communication by G D-C at arXiv:2110.06339 for further insights. At present, artificial intelligence in the form of machine learning is making impressive progress, especially the field of deep learning (DL). Deep learning algorithms have been inspired from the beginning by nature, and specifically by the human brain. Learning from nature is a two-way process whereby computing is learning from neuroscience, while neuroscience is quickly adopting information processing models. The question is, what can the inspiration from computational nature at this stage of development contribute to deep learning and how much can models and experiments in machine learning motivate, justify and lead research in neuroscience and cognitive science to practical applications of artificial intelligence. (Abstract)

Duran-Nebreda, Salva and George Bassel. Plant Behavior in Response to the Environment. Philosophical Transactions of the Royal Society B. 374/20190370, 2019. In a special Liquid Brains, Solid Brains issue (search Forrest), University of Birmingham, UK botanists describe how even floral vegetation can be seen to embody and avail a faculty of cognitive intelligence for its benefit. Information processing and storage underpins many biological processes of vital importance to organism survival. Like animals, plants also acquire, store and process environmental information relevant to their fitness, and this is particularly evident in their decision-making. The control of plant organ growth and timing of their developmental transitions are carefully orchestrated by the collective action of many connected computing agents, the cells, in what could be addressed as distributed computation. Here, we discuss some examples of biological information processing in plants, with special interest in the connection to formal computational models drawn from theoretical frameworks. (Abstract)
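To make the entry’s notion of cells as connected computing agents concrete, here is a minimal sketch, assuming nothing from the paper itself: a ring of cells with noisy readings of an environmental cue repeatedly averages with its neighbors, and a tissue-level developmental decision is read from the consensus. All names and parameters are illustrative.

```python
# Toy model of distributed computation in a plant tissue (our illustration,
# not the authors' model). Each cell holds a noisy estimate of an
# environmental cue; purely local exchange with neighbours lets the tissue
# converge on a collective estimate, from which an organ-level
# developmental transition is thresholded.
import numpy as np

rng = np.random.default_rng(0)

n_cells = 100
true_cue = 0.6                                        # normalized signal, e.g. temperature
readings = true_cue + rng.normal(0.0, 0.3, n_cells)   # noisy per-cell sensing

left = np.roll(np.arange(n_cells), 1)                 # ring of cell-to-cell connections
right = np.roll(np.arange(n_cells), -1)

for _ in range(200):                                  # no central controller anywhere
    readings = (readings + readings[left] + readings[right]) / 3.0

consensus = readings.mean()
print(f"consensus={consensus:.3f} -> transition: {consensus > 0.5}")
```

Averaging over many unreliable sensors is one way a sessile organism could trade individual cell precision for a robust collective decision.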
Fernando, Chrisantha. New Research Program: Evolutionary Neurodynamics. www.simons.berkeley.edu/workshops/abstracts/326. The Queen Mary University of London neuroscientist describes the frontiers of brain and cognitive science. I will give a broad overview of the research program that Prof. Eors Szathmary (Parmenides Foundation, Munich) and I have been carrying out since 2008 on Evolutionary Neurodynamics. Since 2013 this has been an FP-7 FET OPEN Project in collaboration with Luc Steels (UVB), Dario Floreano (EPFL), and Phil Husbands (Sussex). The hypothesis we explore is that some kind of natural selection algorithm is implemented in the brain, with entities that undergo multiplication, with variation and heredity. We have reason to believe that language learning is an evolutionary process occurring during development, in which populations of constructions compete for communicative success. We have reason to believe that during human problem solving, multiple solutions are entertained, recombined and mutated in the brain. We have reason to believe that evolutionary methods provide a powerful ensemble approach to combine populations of decomposed and segmented predictive models of the world, policies, and value functions.

Fernando, Chrisantha, et al. Selectionist and Evolutionary Approaches to Brain Function. Frontiers in Computational Neuroscience. 6/Art. 24, 2012. With Eors Szathmary and Phil Husbands, another contribution that articulates the deep affinity of neural activities with life’s long iterative development. As Richard Watson, Hava Siegelmann, John Mayfield, Steven Frank, and an increasing number contend, this achieves a 21st century appreciation of how “natural selection” actually applies. While a winnowing optimization toward “good enough to survive” goes on, the discovery of dynamic, learning-like algorithms can now provide a prior, genetic-like guidance. We consider approaches to brain dynamics and function that have been claimed to be Darwinian. These include Edelman’s theory of neuronal group selection, Changeux’s theory of synaptic selection and selective stabilization of pre-representations, Seung’s Darwinian synapse, Loewenstein’s synaptic melioration, Adam’s selfish synapse, and Calvin’s replicating activity patterns. Except for the last two, the proposed mechanisms are selectionist but not truly Darwinian, because no replicators with information transfer to copies and hereditary variation can be identified in them. Bayesian models and reinforcement learning are formally in agreement with selection dynamics. A classification of search algorithms is shown to include Darwinian replicators (evolutionary units with multiplication, heredity, and variability) as the most powerful mechanism for search in a sparsely occupied search space. Finally, we review our recent attempts to construct and analyze simple models of true Darwinian evolutionary units in the brain in terms of connectivity and activity copying of neuronal groups. (Abstract)

Fernando, Chrisantha, et al. The Neuronal Replicator Hypothesis. Neural Computation. 22/2809, 2010. As this decadal review expresses, many 21st century cross-fertilizations between natural and social fields are underway by our collaborative humankind. Here neuroscientists Fernando, with Richard Goldstein and Eors Szathmary, propose the presence of evolutionary algorithms in cerebral functions which search, improve, and select as we learn and think. See also Evolvable Neuronal Paths: A Novel Basis for Information and Search in the Brain by CF, et al, in PLoS One (6/8, 2011). We propose that replication (with mutation) of patterns of neuronal activity can occur within the brain using known neurophysiological processes. Thereby evolutionary algorithms implemented by neuronal circuits can play a role in cognition. Replication of structured neuronal representations is assumed in several cognitive architectures. Replicators overcome some limitations of selectionist models of neuronal search. (Abstract)
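As a hedged illustration of the abstract’s definition of Darwinian units (multiplication, heredity, variability) as a search mechanism, the sketch below evolves bit-string “activity patterns” by fitness-weighted copying with mutation. The objective function and all parameters are our own stand-ins, not anything from these papers.

```python
# Minimal Darwinian search: multiplication (fitness-weighted copying),
# heredity (offspring inherit the parent pattern) and variability
# (per-bit mutation). The target function is an arbitrary stand-in.
import random

random.seed(1)
GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 32, 50, 60, 0.02

def fitness(pattern):
    return sum(pattern)                     # toy objective: count of 1-bits

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    weights = [fitness(p) + 1 for p in population]           # multiplication bias
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [[bit ^ (random.random() < MUT_RATE)        # heredity + variation
                   for bit in parent] for parent in parents]

print("best fitness:", max(fitness(p) for p in population), "/", GENOME_LEN)
```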
Hasson, Uri, et al. Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks. Neuron. 105/3, 2020. In another example of cerebral cognition methods being readily applied everywhere, Princeton University neuroscientists point out how brain-based topologies and operations can have analytic utility in many other areas. Section headings include Interpolation and Extrapolation, Generalization Based on Partial and Big Data, and The Power of Adaptive Fit in Evolution. As the quotes allude, parallels can then be drawn between human cerebration and life’s neoDarwinian course whereby organisms, via sensory apparatus and activities, must find a good enough way to survive and evolve. See also A Critique of Pure Learning and What Artificial Neural Networks can Learn from Animal Brains by Anthony Zador in Nature Communications (10/3770, 2019). Evolution is a blind fitting process by which organisms become adapted to their environment. Does the brain use similar brute-force fitting processes to learn how to perceive and act upon the world? Recent advances in artificial neural networks have exposed the power of optimizing millions of synaptic weights over millions of observations to operate robustly in real-world contexts. These models do not learn simple, human-interpretable rules or representations of the world; rather, they use local computations to interpolate over task-relevant manifolds in a high-dimensional parameter space. Similar to evolutionary processes, over-parameterized models can be simple and parsimonious, as they provide a versatile, robust solution for learning a diverse set of functions. This new family of direct-fit models are a radical challenge to many of the theoretical assumptions in psychology and neuroscience. (Abstract)
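The “direct fit” idea, over-parameterized interpolation rather than rule learning, can be glimpsed in a few lines. This sketch (our construction, not the authors’ code) solves a random-feature regression with far more parameters than observations via the minimum-norm pseudo-inverse; it hits every training point exactly while remaining usable in between.

```python
# Over-parameterized "direct fit": 500 random cosine features for 20
# observations. The minimum-norm least-squares solution interpolates the
# training data without any human-interpretable rule. Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_feat = 20, 500                               # parameters >> observations

x = np.sort(rng.uniform(-3, 3, n_obs))
y = np.sin(x) + rng.normal(0, 0.05, n_obs)            # a noisy world to fit

w = rng.normal(size=n_feat)                           # fixed random projections
b = rng.uniform(0, 2 * np.pi, n_feat)
phi = lambda pts: np.cos(np.outer(pts, w) + b)        # feature map

theta = np.linalg.pinv(phi(x)) @ y                    # min-norm interpolant

print("max train error:", float(np.abs(phi(x) @ theta - y).max()))  # ~0
x_new = np.linspace(-2.5, 2.5, 5)
print("held-out residuals:", np.round(phi(x_new) @ theta - np.sin(x_new), 2))
```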
Hochberg, Michael, et al. Innovation: An Emerging Focus from Cells to Societies. Philosophical Transactions of the Royal Society B. 372/1736, 2017. Hochberg, University of Montpellier, Pablo Marquet, Santa Fe Institute, Robert Boyd, Arizona State University, and Andreas Wagner, University of Zurich introduce a focus issue with this title. We place its 16 papers by leading researchers and theorists in this section because they attempt to specify an important tendency of life’s developmental evolution to seek behavioral, communal, and artificial novelties for survival and thrival. With a notice of the major transitions scale, an intensifying cumulative culture and intelligent knowledge can be traced from microbe colonies to civilizations. A. Wagner has been an advocate (search), which he reviews in Information Theory, Evolutionary Innovations and Evolvability. Douglas Erwin follows with The Topology of Evolutionary Novelty. Some other entries are The Origin of Heredity in Protocells, Nascent Life Cycles and the Emergence of Higher-Level Individuality, and Innovation and the Growth of Human Population. Innovations are generally unexpected, often spectacular changes in phenotypes and ecological functions. The contributions to this theme issue are the latest conceptual, theoretical and experimental developments, addressing how ecology, environment, ontogeny and evolution are central to understanding the complexity of the processes underlying innovations. Here, we set the stage by introducing and defining key terms relating to innovation and discuss their relevance to biological, cultural and technological change. Discovering how the generation and transmission of novel biological information, environmental interactions and selective evolutionary processes contribute to innovation as an ecosystem will shed light on how the dominant features across life come to be, generalize to social, cultural and technological evolution, and have applications in the health sciences and sustainability. (Main Abstract)

Khajehabdollahi, Sina and Olaf Witkowski. Critical Learning vs. Evolution. Ikegami, Takashi, et al, eds. ALIFE 2018 Conference Proceedings. Cambridge: MIT Press, 2018. A select paper from this online volume (Ikegami) by University of Western Ontario and Earth-Life Science Institute, Tokyo biophysicists who seek better insights into life’s quickening sentience by way of inherent complexity principles. By current turns, these dynamic phenomena appear to be increasingly cerebral in kind and function. In an extension of this view, just as brains are found to prefer and reside in a critically poised optimum state, so too, it seems, does evolutionary developmental emergence. Criticality is thought to be crucial for complex systems to adapt, at the boundary between regimes with different dynamics, where the system may transition from one phase to another. Numerous systems, from sandpiles to gene regulatory networks, to swarms and human brains, seem to work towards preserving a precarious balance right at their critical point. Understanding criticality therefore seems strongly related to a broad, fundamental theory for the physics of life as it could be, which still lacks a clear description of how it can arise and maintain itself in complex systems. (Abstract excerpt)
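As a small, hedged illustration of the precarious balance the abstract describes, the branching-process sketch below (our toy, not the paper’s model) shows how cascades of activity change character around a branching ratio of one: subcritical cascades die quickly, supercritical ones blow up, and only near the critical point do events span all scales.

```python
# Branching-process toy of criticality. Each active unit triggers a
# Binomial(2, sigma/2) number of successors, so sigma is the mean branching
# ratio. sigma < 1: cascades die out; sigma > 1: they explode; sigma = 1
# (critical) yields avalanche sizes spanning all scales.
import random

random.seed(0)

def avalanche_size(sigma, cap=10_000):
    active, size = 1, 1
    while active and size < cap:
        born = sum(1 for _ in range(2 * active) if random.random() < sigma / 2)
        active, size = born, size + born
    return size

for sigma in (0.8, 1.0, 1.2):
    sizes = [avalanche_size(sigma) for _ in range(500)]
    huge = sum(s >= 1000 for s in sizes) / len(sizes)
    print(f"sigma={sigma}: mean size={sum(sizes)/len(sizes):9.1f}, "
          f"share of very large cascades={huge:.2f}")
```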
Kounios, Loizos, et al. How Evolution Learns to Improve Evolvability on Rugged Fitness Landscapes. arXiv:1612.05955. A contribution to the increasingly explanatory synthesis of cognitive learning theories with life’s developmental emergence. Coauthors include Richard Watson, Gunter Wagner, Jeff Clune and Mihaela Pavlicev.

Kouvaris, Kostas, et al. How Evolution Learns to Generalise. arXiv:1508.06854. A University of Southampton group including Richard Watson, along with Jeff Clune, University of Wyoming, continues the innovative realization that much more than random mutation and selection must be at work for life to become increasingly aware, smart, and cognizant. By this novel insight, a progression of self-organizing neural, connectionist networks is seen to engender a quickening emergence. Due to a “deep homology,” creatures compare and assimilate new experience with a prior corpus of representations, similar to a human brain. A parallel chart of Learning and Evolutionary Theory matches up well, so as to reveal a genesis synthesis of universal gestation. A beneficial inference would be the further advent of a worldwise sapiensphere coming to her/his own bicameral knowledge, which indeed is the premise of this website. One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments, which is crucial for evolvability. Recent work showed that when selective environments vary in a systematic manner, it is possible that development can constrain the phenotypic space in regions that are evolutionarily more advantageous. Yet, the underlying mechanism that enables the spontaneous emergence of such adaptive developmental constraints is poorly understood. How can natural selection, given its myopic and conservative nature, favour developmental organisations that facilitate adaptive evolution in future previously unseen environments? Such capacity suggests a form of foresight facilitated by the ability of evolution to accumulate and exploit information not only about the particular phenotypes selected in the past, but regularities in the environment that are also relevant to future environments. (Abstract)
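The learning-theory parallel invites a Hopfield-style sketch, following the connectionist analogy Watson and colleagues draw (our simplification, with all parameters hypothetical): selection on developmental interactions acts like Hebbian learning, storing correlations of previously selected phenotypes, and development then channels mutated genotypes back toward that learned class.

```python
# Hebbian sketch of learned developmental constraints. Past selected
# phenotypes are stored as correlations in an interaction matrix B (the
# "learning" step); recurrent developmental dynamics then pull a partially
# novel genotype toward the remembered phenotype class (generalization).
import numpy as np

rng = np.random.default_rng(2)
n = 24
past_phenotypes = [rng.choice([-1, 1], n) for _ in range(2)]

B = sum(np.outer(t, t) for t in past_phenotypes) / n     # Hebbian accumulation
np.fill_diagonal(B, 0)

def develop(genotype, steps=20):
    """Recurrent development shaped by the learned interaction matrix."""
    p = genotype.astype(float)
    for _ in range(steps):
        p = np.sign(B @ p)
    return p

mutant = past_phenotypes[0].copy()
mutant[rng.choice(n, 6, replace=False)] *= -1            # 6 mutated loci
recovered = develop(mutant)
print("loci matching the old target:",
      int((recovered == past_phenotypes[0]).sum()), "of", n)
```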
Lehman, Joel, et al. The Surprising Creativity of Digital Evolution. arXiv:1803.03453. As its subtitle, A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, cites, some 50 coauthors who have engaged this project, such as Chris Adami, Peter Bentley, Stephanie Forrest, Laurent Keller, Carole Knibbe, Richard Lenski, Hod Lipson, Robert Pennock, Thomas Ray, and Richard Watson, offer their personal takes. One could then observe that parallel versions of life’s long emergence now exist side by side: an older 19th and 20th century view of random natural selection, and this 21st century model with some manner of a generative source program at prior work. While chance and/or law remains in abeyance, the entry scopes out an algorithmic, self-organizing, quickening genesis synthesis in ascendance. As a general conclusion, rather than aimless accident, a temporal, oriented course of open procreativity is traced going forward. Biological evolution provides a creative fount of complex and subtle adaptations. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, its creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. We also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may be a universal property of all complex evolving systems. (Abstract excerpt)

Lin, Henry and Max Tegmark. Why does Deep and Cheap Learning Work so Well? arXiv:1608.08225. The Harvard and MIT polymaths review the recent successes of these neural net, multiscale, algorithmic operations (definitions vary) from a statistical physics context such as renormalization groups and symmetric topologies. The authors collaborated with Tomaso Poggio of the MIT Center for Brains, Minds, and Machines (Google), and others, in a study which could be seen to imply a self-educating genesis cosmos that is trying to decipher, describe, recognize and affirm itself. We show how the success of deep learning depends not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can be approximated through "cheap learning" with exponentially fewer parameters than generic ones, because they have simplifying properties tracing back to the laws of physics. The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to renormalization group procedures. (Abstract)
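The paper’s central counting argument can be reproduced in a few lines. As a back-of-envelope sketch (our arithmetic, not the authors’ code): a generic function of n binary variables needs a lookup table of 2^n entries, while a compositional model built as a binary tree of two-input elements needs only about (n-1)*4, which is the “exponentially fewer parameters” the abstract claims for physics-like, hierarchical data.

```python
# Parameter counting: generic lookup table versus a hierarchical
# composition of two-input elements (a binary tree of gates), illustrating
# why compositional functions admit "cheap learning".
def generic_params(n):
    return 2 ** n                        # one entry per input configuration

def compositional_params(n, fan_in=2):
    internal_nodes = n - 1               # binary tree over n leaves
    return internal_nodes * 2 ** fan_in  # one small table per node

for n in (8, 16, 32, 64):
    print(f"n={n:2d}: generic={generic_params(n):>26,}  "
          f"compositional={compositional_params(n):,}")
```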