VI. Life’s Cerebral Cognizance Becomes More Complex, Smarter, Informed, Proactive, Self-Aware

1. Intelligence Evolution and Knowledge Gain as a Central Course

Kouvaris, Kostas, et al. How Evolution Learns to Generalise. arXiv:1508.06854. A University of Southampton group including Richard Watson, along with Jeff Clune, University of Wyoming, continues the innovative realization that much more than random mutation and selection must be at work for life to become increasingly aware, smart, and cognizant. By this novel insight, a progression of self-organizing, connectionist neural networks is seen to engender a quickening emergence. Due to a “deep homology,” creatures compare and assimilate new experience against a prior corpus of representations, much as a human brain does. A parallel chart of learning theory and evolutionary theory matches up well, so as to reveal a genesis synthesis of universal gestation. A beneficial inference would be the further advent of a worldwise sapiensphere coming to her/his own bicameral knowledge, which indeed is the premise of this website. One of the most intriguing questions in evolution is how organisms exhibit suitable phenotypic variation to rapidly adapt in novel selective environments, which is crucial for evolvability. Recent work showed that when selective environments vary in a systematic manner, it is possible for development to constrain the phenotypic space to regions that are evolutionarily more advantageous. Yet the underlying mechanism that enables the spontaneous emergence of such adaptive developmental constraints is poorly understood. How can natural selection, given its myopic and conservative nature, favour developmental organisations that facilitate adaptive evolution in future, previously unseen environments? Such a capacity suggests a form of foresight facilitated by the ability of evolution to accumulate and exploit information not only about the particular phenotypes selected in the past, but about regularities in the environment that are also relevant to future environments. (Abstract excerpt)
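To make the connectionist analogy concrete, here is a minimal sketch, my own toy illustration rather than the authors' code, with assumed trait patterns: selection on past phenotypes accumulates their correlation structure in a matrix of developmental interactions (a Hebbian update), after which development settles random genotypes onto attractors built from previously selected trait modules, including module recombinations never directly selected.

```python
# Toy illustration (not the paper's model): developmental interactions
# as a Hopfield-style network that "memorizes" past selected phenotypes.
import numpy as np

rng = np.random.default_rng(0)

# Two phenotypes favoured in past environments (+1/-1 trait states),
# chosen here purely for illustration.
past_phenotypes = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, 1, 1, 1,  1,  1,  1,  1],
], dtype=float)

# Hebbian accumulation: selection slowly biases the developmental
# interaction matrix B toward the correlations of past phenotypes.
B = np.zeros((8, 8))
for p in past_phenotypes:
    B += np.outer(p, p) / len(p)
np.fill_diagonal(B, 0.0)

def develop(genotype, steps=20):
    """Recurrent development: trait states settle under interactions B."""
    s = genotype.copy()
    for _ in range(steps):
        s = np.where(B @ s >= 0, 1.0, -1.0)
    return s

# Random genotypes now develop into phenotypes composed of previously
# selected trait modules, a generalization beyond the stored patterns.
for _ in range(3):
    print(develop(rng.choice([-1.0, 1.0], size=8)))
```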
Lehman, Joel, et al. The Surprising Creativity of Digital Evolution. arXiv:1803.03453. As its subtitle, A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities, notes, some 50 coauthors who have engaged this project, such as Chris Adami, Peter Bentley, Stephanie Forrest, Laurent Keller, Carole Knibbe, Richard Lenski, Hod Lipson, Robert Pennock, Thomas Ray, and Richard Watson, offer their personal takes. One could then observe that parallel versions of life’s long emergence now exist side by side: an older 19th and 20th century view of random natural selection, and this 21st century model with some manner of a generative source program at prior work. While the chance-versus-law debate is held in abeyance, the entry scopes out an algorithmic, self-organizing, quickening genesis synthesis in ascendance. As a general conclusion, rather than aimless accident, a temporal, oriented course of open procreativity is traced going forward. Biological evolution provides a creative fount of complex and subtle adaptations. However, because evolution is an algorithmic process that transcends the substrate in which it occurs, its creativity is not limited to nature. Indeed, many researchers in the field of digital evolution have observed their evolving algorithms and organisms subverting their intentions, producing unexpected adaptations, or exhibiting outcomes uncannily convergent with ones in nature. This paper is the crowd-sourced product of researchers in the fields of artificial life and evolutionary computation who have provided first-hand accounts of such cases. It thus serves as a written, fact-checked collection of scientifically important and even entertaining stories. We also present here substantial evidence that the existence and importance of evolutionary surprises extends beyond the natural world, and may be a universal property of all complex evolving systems. (Abstract excerpt)

Lin, Henry and Max Tegmark. Why Does Deep and Cheap Learning Work So Well? arXiv:1608.08225. The Harvard and MIT polymaths review the recent successes of these neural net, multiscale, algorithmic operations (definitions vary) from a statistical physics context of renormalization groups and symmetric topologies. The authors collaborated with Tomaso Poggio of the MIT Center for Brains, Minds, and Machines, among others, in a study which could be seen to imply a self-educating genesis cosmos that is trying to decipher, describe, recognize and affirm itself. We show how the success of deep learning depends not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can be approximated through "cheap learning" with exponentially fewer parameters than generic ones, because they have simplifying properties tracing back to the laws of physics. The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to renormalization group procedures. (Abstract)
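The paper's central counting argument can be illustrated in a few lines. This sketch is my own toy framing, not the authors' formalism, with the table-of-gates model assumed for the example: an arbitrary function of n binary inputs needs exponentially many parameters, while one composed hierarchically from pairwise pieces needs only linearly many, which is why "cheap" learning of physics-like compositional functions is possible.

```python
# Toy parameter counting: generic versus hierarchically composed functions.

def generic_params(n: int) -> int:
    """Parameters to specify an arbitrary function of n binary inputs:
    one table entry per input combination."""
    return 2 ** n

def hierarchical_params(n: int) -> int:
    """Parameters when the function combines inputs pairwise in a
    balanced tree: n - 1 internal nodes, each a 4-entry table over
    two binary arguments."""
    return (n - 1) * 4

for n in [8, 16, 32, 64]:
    print(f"n={n}: generic {generic_params(n)} vs hierarchical {hierarchical_params(n)}")
```

At n = 64 the generic table already has about 1.8e19 entries while the hierarchy needs 252 parameters, the exponential-to-linear collapse the abstract attributes to compositionality.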
Livnat, Adi. Simplification, Innateness, and the Absorption of Meaning from Context. Evolutionary Biology. Online March, 2017. Reviewed more in Systems Evolution, the University of Haifa theorist continues his project (search) to achieve a better explanation of life’s evolution by way of algorithmic computations, innate network propensities, genome-language affinities, neural net deep learning, and more.

Mitchell, Kevin and Nick Cheney. The Genomic Code: The Genome Instantiates a Generative Model of the Organism. Trends in Genetics. February, 2025. Into 2024, a Trinity College Dublin neuro-geneticist (search) and a University of Vermont computational biologist (see NC website) propose to join neural net learning methods with views of life’s cognitive emergence, as this section conveys, so as to advance an integral understanding of the dynamic relationship between the genome and organismal form. Here, we propose a new analogy inspired by machine learning and neuroscience whereby the genome becomes a compressed space of latent variables, which are DNA sequences that specify the biochemical properties of encoded proteins. Collectively, these comprise a connectionist network that is encoded by an evolutionary learning algorithm. An energy landscape then constrains a self-organising development so as to produce a new individual, akin to Conrad Waddington’s epigenetic landscape. (Abstract)
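As a minimal sketch of the latent-variable analogy, my own toy with assumed dimensions and a fixed random map standing in for the developmental "decoder," a compressed genome vector is expanded into a high-dimensional phenotype, so mutations act through the generative model rather than trait by trait:

```python
# Toy illustration (not the authors' model): the genome as a compressed
# latent vector that a developmental decoder expands into a phenotype.
import numpy as np

rng = np.random.default_rng(1)

LATENT = 4    # dimensionality of the genomic latent space (assumed)
PHENO = 32    # dimensionality of the phenotype (assumed)

# Fixed nonlinear decoder, standing in for the biochemical network
# that the latent variables parameterize.
W1 = rng.normal(size=(16, LATENT))
W2 = rng.normal(size=(PHENO, 16))

def develop(genome: np.ndarray) -> np.ndarray:
    """Decode a latent genome into a phenotype."""
    return np.tanh(W2 @ np.tanh(W1 @ genome))

genome = rng.normal(size=LATENT)
mutant = genome + 0.1 * rng.normal(size=LATENT)

# A small change in the compressed code shifts the whole phenotype in a
# coordinated way: variation is structured by the generative model.
print(np.round(develop(mutant) - develop(genome), 3))
```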
Ng, Eden Tian Hwa and Akira Kinjo. Computational Modelling of Plasticity-Led Evolution. arXiv:2208.00649. In our late day of a global knowsphere whose composite EarthKinder contents are commonly accessible, Universiti Brunei Darussalam biomathematicians on the island of Borneo advance and finesse alternative views upon how life might have arisen along with an especial educative capacity. By this vista, into the 2020s a natural, necessary affinity between genomic and cerebral networks and functions is becoming newly apparent. In plasticity-led evolution, a change in the environment induces novel traits via phenotypic variability, after which such traits are genetically accommodated. While this hypothesis is supported by experimental findings, here we propose computational methods to gain insight into the underlying mechanisms. These models include the developmental process and gene-environment interactions, along with genetics and selection. Our results show that such gene regulatory networks (GRNs) can satisfy the criteria of plasticity-led evolution. Since gene regulatory networks are mathematically equivalent to artificial recurrent neural networks, we discuss their analogies and discrepancies to help understand the mechanisms underlying plasticity-led evolution. (Abstract excerpt)
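The GRN-to-recurrent-network equivalence the abstract invokes can be written down directly. This is a minimal sketch with assumed sizes and random weights, not the authors' simulations: gene expression is iterated under a regulatory weight matrix with the environment entering as an input bias, so the same genotype yields different phenotypes in different environments, the plasticity that selection could later accommodate.

```python
# Toy GRN iterated as a recurrent neural network (illustrative only).
import numpy as np

rng = np.random.default_rng(2)

GENES = 10
W = rng.normal(scale=0.3, size=(GENES, GENES))  # regulatory weights
E = rng.normal(scale=0.5, size=GENES)           # environmental coupling

def express(env: float, steps: int = 50) -> np.ndarray:
    """Iterate expression toward a quasi-steady state; this recurrence
    is formally the update rule of a recurrent neural network."""
    x = np.zeros(GENES)
    for _ in range(steps):
        x = np.tanh(W @ x + E * env)
    return x

# One genotype (W, E), two environments, two phenotypes.
print(np.round(express(env=-1.0), 2))
print(np.round(express(env=+1.0), 2))
```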
Oudeyer, Pierre-Yves and Linda Smith. How Evolution May Work Through Curiosity-Driven Developmental Process. Topics in Cognitive Science. 8/2, 2016. The authors are a French Institute for Research in Computer Science (INRIA) director and the Indiana University neuropsychologist who was a 1990s founder, with the late Esther Thelen, of developmental systems theories for infant and child maturation. We include the contribution in this new section as another example of deep parallels between life’s long course to ourselves and intrinsic natural propensities to advance in cognitive learning capabilities. See Oudeyer’s publication page for more articles on this self-starter view. Infants' own activities create and actively select their learning experiences. Here we review recent models of embodied information seeking and curiosity-driven learning and show that these mechanisms have deep implications for development and evolution. We discuss how these mechanisms yield self-organized epigenesis with emergent ordered behavioral and cognitive developmental stages. We describe a robotic experiment that explored the hypothesis that progress in learning, in and for itself, generates intrinsic rewards: the robot learners probabilistically selected experiences according to their potential for reducing uncertainty. In these experiments, curiosity-driven learning led the robot learner to successively discover object affordances and vocal interaction with its peers. We explain how a learning curriculum adapted to the current constraints of the learning system automatically formed, constraining learning and shaping the developmental trajectory. The observed trajectories in the robot experiment share many properties with those in infant development, including a mixture of regularities and diversities in the developmental patterns. Finally, we argue that such emergent developmental structures can guide and constrain evolution, in particular with regard to the origins of language. (Abstract)
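The learning-progress mechanism behind this curiosity drive can be sketched in a few lines. This is my own toy model with assumed activity difficulties, not the robot architecture reported in the paper: the learner favors whichever activity's prediction error is currently dropping fastest, and a curriculum from easy to hard emerges without any external schedule.

```python
# Toy curiosity loop: intrinsic reward equals recent learning progress.
import numpy as np

rng = np.random.default_rng(3)

decay = np.array([0.20, 0.05, 0.01])   # assumed easy, medium, hard skills
error = np.ones(3)                      # current prediction errors
progress = np.zeros(3)                  # smoothed recent error reduction

for step in range(300):
    # Choose activities in proportion to recent learning progress.
    weights = np.exp(50.0 * progress)
    choice = rng.choice(3, p=weights / weights.sum())

    old = error[choice]
    error[choice] *= 1.0 - decay[choice]        # practice reduces error
    progress *= 0.95                            # progress estimates fade
    progress[choice] += 0.05 * (old - error[choice])

    if step % 75 == 0:
        print(step, "errors:", np.round(error, 3), "chose activity", choice)
```

The printout shows attention migrating from the easy skill to the harder ones as each is mastered, the self-organized developmental trajectory the abstract describes.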
Peñaherrera-Aguirre, Mateo, et al. Possible Evidence for the Law of General Intelligence in Honeybees. Intelligence. 106/101856, 2024. This mid-2020s entry by animal and neurobiologists proceeds to extend the general cognitive abilities measure from human psychology all the way back through life’s vectorial evolutionary development of sensory smarts to an early invertebrate domain. Looking forward, one might gain a sense of an ecosmic, participatory imperative to achieve its own self-description, witness and selection. These findings by Finke, et al. (Individual Consistency in the Learning Abilities of Honey Bees, Animal Cognition, 2023) support hypotheses that GCA (general cognitive ability) influences covariation between cognitive measures in honeybees, and constitute the first formal demonstration of GCA in an invertebrate. It is argued that GCA might be ubiquitous with respect to metazoans with organized nervous systems, which have convergently evolved multiple times in independent phylogenies. These features are a key prediction of Christopher Chabris’ “primordial” Law of General Intelligence (2014) and have now been identified in insect, avian, mammal, and fish taxa. (Abstract)
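The statistical core of such a claim, a positive manifold across cognitive tasks explained by one general factor, can be illustrated simply. This sketch is my own synthetic example using a principal-component stand-in, not the study's actual analysis pipeline or data:

```python
# Toy g-factor demonstration on simulated task scores (illustrative only).
import numpy as np

rng = np.random.default_rng(4)

BEES, TASKS = 200, 5
g = rng.normal(size=BEES)                      # latent general ability
loadings = rng.uniform(0.4, 0.8, size=TASKS)   # assumed task g-loadings
scores = np.outer(g, loadings) + 0.6 * rng.normal(size=(BEES, TASKS))

corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)

# The dominant eigenvector has same-signed weights on every task (the
# positive manifold); its eigenvalue is the variance share one general
# factor explains.
print("first factor share:", round(eigvals[-1] / TASKS, 2))
print("task loadings:", np.round(np.abs(eigvecs[:, -1]), 2))
```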
Pinero, Jordi and Ricard Sole. Statistical Physics of Liquid Brains. Philosophical Transactions of the Royal Society B. 374/20180376, 2018. In a special Liquid Brains, Solid Brains issue (search Forrest), Institut de Biologia Evolutiva, Universitat Pompeu Fabra, Barcelona theorists consider a universal recurrence in kind of the same generic complex network system across natural and social domains. While akin to genomes and ecosystems, an apt model is cerebral cognition, broadly conceived, by way of agental neurons and synaptic links in multiplex arrays. A prime attribute is a cross-conveyance of intelligence and information, aka biological computation, which is how animal groupings from invertebrates to mammals to people achieve collective decision-making. Liquid neural networks (or ‘liquid brains’) are a widespread class of cognitive living networks characterized by a common feature: the agents move in space. Thus, no fixed, long-term agent-agent connections are maintained, in contrast with standard neural systems. How is this class of systems capable of displaying cognitive abilities, from learning to decision-making? In this paper, the collective dynamics, memory and learning properties of liquid brains are explored from the perspective of statistical physics. We review the generic properties of three large classes of systems, namely standard neural networks (solid brains), ant colonies and the immune system. It is shown that, despite their intrinsic differences, these systems share key properties with neural systems in terms of formal descriptions, but depart in other ways. (Abstract excerpt)

Pontes, Anselmo, et al. The Evolutionary Origin of Associative Learning. American Naturalist. 195/1, 2020. By way of clever digital simulations in Richard Lenski’s lab, Michigan State University researchers including Christoph Adami test whether this analogic edification, drawn much from Simona Ginsburg and Eva Jablonka (see definitions below), is actually in effect. Indeed, results over many generations show that life does become smarter by a constant, iterative, combinational process of trials, errors and successes for both entities and groups. From 2020, a central developmental trend of “stepwise, modular, complex behaviors” as an open-ended creativity is evidentially traced and oriented. Learning is a widespread ability among animals and is subject to evolution. But how did learning first arise? What selection pressures and phenotypic preconditions fostered its evolution? Neither the fossil record nor phylogenetic comparative studies provide answers. Here, we study digital organisms in environments that promote the evolution of navigation and associative learning. Starting with a sessile ancestor, we evolve multiple populations in four environments, each with nutrient trails of various layouts. We find that behavior evolves modularly and in a predictable sequence. Environmental patterns that are stable across generations foster the evolution of reflexive behavior, while environmental patterns that vary across generations but remain consistent for periods within an organism’s lifetime foster the evolution of learning behavior. (Abstract excerpt)

Powell, Russell, et al. Convergent Minds: The Evolution of Cognitive Complexity in Nature. Interface Focus. 7/3, 2017. Powell, Boston University philosophy of science, Irina Mikhalevich, Humboldt University cognitive philosophy, Corina Logan, Cambridge University zoology, and Nicola Clayton, Cambridge University animal psychology, introduce a special issue on this title subject. The paper opens with a recount of Stephen Jay Gould’s 1990 claim that, because he held no innate drive or direction exists, if the tape of life’s course were replayed, sapient human beings would not appear. It then goes on to note that many research findings into the 2010s strongly attest to the opposite conclusion. Typical entries are Evolutionary Convergence and Biologically Embodied Cognition by Fred Keijzer, The Foundations of Plant Intelligence by Anthony Trewavas, and Is Behavioral Flexibility Evidence of Cognitive Complexity? by Irina Mikhalevich, et al. As this site documents (e.g., Conway Morris, McGhee), across genomic and metabolic phases to especially cerebral qualities, a constant repetition of forms and capabilities does occur, which traces a constrained emergence. While a prior bias against any path or axis causes hesitancy, since acceptance would require a new theory, these contributions and more allude that life and mind appear to know where they are going, and how to get there.

Power, Daniel, et al. What Can Ecosystems Learn? Expanding Evolutionary Ecology with Learning Theory. arXiv:1506.06374. This posting is a follow-up to a 2014 paper, The Evolution of Phenotypic Correlations and “Developmental Memory” (Evolution, 68/4, Richard Watson), that introduced affinities between life’s ascent and cerebral cognition. The interdisciplinary authors from the UK and across Europe, including Watson and Eors Szathmary, continue and expand upon a novel perception of life’s developmental emergence as a dynamic ecological coherence via a connectionist cognition of self-organizing neural networks. By this “deep homology,” prior evolutionary and ecosystem “memories” accrue, with which new experience can then be accommodated. In a synthesis of mind and matter, this essential cerebration goes on prior to any Darwinian selection. A published, peer-reviewed edition appears in Biology Direct (10/69, 2015). Understanding how the structure of community interactions is modified by coevolution is vital for understanding system responses to change at all scales. However, in the absence of a group selection process, collective community behaviours cannot be organised or adapted in a Darwinian sense. An open question thus persists: are there alternative organising principles that enable us to understand how coevolution of component species creates complex collective behaviours exhibited at the community level? We address this issue using principles from connectionist learning, a discipline with well-developed theories of emergent behaviours in simple networks. We identify conditions where selection on ecological interactions is equivalent to ‘unsupervised learning’ (a simple type of connectionist learning) and observe that this enables communities to self-organize without community-level selection. Despite not being a Darwinian unit, ecological communities can behave like connectionist learning systems, creating internal organisation that habituates to past environmental conditions and actively recalling those conditions. (Abstract)
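The habituate-and-recall claim of this last entry can be given a minimal working form. This sketch is my own illustration of the connectionist analogy with assumed community compositions, not the authors' model: coevolution strengthens interactions between co-occurring species, an unsupervised Hebbian update, so past community states become attractors the ecosystem dynamics can restore from a perturbed, partial condition.

```python
# Toy ecological "memory": Hebbian learning on a species interaction
# matrix makes past community states recallable attractors.
import numpy as np

N = 12  # number of species (assumed)

# Two past community compositions (+1 present, -1 absent), built to be
# orthogonal so the illustration recalls cleanly.
past_states = np.array([
    np.repeat([1, -1], N // 2),   # composition A
    np.tile([1, -1], N // 2),     # composition B
])

# Coevolution reinforces interactions between co-occurring species:
# formally a Hebbian update on the interaction matrix J.
J = np.zeros((N, N))
for s in past_states:
    J += np.outer(s, s) / N
np.fill_diagonal(J, 0.0)

def relax(state, steps=30):
    """Ecological dynamics: each species responds to its net interactions."""
    s = state.copy()
    for _ in range(steps):
        s = np.where(J @ s >= 0, 1, -1)
    return s

# Disturb three species in a remembered composition; dynamics restore it.
probe = past_states[0].copy()
probe[:3] *= -1
print("recalled past state:", np.array_equal(relax(probe), past_states[0]))
```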