II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Sejnowski, Terrence. The Deep Learning Revolution. Cambridge: MIT Press, 2018. The renowned neuroscientist author has been at the innovative center of the AI neural network advance, from computational machines to brain and behavior, since the 1980s. He recounts his national and worldwide experience with many collaborators in this volume, which makes it the best general introduction to the field. It is a great gift for any student, as the author has also been involved with learning-how-to-learn methods for schools. The book is filled with vignettes of Francis Crick, Geoffrey Hinton, Stephen Wolfram, Barbara Oakley, John Hopfield, Sydney Brenner, Christof Koch and others across the years. An example of his interests and reach is his talk at the 2016 Grand Challenges in 21st Century Science (Google) conference in Singapore. Terrence J. Sejnowski holds the Francis Crick Chair at the Salk Institute for Biological Studies and is a Distinguished Professor at the University of California, San Diego. He was a member of the advisory committee for the Obama administration's BRAIN Initiative and is founding President of the Neural Information Processing Systems (NIPS) Foundation. He has published twelve books, including (with Patricia Churchland) The Computational Brain (25th Anniversary Edition, MIT Press).

Sejnowski, Terrence. The Unreasonable Effectiveness of Deep Learning in Artificial Intelligence. Proceedings of the National Academy of Sciences. 117/30033, 2020. The senior Salk Institute neurobiologist introduces a colloquium on the Science of Deep Learning as this AI neural net frontier goes rapidly forward. Among its papers are Emergent Linguistic Structure in Artificial Neural Networks and Algorithms as Discrimination Detectors.
Deep learning networks have been trained to recognize speech, caption photographs, and translate text between languages. Although applications of deep learning networks to real-world problems have become ubiquitous, a deep understanding of why they are so effective lags behind. Paradoxes in their training and effectiveness are being investigated by way of the geometry of high-dimensional spaces. A mathematical theory would illuminate how they function, assess the strengths and weaknesses of network architectures, and more. (Abstract excerpt)

Senior, Andrew, et al. Improved Protein Structure Prediction using Potentials from Deep Learning. Nature. 577/706, 2020. Nineteen DeepMind London and University College London researchers, including Demis Hassabis, describe novel algorithms which are able to study, predict, and create life's primary, variegated, multi-purpose biomolecule. Dubbed AlphaFold, the advance is touted as a good example of how Artificial Intelligence AI can be of increasing value and utility. A commentary in the same issue is Protein Structure Prediction Gets Real by Mohammed AlQuraishi (page 627).

Shafiee, Mohammad, et al. Evolution in Groups: A Deeper Look at Synaptic Cluster Driven Evolution of Deep Neural Networks. arXiv:1704.02081. Shafiee and Elnaz Barshan, Iranian-Canadian University of Waterloo systems engineers, and Alexander Wong, DarwinAI (Waterloo), advance this frontier of universe-to-human interpretation via multiplex computational cognitive dynamics. Their novel insight is a biological evolutionary setting by way of a deeper genetic and cerebral intelligence. Nature's emergent profusion of cerebral nets is then alluded to as a generative offspring. See also the authors' prior award-winning papers Evolutionary Synthesis of Deep Neural Networks via Synaptic Cluster-driven Genetic Encoding at arXiv:1609.01360 and Deep Learning with Darwin at arXiv:1606.04393.
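As a purely illustrative sketch (my own caricature, not the authors' implementation), the cluster-driven genetic encoding can be pictured as grouping a parent network's synapses into clusters, summarizing each cluster by a trait such as mean weight magnitude, and then sampling which synapses survive into a sparser offspring with probabilities tied to those cluster traits:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_clusters(weights, n_clusters):
    """Group a parent layer's synapses into clusters and summarize each
    cluster by the mean magnitude of its weights (a toy 'synaptic trait')."""
    flat = np.abs(weights).ravel()
    clusters = np.array_split(np.argsort(flat), n_clusters)
    strength = np.array([flat[c].mean() for c in clusters])
    return clusters, strength / strength.sum()

def synthesize_offspring(weights, n_clusters=4, sparsity=0.5):
    """Sample an offspring synaptic mask: stronger clusters are more likely
    to pass their synapses on, yielding a sparser child network."""
    clusters, probs = encode_clusters(weights, n_clusters)
    mask = np.zeros(weights.size, dtype=bool)
    for c, p in zip(clusters, probs):
        # survival probability of each synapse scales with its cluster's trait
        mask[c] = rng.random(len(c)) < np.clip(p * n_clusters * sparsity, 0, 1)
    return (weights.ravel() * mask).reshape(weights.shape)

parent = rng.normal(size=(8, 8))
child = synthesize_offspring(parent)
print("parent nonzero:", np.count_nonzero(parent),
      "child nonzero:", np.count_nonzero(child))
```

The cluster count, trait definition, and survival rule here are arbitrary placeholders; the paper's actual probabilistic encoding also models inter-synaptic relationships, as its abstract below notes.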
A promising paradigm for achieving highly efficient deep neural networks is the idea of evolutionary deep intelligence, which mimics biological evolution processes to progressively synthesize more efficient networks. A crucial design factor in evolutionary deep intelligence is the genetic encoding scheme used to simulate heredity and determine the architectures of offspring networks. In this study, we take a deeper look at the notion of synaptic cluster-driven evolution of deep neural networks which guides the evolution process towards the formation of a highly sparse set of synaptic clusters in offspring networks. Utilizing a synaptic cluster-driven genetic encoding, the probabilistic encoding of synaptic traits considers not only individual synaptic properties but also inter-synaptic relationships within a deep neural network. (Abstract)

Shallue, Christopher and Andrew Vanderburg. Identifying Exoplanets with Deep Learning. arXiv:1712.05044. With the subtitle A Five Planet Resonant Chain Around Kepler-80 and an Eighth Planet Around Kepler-90, a Google Brain software engineer and a UT Austin astronomer report a novel application of artificial machine intelligence to successfully analyze huge data inputs from this planet-finder satellite. The achievement received wide press notice, along with a December 14 conference: NASA and Google to Announce AI Breakthrough.

NASA's Kepler Space Telescope was designed to determine the frequency of Earth-sized planets orbiting Sun-like stars, but these planets are on the very edge of the mission's detection sensitivity. Accurately determining the occurrence rate of these planets will require automatically and accurately assessing the likelihood that individual candidates are indeed planets, even at low signal-to-noise ratios. We present a method for classifying potential planet signals using deep learning, a class of machine learning algorithms that have recently become state-of-the-art in a wide variety of tasks.
We train a deep convolutional neural network to predict whether a given signal is a transiting exoplanet or a false positive caused by astrophysical or instrumental phenomena.

Sheneman, Leigh and Arend Hintze. Evolving Autonomous Learning in Cognitive Networks. Nature Scientific Reports. 7/16712, 2017. Michigan State University computer scientists post an example of the ongoing revision of artificial intelligence, broadly conceived, from decades of dead mechanisms into a vital accord with evolutionary cerebral architectures and activities. See also The Role of Conditional Independence in the Evolution of Intelligent Systems from this group, including Larissa Albantakis, at arXiv:1801.05462.

There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been previously combined, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal, but instead can generate internal feedback that is then used to learn. This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning. (Abstract)

Smith, Michael J. and James Geach. Astronomia ex Machina: A History, Primer and Outlook on Neural Networks in Astronomy. Royal Society Open Science. November, 2022.
University of Hertfordshire computer scientists post a detailed 21st century account of this ascendant turn, from local homo sapience when computers and internet websites came online in the early 2000s to a worldwise cerebral activity today. But this spiral anthropic-to-Earthuman stage proceeds by way of machine computations which can analyze vast cosmic data flows on their own. See also, for example, A Neural Network Subgrid Model of the Early Stages of Planet Formation by Thomas Pfeil, et al at arXiv:2211.04160.

In recent years, deep learning procedures have been taken up by many fields because they reduce the need for specialist knowledge and automate the process of knowledge discovery from data. This review describes how astronomy is similarly in the midst of a deep learning transformation. We trace astronomical connectionism from early multilayer perceptrons, through to recurrent neural networks, onto the current wave of self-supervised and unsupervised methods. We then preview a fourth phase of a "foundational" model by way of a symbiotic relationship between astro-science and connectionism. (Abstract excerpt)

Soltoggio, Andrea, et al. Born to Learn: The Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks. Neural Networks. 108/48, 2018. Loughborough University, University of Central Florida and University of Copenhagen computer scientists draw upon the evolutionary and biological origins of this ubiquitous multicomplex learning system to achieve further understandings and usages. Their theme is that life's temporal development seems to be a learning, neuromodulation, plasticity, and discovery progression. The approach is seen as akin to the Evolutionary Neurodynamics school of Richard Watson, et al (see section V.C.). See also herein Evolution in Groups: A Deeper Look at Synaptic Cluster Driven Evolution of Deep Neural Networks (M. Shafiee) and other similar entries.
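The core EPANN idea of evolution shaping plasticity, rather than weights directly, can be suggested with a toy sketch (my own illustration under simplified assumptions, not the paper's framework): an agent's weights start blank each "lifetime" and are shaped by a Hebbian-style rule, while evolution tunes only the plasticity rate eta:

```python
import numpy as np

rng = np.random.default_rng(1)

def lifetime_fitness(eta, trials=20):
    """Fitness of a plastic agent: starting from zero weights, a Hebbian-style
    rule (W += eta * outer(target, input)) learns a random linear mapping."""
    W_true = rng.normal(size=(3, 3))     # the environment's input->output map
    W = np.zeros((3, 3))                 # plastic weights, learned in-life
    for _ in range(trials):
        x = rng.normal(size=3)           # sensory input
        y_target = W_true @ x            # feedback signal from the environment
        W += eta * np.outer(y_target, x) # plastic update during the lifetime
    x_test = rng.normal(size=3)
    return -np.linalg.norm(W_true @ x_test - W @ x_test)  # low error = fit

# Evolution optimizes the plasticity parameter, not the weights themselves.
population = list(rng.uniform(0.0, 0.2, size=12))
for gen in range(15):
    parents = sorted(population, key=lifetime_fitness, reverse=True)[:4]
    population = [p + rng.normal(0, 0.01) for p in parents for _ in range(3)]
best_eta = max(population, key=lifetime_fitness)
print("evolved plasticity rate:", round(best_eta, 3))
```

The network size, learning rule, and truncation selection are arbitrary toy choices; real EPANN work evolves structures and neuromodulated rules far richer than a single scalar rate.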
Biological neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed plastic neural networks: artificial systems composed of sensors, outputs, and plastic components that change in response to sensory-output experiences in an environment. These systems may reveal key algorithmic ingredients of adaptation, autonomously discover novel adaptive algorithms, and lead to hypotheses on the emergence of biological adaptation. In particular, the limitations of hand-designed structures and algorithms currently used in most deep neural networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. (Abstract)

Sprague, Kyle, et al. Watch and Learn: A Generalized Approach for Transferrable Learning in Deep Neural Networks via Physical Principles. Machine Learning: Science and Technology. 2/2, 2021. We enter a typical paper from this new Institute of Physics (IOP) journal so as to report current research frontiers as AI neural net facilities join forces with systems physics and quantum organics. Here University of Ottawa, University of Waterloo, and Lawrence Berkeley National Laboratory theorists, including Juan Carrasquilla and Steve Whitelam, discuss the natural affinities that these far-removed realms seem to innately possess. See also Neural Networks and Quantum Field Theory by James Halverson, et al (2/3, 2021) and Natural Evolutionary Strategies for Variational Quantum Computation by Abhinav Anand, et al (2/4, 2021). Altogether, our phenomenal Earthuman abilities can begin a new era of participatory self-observance, description and discovery.
Transfer learning refers to the use of knowledge gained while solving a machine learning task and applying it to the solution of a closely related problem. Here we demonstrate an unsupervised learning approach augmented with physical principles that achieves transferrable content for problems in statistical physics across different regimes. By coupling a sequence model based on a recurrent neural network to an extensive deep neural network, we are able to discern the equilibrium probability distributions and inter-particle interaction models of classical statistical mechanical systems. This constitutes a fully transferrable physics-based learning in a generalizable approach. (Sprague Abstract)

Stanley, Kenneth, et al. Designing Neural Networks through Neuroevolution. Nature Machine Intelligence. January, 2019. Uber AI Labs, San Francisco researchers including Jeff Clune provide a tutorial to date for this active field, which intentionally but respectfully facilitates external cognitive faculties. See also in this new journal and issue Evolving Embodied Intelligence from Materials to Machines by David Howard, et al.

Much of recent machine learning has focused on deep learning, in which neural network weights are trained through variants of stochastic gradient descent. An alternative approach comes from the field of neuroevolution, which harnesses evolutionary algorithms to optimize neural networks, inspired by the fact that natural brains themselves are the products of an evolutionary process. Neuroevolution enables important capabilities that are typically unavailable to gradient-based approaches, including learning neural network building blocks, hyperparameters, architectures and algorithms for learning itself. Neuroevolution differs from deep reinforcement learning by maintaining a population of solutions during search, enabling exploration and parallelization.
This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field's contributions to meta-learning and architecture search. (Abstract excerpt)

Stevenson, Claire, et al. Do Large Language Models Solve Verbal Analogies Like Children Do? arXiv:2310.20384. University of Amsterdam psychologists including Ekaterina Shutova cite another present recognition of a basic correspondence between how youngsters draw on commonalities and associations among items or situations and what these AI chatbot procedures also seem to be trying to do.
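For a flavor of machine analogy-solving, the classic vector-offset heuristic is sketched below (a standard baseline, not the authors' LLM-prompting study; the word vectors are hand-made toys, whereas real systems learn them from text):

```python
import numpy as np

# Hand-made toy vectors (purely illustrative; real models learn embeddings).
vocab = {
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([1.0, 1.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0]),
    "queen": np.array([1.0, 1.0, 1.0]),
    "child": np.array([1.0, 0.5, 0.0]),
    "apple": np.array([0.0, 0.0, 0.5]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def solve_analogy(a, b, c):
    """Solve a : b :: c : ? with the vector-offset heuristic."""
    target = vocab[b] - vocab[a] + vocab[c]
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(vocab[w], target))

print(solve_analogy("man", "woman", "king"))  # -> queen
```

The relational offset (woman - man) carried over to "king" is the machine analogue of a child mapping a familiar relation onto a new pair.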
Taylor, P., et al. The Global Landscape of Cognition: Hierarchical Aggregation as an Organizational Principle of Human Cortical Networks and Functions. Nature Scientific Reports. 5/18112, 2015. As the deep neural network revolution began via theory and neuroimaging, UMass Amherst neuroscientists including Hava Siegelmann attest to a nested connectome architecture which then serves cognitive achievements. On page 15, a graphic pyramid rises from a somatosensory, prosodic base through five stages to reason, language, and visual concepts. Might one now imagine this scale as a personal ontogeny recap of life's evolutionary sapient awakening? See Deep Neural Networks Abstract Like Humans by Alex Gain and Hava Siegelmann at arXiv:1905.11515 for a 2019 version.