Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Planetary Prodigy: A Sapiensphere Comes to Her/His Own Knowledge

1. AI: A Survey of Deep Neural Network Learning

Schmidhuber, Jurgen. Deep Learning in Neural Networks: An Overview. Neural Networks. 61/2, 2015. A technical tutorial by the University of Lugano, Switzerland expert on advances in artificial or machine learning techniques, based on how our own brains think. Sophisticated algorithms, multiple processing layers with complex structures, assignment paths, non-linear transformations, and so on are at work as they refer new experiences to prior representations for comparison. See also, for example, Semantics, Representations and Grammars for Deep Learning by David Balduzzi at arXiv:1509.08627. Our interest recalls recent proposals by Richard Watson, Eors Szathmary, et al to appreciate life’s evolution as quite akin to a neural net, connectionist learning process.

Schneider, Susan. Artificial You: AI and the Future of Your Mind. Princeton: Princeton University Press, 2019. The NASA/Baruch Blumberg Chair at the Library of Congress and cultural communicator provides an accessible, perceptive survey of these diverse algorithmic augmentations as they rush in to reinvent, empower and maybe imperil persons and societies. Of especial interest is the chapter A Universe of Singularities in a Postbiological Cosmos, which assumes that a transfer (takeover) by degrees from human beings (cyborgian) to myriad technological devices (a Computocene phase) will have occurred billions of times across the galaxies. It is then contended that this prospect needs to be factored into exolife searches.

Schuchardt, Jan, et al. Learning to Evolve. arXiv:1905.03389. Technical University of Munich informatics researchers advance ways to employ evolution-based algorithms, which in turn show how life’s long development can appear as a computational process. From our late vantage, it may seem that a cosmic genesis needs to pass on this genetic-like agency to our own continuance.

Evolution and learning are two of the fundamental mechanisms by which life adapts in order to survive and to transcend limitations. These biological phenomena inspired successful computational methods such as evolutionary algorithms and deep learning. Evolution relies on random mutations and on random genetic recombination. Here we show that learning to evolve, i.e. learning to mutate and recombine better than at random, improves the result of evolution in terms of fitness increase per generation and even in terms of attainable fitness. We use deep reinforcement learning to learn to dynamically adjust the strategy of evolutionary algorithms to varying circumstances. (Abstract)
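The core idea, learning to mutate better than at random, can be caricatured without any deep learning at all. The sketch below is our own illustration rather than the paper's method: it substitutes the classic 1/5th success rule for the authors' learned reinforcement policy, so the mutation step size adapts to how often recent mutations improved fitness.

```python
import random

def sphere(x):
    # Fitness to maximize: negative squared distance from the origin.
    return -sum(v * v for v in x)

def adaptive_ea(dim=5, generations=200, seed=0):
    """A (1+1) evolutionary algorithm whose mutation step size sigma
    adapts online (1/5th success rule), a simple stand-in for a
    learned mutation policy."""
    rng = random.Random(seed)
    parent = [rng.uniform(-5, 5) for _ in range(dim)]
    sigma, successes = 1.0, 0
    for g in range(1, generations + 1):
        child = [v + rng.gauss(0, sigma) for v in parent]
        if sphere(child) > sphere(parent):   # keep only improvements
            parent, successes = child, successes + 1
        if g % 20 == 0:                      # adapt sigma every 20 trials
            sigma *= 1.5 if successes / 20 > 0.2 else 0.5
            successes = 0
    return parent, sigma

best, final_sigma = adaptive_ea()
print(sphere(best))   # far closer to 0 than a typical random start (about -40)
```

The same skeleton holds when, as in the paper, a deep reinforcement learner rather than a fixed rule decides how to adjust the strategy.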

Schuman, Catherine, et al. A Survey of Neuromorphic Computing and Neural Networks in Hardware. arXiv:1705.06963. Oak Ridge National Laboratory and University of Tennessee researchers provide a copious review of progress from years of machine computation to this novel advance, which artificially avails the way our own iterative brains so adeptly recognize shapes and patterns. The moniker neuromorphic refers to hardware modeled on the brain’s own neurons and synapses, the architecture by which we can so quickly say cat or car.

Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history.
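For a concrete flavor of what such hardware implements, here is the standard leaky integrate-and-fire neuron in a few lines of Python, a generic textbook model rather than any particular chip's design:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over discrete time steps.

    The membrane potential decays by `leak` each step, integrates the
    input, and emits a spike (1) when it crosses `threshold`, then resets.
    """
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i
        if v >= threshold:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Constant drive below threshold still spikes once charge accumulates.
print(lif_neuron([0.3] * 10))   # [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
```

Neuromorphic devices realize many such neurons and their synapses directly in silicon, in contrast to the von Neumann separation of memory and processor.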

Sejnowski, Terrence. The Deep Learning Revolution. Cambridge: MIT Press, 2018. The renowned neuroscientist author has been at the innovative center of the AI advance from computational machines to brain-like neural networks since the 1980s. He recounts his national and worldwide experience with many collaborators in this volume, which makes it the best general introduction to the field. A great gift for any student, as the author has also been involved with learning-how-to-learn methods for schools. The book is filled with vignettes of Francis Crick, Geoffrey Hinton, Stephen Wolfram, Barbara Oakley, John Hopfield, Sydney Brenner, Christof Koch and others across the years. An example of his interests and reach is as a speaker at the 2016 Grand Challenges in 21st Century Science (Google) in Singapore.

Terrence J. Sejnowski holds the Francis Crick Chair at the Salk Institute for Biological Studies and is a Distinguished Professor at the University of California, San Diego. He was a member of the advisory committee for the Obama administration's BRAIN initiative and is founding President of the Neural Information Processing (NIPS) Foundation. He has published twelve books, including (with Patricia Churchland) The Computational Brain (25th Anniversary Edition, MIT Press).

Senior, Andrew, et al. Improved Protein Structure Prediction using Potentials from Deep Learning. Nature. 577/706, 2020. Nineteen DeepMind London and University College London researchers including Demis Hassabis describe novel algorithms able to study and predict the folded structures of life’s primary, variegated, multi-purpose biomolecule. Dubbed AlphaFold, the advance is touted as a good example of how Artificial Intelligence AI can be of increasing value and utility. A commentary in the same issue is Protein Structure Prediction Gets Real by Mohammed AlQuraishi (627).

Shafiee, Mohammad, et al. Evolution in Groups: A Deeper Look at Synaptic Cluster Driven Evolution of Deep Neural Networks. arXiv:1704.02081. Shafiee and Elnaz Barshan, Iranian-Canadian University of Waterloo systems engineers, and Alexander Wong, DarwinAI (Waterloo), advance this frontier of a universe to human interpretation via multiplex computational cognitive dynamics. Their novel insight is a biological evolutionary setting by way of a deeper genetic and cerebral intelligence. Nature’s emergent profusion of cerebral nets is then alluded to as a generative offspring. See also the authors’ prior award-winning papers Evolutionary Synthesis of Deep Neural Networks via Synaptic Cluster-driven Genetic Encoding at 1609.01360 and Deep Learning with Darwin at 1606.04393.

A promising paradigm for achieving highly efficient deep neural networks is the idea of evolutionary deep intelligence, which mimics biological evolution processes to progressively synthesize more efficient networks. A crucial design factor in evolutionary deep intelligence is the genetic encoding scheme used to simulate heredity and determine the architectures of offspring networks. In this study, we take a deeper look at the notion of synaptic cluster-driven evolution of deep neural networks which guides the evolution process towards the formation of a highly sparse set of synaptic clusters in offspring networks. Utilizing a synaptic cluster-driven genetic encoding, the probabilistic encoding of synaptic traits considers not only individual synaptic properties but also inter-synaptic relationships within a deep neural network. (Abstract)

Shallue, Christopher and Andrew Vanderburg. Identifying Exoplanets with Deep Learning. arXiv:1712.05044. With A Five Planet Resonant Chain Around Kepler-80 and an Eighth Planet Around Kepler-90 subtitle, a Google Brain software engineer and a UT Austin astronomer report a novel application of artificial machine intelligence to successfully analyze huge data inputs from this planet-finder satellite. The achievement received wide press notice, along with a December 14 conference: NASA and Google to Announce AI Breakthrough.

NASA's Kepler Space Telescope was designed to determine the frequency of Earth-sized planets orbiting Sun-like stars, but these planets are on the very edge of the mission's detection sensitivity. Accurately determining the occurrence rate of these planets will require automatically and accurately assessing the likelihood that individual candidates are indeed planets, even at low signal-to-noise ratios. We present a method for classifying potential planet signals using deep learning, a class of machine learning algorithms that have recently become state-of-the-art in a wide variety of tasks. We train a deep convolutional neural network to predict whether a given signal is a transiting exoplanet or a false positive caused by astrophysical or instrumental phenomena.

We apply our model to a new set of candidate signals that we identified in a search of known Kepler multi-planet systems. We statistically validate two new planets that are identified with high confidence by our model. One of these planets is part of a five-planet resonant chain around Kepler-80, with an orbital period closely matching the prediction by three-body Laplace relations. The other planet orbits Kepler-90, a star which was previously known to host seven transiting planets. Our discovery of an eighth planet brings Kepler-90 into a tie with our Sun as the star known to host the most planets. (Abstract)
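A drastically simplified sketch of the classification idea follows: a single hand-set 1D convolution kernel that responds to a box-shaped brightness dip, with a sigmoid squashing the peak response into a pseudo-probability. The real model trains many filters and dense layers on labeled Kepler signals; the kernel and the score function here are our illustrative inventions.

```python
import math

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def transit_score(light_curve):
    """Sigmoid score for a box-shaped transit dip. The hand-set kernel
    stands in for learned convolutional filters (illustrative only)."""
    kernel = [-1.0, -1.0, 2.0, 2.0, 2.0, -1.0, -1.0]
    # The kernel responds most negatively where the flux dips,
    # so negate the responses and take the strongest one.
    peak = max(-a for a in conv1d(light_curve, kernel))
    return 1.0 / (1.0 + math.exp(-peak))   # squash into (0, 1)

flat = [1.0] * 20
transit = [1.0] * 8 + [0.4] * 4 + [1.0] * 8   # a dip mid-curve
print(transit_score(transit) > transit_score(flat))   # True
```

Training replaces the hand-set kernel with filters learned from thousands of labeled transits and false positives, which is what lets the network operate at low signal-to-noise ratios.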

Sheneman, Leigh and Arend Hintze. Evolving Autonomous Learning in Cognitive Networks. Nature Scientific Reports. 7/16712, 2017. Michigan State University computer scientists post an example of the ongoing revision of artificial intelligence, broadly conceived, away from decades of dead mechanism toward a vital accord with evolutionary cerebral architectures and activities. See also The Role of Conditional Independence in the Evolution of Intelligent Systems from this group including Larissa Albantakis at arXiv:1801.05462.

There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been previously combined, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal, but instead can generate internal feedback that is then used to learn. This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning. (Abstract)
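The feedback-gate idea can be caricatured in a few lines of Python. The class name, update rule, and reward scheme below are our own illustrative simplifications, not the paper's Markov Brain implementation: a probabilistic gate keeps a probability table and an internally generated reward nudges that table, with no external objective signal.

```python
import random

class FeedbackGate:
    """A toy probabilistic logic gate with internal feedback, an
    illustrative simplification of the paper's feedback gates."""

    def __init__(self, seed=1):
        self.rng = random.Random(seed)
        self.p_one = {0: 0.5, 1: 0.5}   # P(output = 1 | input bit)

    def fire(self, bit):
        return 1 if self.rng.random() < self.p_one[bit] else 0

    def feedback(self, bit, out, reward, rate=0.2):
        # An internally generated reward nudges the probability table
        # toward (positive reward) or away from (negative) the output.
        target = out if reward > 0 else 1 - out
        self.p_one[bit] += rate * (target - self.p_one[bit])

gate = FeedbackGate()
# Internal reward for copying the input: the gate learns identity
# without any external objective signal.
for _ in range(200):
    bit = gate.rng.randint(0, 1)
    out = gate.fire(bit)
    gate.feedback(bit, out, reward=1 if out == bit else -1)
print(gate.p_one)   # p_one[1] ends near 1, p_one[0] near 0
```

In the paper, evolution decides where such gates sit in the network, so that learning itself becomes an evolved trait.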

Silver, David, et al. Mastering the Game of Go without Human Knowledge. Nature. 550/354, 2017. An 18 member team (all male) from Google’s DeepMind London artificial intelligence group, including founder Demis Hassabis and AlphaGo European winner Fan Hui, enhances the capabilities of their neural network learning programs. With regard to the second quote for the gist of the paper, these algorithmic, reinforcement methods appear as a microcosm of an ascendant, self-reinforcing evolutionary education as it may at last reach a consummate worldwise sapience. While we are wary of game metaphors, a vital truth could be gleaned: a universe to human quickening procreation seems like a game that plays itself. In this regard, it may be that only one sentient ovoplanet is needed to achieve its self-observation and realization, so as in this venue, "to log on to itself." While life’s course is a long slog of stochastic chance, rife with injustice and tragedy, it is a game that yet can be won. As Great Earth, Natural Algorithms, Cosmo Opus and elsewhere try to evoke, our Geonate moment may give us an opportunity to be the fittest people and planet by virtue of a Cosmonate act of self-selection and continuance.

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo. (Abstract)

Conclusion Humankind has accumulated Go knowledge from millions of games played over thousands of years, collectively distilled into patterns, proverbs and books. In the space of a few days, starting tabula rasa, AlphaGo Zero was able to rediscover much of this Go knowledge, as well as novel strategies that provide new insights into the oldest of games. (358)
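The tabula rasa self-play loop can be miniaturized onto a trivially small game. This sketch, our own analogy rather than DeepMind's method, replaces the deep network and tree search with a plain value table on the game of Nim, but keeps the essential circularity: the program improves only by playing against itself.

```python
import random

def self_play_nim(episodes=5000, stones=7, seed=0):
    """Learn a value table V[n] = value of n stones for the player to
    move (take 1 or 2 stones; taking the last stone wins), by self-play."""
    rng = random.Random(seed)
    V = {n: 0.0 for n in range(stones + 1)}
    V[0] = -1.0                        # empty pile: the mover already lost
    for _ in range(episodes):
        n = stones
        while n > 0:
            moves = [m for m in (1, 2) if m <= n]
            if rng.random() < 0.2:     # explore an occasional random move
                m = rng.choice(moves)
            else:                      # exploit: leave the opponent the
                m = min(moves, key=lambda k: V[n - k])  # worst position
            # Zero-sum bootstrap: my value is minus the opponent's value.
            V[n] += 0.1 * (-V[n - m] - V[n])
            n -= m
        # The same table plays both sides, so improvement self-reinforces.
    return V

V = self_play_nim()
print(V[3] < 0 < V[4])   # multiples of 3 should be losing positions
```

AlphaGo Zero follows the same circuit at vastly greater scale, with a deep network in place of the table and Monte Carlo tree search in place of the one-step lookahead.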

Soltoggio, Andrea, et al. Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks. Neural Networks. 108/48, 2018. Loughborough University, University of Central Florida and University of Copenhagen computer scientists draw upon the evolutionary and biological origins of this ubiquitous multicomplex learning system to achieve further understandings and usages. Their theme is that life’s temporal development seems to be a learning, neuromodulation, plasticity, and discovery progression. The approach is seen as akin to the Evolutionary Neurodynamics school of Richard Watson, et al, see section V.C. See also herein Evolution in Groups: A Deeper Look at Synaptic Cluster Driven Evolution of Deep Neural Networks (M. Shafiee) and other similar entries.

Biological neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed plastic neural networks, artificial systems composed of sensors, outputs, and plastic components that change in response to sensory-output experiences in an environment. These systems may reveal key algorithmic ingredients of adaptation, autonomously discover novel adaptive algorithms, and lead to hypotheses on the emergence of biological adaptation. In particular, the limitations of hand-designed structures and algorithms currently used in most deep neural networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. (Abstract)

Over the course of millions of years, evolution has led to the emergence of innumerable biological systems, and intelligence itself, crowned by the discovery of the human brain. Evolution, development, and learning are the fundamental processes that underpin biological intelligence. Thus, it is no surprise that scientists have modeled artificial systems to reproduce such phenomena. However, our current knowledge of evolution, biology, and neuroscience remains insufficient to provide clear guidance on the essential mechanisms that are key to the emergence of such complex systems. (1)

This paper frames the field that attempts to evolve plastic artificial neural networks, and introduces the acronym EPANN. EPANNs are evolved because parts of their design are determined by an evolutionary algorithm; they are plastic because they undergo various time-scale changes, beyond neural activity, while experiencing sensory-motor information streams during a lifetime simulation. The final capabilities of such networks are a result of genetic instructions, determined by evolution, that enable learning once the network is placed in an environment. Static ANNs with evolved connection weights are not considered EPANN. (1) Whilst the range of inspiring ideas is large and heterogeneous, the analysis in this review proposes that EPANNs build upon five main concepts: evolutionary processes, inspiration from biological neural networks, abstractions in brain simulations, artificial plastic neural networks, and intelligence-testing environments. (3)
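The two nested time scales that define an EPANN, evolution shaping the rules of lifetime plasticity, can be sketched minimally. Here a single plastic weight adapts during its "lifetime" by a Hebbian-style rule, while an outer evolutionary loop breeds the plasticity coefficient eta; the fitness task and all names are our own illustrative choices, not the paper's system.

```python
import random

def lifetime_fitness(eta, trials=20, rng=None):
    """Lifetime plasticity: a weight w adapts online via a Hebbian-style
    rule while the network is scored on tracking the mapping y = 2x."""
    rng = rng or random.Random(0)
    w, err = 0.0, 0.0
    for _ in range(trials):
        x = rng.uniform(-1.0, 1.0)
        y = 2.0 * x
        err += abs(y - w * x)            # error before this trial's update
        w += eta * x * (y - w * x)       # plastic change during the "life"
    return -err

def evolve_plasticity(pop=20, gens=30, seed=3):
    """Outer evolutionary loop: breed the plasticity coefficient eta
    so that lifetime learning is as fast as possible."""
    rng = random.Random(seed)
    etas = [rng.uniform(0.0, 2.0) for _ in range(pop)]
    for _ in range(gens):
        etas.sort(key=lambda e: lifetime_fitness(e, rng=random.Random(0)),
                  reverse=True)
        elite = etas[: pop // 2]
        etas = elite + [max(0.0, e + rng.gauss(0, 0.1)) for e in elite]
    return max(etas, key=lambda e: lifetime_fitness(e, rng=random.Random(0)))

best_eta = evolve_plasticity()
print(best_eta)   # an evolved, effective learning rate
```

Note that evolution never sees the weight itself, only how well a lifetime of plastic changes turned out, which is the EPANN division of labor between genetic instructions and learning.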

Stanley, Kenneth, et al. Designing Neural Networks through Neuroevolution. Nature Machine Intelligence. January, 2019. Uber AI Labs, San Francisco researchers including Jeff Clune provide a tutorial to date for this active field, which seeks to intentionally but respectfully facilitate external cognitive facilities. See also in this new journal and issue Evolving Embodied Intelligence from Materials to Machines by David Howard, et al.

Much of recent machine learning has focused on deep learning, in which neural network weights are trained through variants of stochastic gradient descent. An alternative approach comes from the field of neuroevolution, which harnesses evolutionary algorithms to optimize neural networks, inspired by the fact that natural brains themselves are the products of an evolutionary process. Neuroevolution enables important capabilities that are typically unavailable to gradient-based approaches, including learning neural network building blocks, hyperparameters, architectures and algorithms for learning itself. Neuroevolution differs from deep reinforcement learning by maintaining a population of solutions during search, enabling exploration and parallelization. This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search. (Abstract excerpt)
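A minimal neuroevolution example in the spirit of the review: evolving the weights of a tiny fixed-architecture network on XOR with mutation and elitism only. The architecture and hyperparameters are our illustrative choices, not the paper's.

```python
import math
import random

def forward(w, x1, x2):
    """A fixed 2-2-1 network: tanh hidden units, sigmoid output.
    w is a flat genome of 9 weights (6 hidden + 3 output)."""
    h1 = math.tanh(w[0] * x1 + w[1] * x2 + w[2])
    h2 = math.tanh(w[3] * x1 + w[4] * x2 + w[5])
    return 1.0 / (1.0 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

XOR = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def fitness(w):
    # Negative squared error over the four XOR cases (higher is better).
    return -sum((forward(w, a, b) - y) ** 2 for a, b, y in XOR)

def neuroevolve(pop=50, gens=300, seed=2):
    rng = random.Random(seed)
    P = [[rng.uniform(-2, 2) for _ in range(9)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        elite = P[: pop // 5]                  # keep the top 20%
        P = elite + [[g + rng.gauss(0, 0.3) for g in rng.choice(elite)]
                     for _ in range(pop - len(elite))]
    return max(P, key=fitness)

w = neuroevolve()
print(fitness(w))
```

No gradient is ever computed; selection over a population does all the work, which is also what lets neuroevolution search over architectures and learning rules where gradients are unavailable.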
