Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Ruggeri, Azzurra, et al. Preschoolers search longer when there is more information to be gained. Developmental Science. 27/1, 2024. Senior psychologists Azzurra Ruggeri, MPI Human Development, Oana Stanciu, Central European University, Madeline Pelz, MIT, Alison Gopnik, UC Berkeley, and Eric Schulz, MPI Biological Cybernetics provide new insights into how children proactively seek and acquire knowledge, and recommend that this process could serve Large Language Models if it were written into their algorithms.

What drives children to explore and learn when external rewards are uncertain or absent? We tested whether information gain itself acts as an internal reward and suffices to motivate children's actions. We measured 24–56-month-olds' behavior in a game where they had to search for an object with uncertainty about which specific object was hidden. We found that children were more persistent in their search when there was higher ambiguity and more information to be gained. Our results highlight the importance of artificial intelligence research to invest in curiosity-driven algorithms. (Abstract)

All in all, these findings consolidate our understanding of children’s motivation to learn and explore, and have strong implications for developmental psychology and artificial intelligence. The results are consistent with a theory of children’s exploration and learning driven by uncertainty reduction. From an artificial intelligence view, they lend further support to the idea that to build computational machines that learn like children, one should build curiosity-based systems and design algorithms motivated by the underlying expected IG (information gain) of their actions. (6)
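
The expected-IG idea lends itself to a few lines of code. The following is a minimal sketch, assuming an invented hide-and-seek game and threshold, of an agent that keeps probing while its expected information gain stays high; it illustrates the principle, not the study's model.

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a belief over which object is hidden."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def expected_info_gain(belief, probe):
    """Expected entropy reduction from checking one candidate: with
    probability belief[probe] the search ends (zero entropy remains);
    otherwise that candidate is eliminated and belief renormalizes."""
    p = belief[probe]
    rest = {h: q / (1 - p) for h, q in belief.items() if h != probe}
    return entropy(belief) - (1 - p) * entropy(rest)

def search(belief, ig_threshold=0.1):
    """Probe while there is still enough information to be gained."""
    steps = 0
    while entropy(belief) > 0:
        probe = max(belief, key=lambda h: expected_info_gain(belief, h))
        if expected_info_gain(belief, probe) < ig_threshold:
            break  # too little left to learn, so persistence wanes
        # for brevity, assume the probe comes up empty: eliminate it
        belief = {h: q for h, q in belief.items() if h != probe}
        total = sum(belief.values())
        belief = {h: q / total for h, q in belief.items()}
        steps += 1
    return steps

# Higher ambiguity (a flatter belief over more objects) sustains search:
print(search({f"toy{i}": 1 / 8 for i in range(8)}))  # persists longer
print(search({"toy0": 0.9, "toy1": 0.1}))            # gives up sooner
```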

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin, 2020. A timely volume by the senior UC Berkeley computer scientist authority, which serves as a basic guide to this disparate frontier field. See his latest article If We Succeed in Daedalus for April 2022, along with a 2019 book and current articles by Melanie Mitchell.

Superhuman artificial intelligence is a tidal wave that threatens not just jobs and human relationships, but civilization itself. AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to accelerated scientific research. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage. Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. (Publisher)

Since its inception, AI has operated within a standard model whereby systems are designed to optimize a fixed, known objective. This model has been increasingly successful. I briefly summarize the state of the art and its likely evolution over the next decade. At the same time, the standard model will become progressively untenable in real-world applications because of the difficulty of specifying objectives completely and correctly. I propose a new model for AI development in which the machine’s uncertainty about the true objective leads to qualitatively new modes of behavior that are more robust, controllable, and deferential. (Article)
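
As a toy rendering of that proposal, and only a sketch under assumed names and numbers, a machine can hold several candidate objectives, weigh actions by its belief over them, and defer to the human when the candidates disagree too sharply about its chosen action.

```python
# Hypothetical example: uncertainty over the true objective yields
# deferential behavior. The reward functions, beliefs, and threshold
# are all invented for illustration.

def expected_value(action, reward_fns, beliefs):
    return sum(b * r(action) for r, b in zip(reward_fns, beliefs))

def choose(actions, reward_fns, beliefs, disagreement_limit=0.5):
    best = max(actions, key=lambda a: expected_value(a, reward_fns, beliefs))
    spread = max(r(best) for r in reward_fns) - min(r(best) for r in reward_fns)
    if spread > disagreement_limit:
        return "defer to the human"  # candidates disagree, so ask first
    return best

# Two hypotheses about the temperature a human actually prefers:
candidates = [lambda t: -abs(t - 20), lambda t: -abs(t - 24)]
print(choose(range(16, 28), candidates, beliefs=[0.5, 0.5]))
```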

Schmidhuber, Jurgen. Deep Learning in Neural Networks: An Overview. Neural Networks. 61/2, 2015. A technical tutorial by the University of Lugano, Switzerland expert on advances in artificial or machine learning techniques, based on how our own brains think. Sophisticated algorithms, multiple processing layers with complex structures, assignment paths, non-linear transformations, and so on are at work as they refer new experiences to prior representations for comparison. See also, for example, Semantics, Representations and Grammars for Deep Learning by David Balduzzi at arXiv:1509.08627. Our interest recalls recent proposals by Richard Watson, Eors Szathmary, et al to appreciate life’s evolution as quite akin to a neural net, connectionist learning process.
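
Those stacked non-linear transformations reduce to very little code. A minimal numpy forward pass, with layer sizes invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One processing layer: an affine map followed by a non-linearity."""
    return np.tanh(x @ w + b)

# Three stacked layers, each re-representing the previous layer's output.
sizes = [8, 16, 16, 4]
params = [(0.3 * rng.normal(size=(m, n)), np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

x = rng.normal(size=(1, 8))   # a new input pattern / experience
for w, b in params:
    x = layer(x, w, b)        # successive comparison to prior representations
print(x.shape)                # (1, 4): the network's output code
```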

Schneider, Susan. Artificial You: AI and the Future of Your Mind. Princeton: Princeton University Press, 2019. The NASA/Baruch Blumberg Chair at the Library of Congress and cultural communicator provides an accessible, perceptive survey of these diverse algorithmic augmentations as they rush in to reinvent, empower, and maybe imperil persons and societies. Of special interest is the chapter A Universe of Singularities in a Postbiological Cosmos, wherein it is surmised that a gradual transfer (takeover) from human beings (cyborgian) to myriad technological devices (a Computocene phase) will have occurred by the billions across the galaxies. It is then contended that this occasion needs to be factored into exolife searches.

Schuchardt, Jan, et al. Learning to Evolve. arXiv:1905.03389. Technical University of Munich informatics researchers advance ways to employ evolution-based algorithms, which in turn show how life’s long development can appear as a computational process. From our late vantage, it may seem that a cosmic genesis needs to pass on this genetic-like agency to our own continuance.

Evolution and learning are two of the fundamental mechanisms by which life adapts in order to survive and to transcend limitations. These biological phenomena inspired successful computational methods such as evolutionary algorithms and deep learning. Evolution relies on random mutations and on random genetic recombination. Here we show that learning to evolve, i.e. learning to mutate and recombine better than at random, improves the result of evolution in terms of fitness increase per generation and even in terms of attainable fitness. We use deep reinforcement learning to learn to dynamically adjust the strategy of evolutionary algorithms to varying circumstances. (Abstract)
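
The paper uses deep reinforcement learning; as a deliberately small stand-in, the sketch below lets a simple value-tracking learner choose among mutation rates by the fitness gain each delivers per generation on a toy OneMax problem. All rates and sizes are invented.

```python
import random

def fitness(genome):
    """Toy objective (OneMax): count the 1-bits."""
    return sum(genome)

def mutate(genome, rate):
    """Flip each bit independently with the given probability."""
    return [g ^ (random.random() < rate) for g in genome]

rates = [0.01, 0.05, 0.2]            # candidate mutation strategies
value = {r: 0.0 for r in rates}      # learned worth of each strategy

pop = [[random.randint(0, 1) for _ in range(40)] for _ in range(20)]
for gen in range(100):
    # epsilon-greedy choice stands in for the paper's deep RL controller
    r = max(value, key=value.get) if random.random() > 0.1 else random.choice(rates)
    before = max(map(fitness, pop))
    pop = sorted((mutate(p, r) for p in pop + pop), key=fitness)[-20:]
    gain = max(map(fitness, pop)) - before    # fitness increase per generation
    value[r] += 0.1 * (gain - value[r])       # learn to mutate better than random
print(max(map(fitness, pop)), value)
```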

Schuman, Catherine, et al. A Survey of Neuromorphic Computing and Neural Networks in Hardware. arXiv:1705.06963. Oak Ridge National Laboratory and University of Tennessee researchers provide a copious review of progress from years of machine computation to this novel advance, which artificially avails the way our own iterative brains so adeptly recognize shapes and patterns. The moniker neuromorphic alludes to how quickly we can see and say cat or car.

Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. (Abstract)
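
A single spiking unit conveys the contrast with the von Neumann model: state and computation live together in the neuron. A minimal leaky integrate-and-fire sketch, with invented constants:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    integrates input current, and emits a spike at threshold."""
    v, spikes = 0.0, []
    for i in input_current:
        v += dt / tau * (-v + i)   # leaky integration
        if v >= v_thresh:
            spikes.append(1)       # fire
            v = v_reset            # and reset
        else:
            spikes.append(0)
    return spikes

# A steady drive yields a regular spike train; networks of such units,
# rather than fetched instructions, carry the computation.
print(lif_neuron(np.full(50, 1.5)))
```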

Seif, Alireza, et al. Machine Learning the Thermodynamic Arrow of Time. Nature Physics. 17/1, 2021. We cite this entry by University of Maryland physicists, including Chris Jarzynski, as an example of how these 2020s bio-based neural net techniques, which run iterative programmed computations, can serve as an advanced spiral stage of worldwise scientific studies. In this case, the old arrow-of-time problem gains a new depth of understanding heretofore inaccessible.

The asymmetry in the flow of events that is expressed as “time’s arrow” traces back to the second law of thermodynamics. In the microscopic regime, fluctuations prevent us from discerning the direction of time with certainty. Here, we find that a machine learning algorithm trained to infer the direction of time’s arrow identifies entropy production as the relevant physical quantity in its decision-making process. The algorithm rediscovers the fluctuation theorem as the prime thermodynamic principle. Our results indicate that machine learning methods can be used to study out-of-equilibrium systems and begin to uncover deep physical principles. (Abstract)
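
The setup can be paraphrased in a toy form: label microscopic trajectories as forward or time-reversed, fit a classifier, and observe that its decision variable tracks the net dissipation. A minimal sketch with assumed drift-diffusion dynamics (not the paper's systems):

```python
import numpy as np

rng = np.random.default_rng(1)

def trajectory(n=100, drift=0.05, noise=0.3):
    """Overdamped drifted diffusion; the drift breaks time symmetry."""
    return np.cumsum(drift + noise * rng.normal(size=n))

# Forward trajectories versus their time reversals.
X = np.array([trajectory() for _ in range(500)])
X = np.vstack([X, X[:, ::-1]])
y = np.array([1] * 500 + [0] * 500)   # 1 = runs forward in time

# Logistic regression on the increments. For constant drift the learned
# decision variable approaches the net displacement, proportional to the
# entropy production -- the quantity the paper reports its network finds.
F = np.diff(X, axis=1)
w = np.zeros(F.shape[1])
for _ in range(200):
    p = 1 / (1 + np.exp(-F @ w))
    w += 0.01 * F.T @ (y - p) / len(y)
print(f"weights: mean {w.mean():.3f}, std {w.std():.3f} (nearly uniform)")
```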

Sejnowski, Terrence. The Deep Learning Revolution. Cambridge: MIT Press, 2018. The renowned neuroscientist author has been at the innovative center of the AI advance from computational machines to brain, behavior, and neural networks since the 1980s. He recounts his national and worldwide experience with many collaborators in this volume, which makes it the best general introduction to the field. A great gift for any student, as the author has also been involved with learning-how-to-learn methods for schools. The book is filled with vignettes of Francis Crick, Geoffrey Hinton, Stephen Wolfram, Barbara Oakley, John Hopfield, Sydney Brenner, Christof Koch and others across the years. An example of his interests and reach was as a speaker at the 2016 Grand Challenges in 21st Century Science (Google) in Singapore.

Terrence J. Sejnowski holds the Francis Crick Chair at the Salk Institute for Biological Studies and is a Distinguished Professor at the University of California, San Diego. He was a member of the advisory committee for the Obama administration's BRAIN initiative and is founding President of the Neural Information Processing Systems (NIPS) Foundation. He has published twelve books, including (with Patricia Churchland) The Computational Brain (25th Anniversary Edition, MIT Press).

Sejnowski, Terrence. The Unreasonable Effectiveness of Deep Learning in Artificial Intelligence. Proceedings of the National Academy of Sciences. 117/30033, 2020. The senior Salk Institute neurobiologist introduces a Colloquium on the Science of Deep Learning as this AI neural net frontier goes rapidly forward. Some papers are Emergent Linguistic Structure in Artificial Neural Networks and Algorithms as Discrimination Detectors.

Deep learning networks have been trained to recognize speech, caption photographs, and translate text between languages. Although applications of deep learning networks to real-world problems have become ubiquitous, a deep understanding of why they are so effective lags behind. Paradoxes in their training and effectiveness are being investigated by way of the geometry of high-dimensional spaces. A mathematical theory would illuminate how they function, assess the strengths and weaknesses of network architectures, and more. (Abstract excerpt)

Senior, Andrew, et al. Improved Protein Structure Prediction using Potentials from Deep Learning. Nature. 577/706, 2020. Nineteen DeepMind London and University College London researchers, including Demis Hassabis, describe novel algorithms which are able to study, predict, and create life’s primary, variegated, multi-purpose biomolecule. Dubbed AlphaFold, the advance is touted as a good example of how artificial intelligence (AI) can be of increasing value and utility. A commentary in the same issue is Protein Structure Prediction Gets Real by Mohammed AlQuraishi (627).
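
The title's "potentials from deep learning" can be miniaturized: treat predicted inter-residue distances as minima of a smooth potential and fold a chain by gradient descent. In this sketch the "predictions" are random stand-ins for AlphaFold's learned distogram, so only the optimization step is genuine:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20                                   # residues in a toy chain
pred = rng.uniform(3.0, 12.0, (n, n))    # fake predicted distances
pred = (pred + pred.T) / 2               # symmetric targets

xyz = rng.normal(size=(n, 3))            # unfolded starting coordinates
for _ in range(2000):                    # descend V = sum (d_ij - pred_ij)^2
    diff = xyz[:, None, :] - xyz[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(d, 1.0)             # dummy value, masked below
    err = d - pred
    np.fill_diagonal(err, 0.0)
    grad = 2 * ((err / d)[:, :, None] * diff).sum(axis=1)
    xyz -= 0.002 * grad
# Random targets are not fully embeddable in 3D, so the error plateaus;
# real distograms are geometrically consistent and fold much further.
print("mean |d - pred|:", np.abs(err).mean())
```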

Shafiee, Mohammad, et al. Evolution in Groups: A Deeper Look at Synaptic Cluster Driven Evolution of Deep Neural Networks. arXiv:1704.02081. Shafiee and Elnaz Barshan, Iranian-Canadian systems engineers at the University of Waterloo, with Alexander Wong, DarwinAI (Waterloo), advance this frontier of universe-to-human interpretation via multiplex computational cognitive dynamics. Their novel insight is a biological evolutionary setting by way of a deeper genetic and cerebral intelligence. Nature’s emergent profusion of cerebral nets is then alluded to as generative offspring. See also the authors’ prior award-winning papers Evolutionary Synthesis of Deep Neural Networks via Synaptic Cluster-driven Genetic Encoding at arXiv:1609.01360, and Deep Learning with Darwin at 1606.04393.

A promising paradigm for achieving highly efficient deep neural networks is the idea of evolutionary deep intelligence, which mimics biological evolution processes to progressively synthesize more efficient networks. A crucial design factor in evolutionary deep intelligence is the genetic encoding scheme used to simulate heredity and determine the architectures of offspring networks. In this study, we take a deeper look at the notion of synaptic cluster-driven evolution of deep neural networks which guides the evolution process towards the formation of a highly sparse set of synaptic clusters in offspring networks. Utilizing a synaptic cluster-driven genetic encoding, the probabilistic encoding of synaptic traits considers not only individual synaptic properties but also inter-synaptic relationships within a deep neural network. (Abstract)
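
A cartoon of that encoding, with shapes and probabilities invented: synapse survival in an offspring network is sampled cluster-first, then per synapse, so sparsity arrives in correlated groups rather than one connection at a time.

```python
import numpy as np

rng = np.random.default_rng(3)

def offspring_mask(weights, clusters, keep=0.6):
    """Sample an offspring's synapses cluster-first, then individually.
    A cluster's survival probability follows its mean synaptic strength,
    so strong groups tend to be inherited together."""
    mask = np.zeros_like(weights, dtype=bool)
    for c in np.unique(clusters):
        idx = clusters == c
        strength = np.abs(weights[idx]).mean()
        if rng.random() < min(1.0, strength):         # cluster-level draw
            mask[idx] = rng.random(idx.sum()) < keep  # per-synapse draw
    return mask

w = rng.normal(0, 0.8, size=100)     # a parent layer's synaptic weights
c = rng.integers(0, 10, size=100)    # ten synaptic clusters
m = offspring_mask(w, c)
print("offspring inherits", int(m.sum()), "of 100 synapses, by cluster")
```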

Shallue, Christopher and Andrew Vanderburg. Identifying Exoplanets with Deep Learning. arXiv:1712.05044. With A Five Planet Resonant Chain Around Kepler-80 and an Eighth Planet Around Kepler-90 as the subtitle, a Google Brain software engineer and a UT Austin astronomer report a novel application of machine intelligence that successfully analyzes huge data inputs from this planet-finder satellite. The achievement received wide press notice, along with a December 14 press conference: NASA and Google to Announce AI Breakthrough.

NASA's Kepler Space Telescope was designed to determine the frequency of Earth-sized planets orbiting Sun-like stars, but these planets are on the very edge of the mission's detection sensitivity. Accurately determining the occurrence rate of these planets will require automatically and accurately assessing the likelihood that individual candidates are indeed planets, even at low signal-to-noise ratios. We present a method for classifying potential planet signals using deep learning, a class of machine learning algorithms that have recently become state-of-the-art in a wide variety of tasks. We train a deep convolutional neural network to predict whether a given signal is a transiting exoplanet or a false positive caused by astrophysical or instrumental phenomena.

We apply our model to a new set of candidate signals that we identified in a search of known Kepler multi-planet systems. We statistically validate two new planets that are identified with high confidence by our model. One of these planets is part of a five-planet resonant chain around Kepler-80, with an orbital period closely matching the prediction by three-body Laplace relations. The other planet orbits Kepler-90, a star which was previously known to host seven transiting planets. Our discovery of an eighth planet brings Kepler-90 into a tie with our Sun as the star known to host the most planets. (Abstract)
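
The classifier can be shown schematically: a 1D convolutional network over a phase-folded, binned light curve that outputs a planet probability. This PyTorch sketch mirrors the idea only; the layer sizes are illustrative and not the authors' AstroNet architecture.

```python
import torch
import torch.nn as nn

# Schematic 1D CNN over a folded light curve; sizes are invented.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(4),
    nn.Flatten(),
    nn.Linear(32 * (2001 // 16), 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),   # P(signal is a transiting planet)
)

light_curve = torch.randn(8, 1, 2001)  # a batch of folded, binned curves
print(model(light_curve).shape)        # (8, 1) planet probabilities
```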
