Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Twintelligent Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Pedreschi, Dino, et al. Social AI and the Challenges of the Human-AI Ecosystem. arXiv:2306.13723. Sixteen authorities from Italy, Greece, Sweden, New Zealand, Germany, the UK and the USA, including Albert-László Barabási, Sandy Pentland and Alessandro Vespignani, post an urgent call for a thorough effort to rein in and get in front of this sudden computational prowess bursting upon us. But as Eric Schmidt said on television in August, the problem is that we lack any philosophic basis as a guide to what is going on. As a main premise of this website, we could suggest the phenomenal semblance of an emergent global sapiensphere brain and its own accumulated knowledge.

Large-scale socio-technical systems in which humans interact with artificial intelligence (AI) often lead to social phenomena and tipping points with unexpected and unintended consequences. As a positive benefit, we may learn how to foster the "wisdom of crowds" and collective actions to face public and environmental challenges. In order to understand these effects and issues, and to build next-generation AIs that team with humans to help overcome problems rather than exacerbate them, we propose a Foundations of Social AI project that joins Complex Systems, Network Science and AI. In this perspective paper, we discuss relevant questions, outline technical and scientific challenges and suggest research agendas. (Abstract)

Pfau, David, et al. Accurate computation of quantum excited states with neural networks. Science. Vol. 385/Iss. 6711, 2024. We cite this paper by Google DeepMind London computational scientists as an example of how AI neural net procedures are being readily applied to quantum phenomena, which in turn implies that this fundamental realm has an innate, analytic affinity with cerebral structures and faculties. See also Understanding quantum machine learning also requires rethinking generalization by Elies Gil-Fuster, et al in Nature Communications (15/2277, 2024) for another instance.

Excited states are important in many areas of physics and chemistry; however, scalable, accurate, and robust calculations of their properties from first principles remain a theoretical challenge. Recent advances in computing molecular systems driven by deep learning show much promise. Pfau et al. present a parameter-free mathematics by directly generalizing variational quantum Monte Carlo from ground states. The proposed method achieves accurate excited-state calculations on a number of atoms and molecules, and can be applied to various quantum systems. (Editor Summary)

Ruggeri, Azzurra, et al. Preschoolers search longer when there is more information to be gained. Developmental Science. 27/1, 2024. Senior psychologists Azzurra Ruggeri, MPI Human Development; Oana Stanciu, Central European University; Madeline Pelz, MIT; Alison Gopnik, UC Berkeley; and Eric Schulz, MPI Biological Cybernetics provide new insights into how children proactively seek and acquire knowledge, and recommend that this process could serve Large Language Models if it were written into their algorithms.

What drives children to explore and learn when external rewards are uncertain or absent? We tested whether information gain itself acts as an internal reward and suffices to motivate children's actions. We measured 24–56-month-olds' behavior in a game where they had to search for an object with uncertainty about which specific object was hidden. We found that children were more persistent in their search when there was higher ambiguity and more information to be gained. Our results highlight the importance of artificial intelligence research to invest in curiosity-driven algorithms. (Abstract)

All in all, these findings consolidate our understanding of children’s motivation to learn and explore, and have strong implications for developmental psychology and artificial intelligence. The results are consistent with a theory of children’s exploration and learning driven by uncertainty reduction. From an artificial intelligence view, they lend further support to the idea that to build computational machines that learn like children, one should build curiosity-based systems and design algorithms motivated by the underlying expected IG (information gain) of their actions. (6)
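The expected-information-gain policy that the authors recommend for curiosity-driven algorithms can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical two-outcome probe and a belief distribution over which object is hidden; the function names, likelihoods and threshold are illustrative, not from the paper:

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a belief distribution over hidden objects."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, likelihoods):
    """Expected reduction in entropy from one more probe.

    prior: belief over which object is hidden.
    likelihoods: P(probe succeeds | object i) for each candidate object.
    """
    p_success = sum(p, )if False else sum(p * l for p, l in zip(prior, likelihoods))
    expected_posterior_entropy = 0.0
    for outcome_prob, flags in ((p_success, likelihoods),
                                (1 - p_success, [1 - l for l in likelihoods])):
        if outcome_prob == 0:
            continue
        posterior = [p * f / outcome_prob for p, f in zip(prior, flags)]
        expected_posterior_entropy += outcome_prob * entropy(posterior)
    return entropy(prior) - expected_posterior_entropy

def should_keep_searching(prior, likelihoods, threshold=0.05):
    """Persist while the expected IG of one more action exceeds a threshold."""
    return expected_information_gain(prior, likelihoods) > threshold
```

On this sketch, a child facing high ambiguity (a near-uniform prior) computes a larger expected information gain than one who is already fairly sure, and so keeps searching longer, mirroring the reported result.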

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin, 2020. A timely volume by the senior UC Berkeley computer scientist authority which is a basic guide for this disparate, frontier field. See his latest article If We Succeed in Daedalus for April 2022, along with a 2019 book and current articles by Melanie Mitchell.

Superhuman artificial intelligence is a tidal wave that threatens not just jobs and human relationships, but civilization itself. AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to accelerated scientific research. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage. Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. (Publisher)

Since its inception, AI has operated within a standard model whereby systems are designed to optimize a fixed, known objective. This model has been increasingly successful. I briefly summarize the state of the art and its likely evolution over the next decade. At the same time, the standard model will become progressively untenable in real-world applications because of the difficulty of specifying objectives completely and correctly. I propose a new model for AI development in which the machine’s uncertainty about the true objective leads to qualitatively new modes of behavior that are more robust, controllable, and deferential. (Article)

Scheurer, Christoph and Karsten Reuter. Role of the human-in-the-loop in emerging self-driving laboratories for heterogeneous catalysis. Nature Catalysis. January 29, 2025. We cite this entry by Max Planck Institute researchers as an example of new realizations that AI machinations ought not to be turned loose to run on their own. After some thirty months of ChatGPT, a constant reciprocity of AI inputs and human management is seen to achieve the best balance in ethical co-generative applications.

Self-driving laboratories (SDLs) represent a convergence of machine learning with laboratory automation which operate in active learning situations as algorithms plan experiments that are carried out by automated (robotic) modules. Here we argue against taking humans totally out of the loop. We instead conclude that crucial advances will come from fast proxy experiments and existing apparatus, with real persons engaged in continuous decision-making. (Excerpt)

Schmidhuber, Jurgen. Deep Learning in Neural Networks: An Overview. Neural Networks. 61/2, 2015. A technical tutorial by the University of Lugano, Switzerland expert on advances in artificial or machine learning techniques, based on how our own brains think. Sophisticated algorithms, multiple processing layers with complex structures, assignment paths, non-linear transformations, and so on are at work as they refer new experiences to prior representations for comparison. See also, for example, Semantics, Representations and Grammars for Deep Learning by David Balduzzi at arXiv:1509.08627. Our interest recalls recent proposals by Richard Watson, Eörs Szathmáry, et al to appreciate life’s evolution as quite akin to a neural net, connectionist learning process.

Schneider, Susan. Artificial You: AI and the Future of Your Mind. Princeton: Princeton University Press, 2019. The NASA/Baruch Blumberg Chair at the Library of Congress and cultural communicator provides an accessible, perceptive survey of these diverse algorithmic augmentations as they rush in to reinvent, empower and maybe imperil persons and societies. Of especial interest is the chapter A Universe of Singularities in a Postbiological Cosmos, which assumes that a transfer by degrees from human beings (cyborgian) to myriad technological devices (a Computocene phase) will have occurred billions of times across the galaxies. It is then contended that this occasion needs to be factored into exolife searches.

Schuchardt, Jan, et al. Learning to Evolve. arXiv:1905.03389. Technical University of Munich informatics researchers advance ways to employ evolution-based algorithms, which in turn shows how life’s long development can appear as a computational process. From our late vantage, it may seem that a cosmic genesis needs to pass on this genetic-like agency to our own continuance.

Evolution and learning are two of the fundamental mechanisms by which life adapts in order to survive and to transcend limitations. These biological phenomena inspired successful computational methods such as evolutionary algorithms and deep learning. Evolution relies on random mutations and on random genetic recombination. Here we show that learning to evolve, i.e. learning to mutate and recombine better than at random, improves the result of evolution in terms of fitness increase per generation and even in terms of attainable fitness. We use deep reinforcement learning to learn to dynamically adjust the strategy of evolutionary algorithms to varying circumstances. (Abstract)
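The core idea of adjusting an evolutionary algorithm's strategy online, rather than mutating at a fixed random rate, can be illustrated with a classic self-adaptive heuristic. The paper itself uses deep reinforcement learning for this control; the sketch below substitutes a much simpler success-based rule for a (1+1) evolutionary algorithm on a toy objective, purely to show the adaptive-control principle:

```python
import random

def one_plus_one_ea(fitness, dim=10, steps=200, seed=0):
    """(1+1)-EA whose mutation step size is adapted online: a simplified
    stand-in for learned strategy control (the paper uses deep RL instead)."""
    rng = random.Random(seed)
    parent = [rng.random() for _ in range(dim)]
    best = fitness(parent)
    mut_scale = 0.5
    for _ in range(steps):
        child = [x + rng.gauss(0, mut_scale) for x in parent]
        f = fitness(child)
        if f > best:                  # successful mutation: widen the search
            parent, best = child, f
            mut_scale *= 1.5
        else:                         # failure: narrow the search
            mut_scale *= 0.9
        mut_scale = min(max(mut_scale, 1e-3), 2.0)
    return parent, best

# Toy objective: maximize the negative sphere function (optimum at the origin).
sphere = lambda xs: -sum(x * x for x in xs)
```

Because fitness feedback steers the mutation scale, the search spends fewer generations on unproductive step sizes, the same benefit the authors obtain, far more generally, by learning when and how to mutate and recombine.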

Schuman, Catherine, et al. A Survey of Neuromorphic Computing and Neural Networks in Hardware. arXiv:1705.06963. Oak Ridge Laboratory and University of Tennessee researchers provide a copious review of progress from years of machine computation to this novel advance, which artificially emulates the way our own iterative brains so adeptly recognize shapes and patterns. The moniker neuromorphic refers to this brain-like facility, such as how quickly we can see and say cat or car.

Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history.

Seif, Alireza, et al. Machine Learning the Thermodynamic Arrow of Time. Nature Physics. 17/1, 2022. We cite this entry by University of Maryland physicists including Chris Jarzynski as an example of how these 2020s bio-based neural net techniques, which run iterative programmed computations, can serve as an advanced spiral stage of worldwise scientific studies. In this case, the old arrow of time problem gains a new depth of understanding which was heretofore inaccessible.

The asymmetry in the flow of events that is expressed as “time’s arrow” traces back to the second law of thermodynamics. In the microscopic regime, fluctuations prevent us from discerning the direction of time with certainty. Here, we find that a machine learning algorithm trained to infer the direction of time’s arrow identifies entropy production as the relevant physical quantity in its decision-making process. The algorithm rediscovers the fluctuation theorem as the prime thermodynamic principle. Our results indicate that machine learning methods can be used to study out-of-equilibrium systems and begin to uncover deep physical principles. (Abstract)
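The paper's task can be caricatured in a few lines: generate step sequences from a toy driven-diffusion process, time-reverse half of them, and train a plain classifier to tell the two apart. Everything below, the drift value, the logistic model, the data generator, is an illustrative assumption, far simpler than the authors' physical setups and neural networks:

```python
import numpy as np

def make_step_data(n=400, steps=50, drift=0.2, seed=0):
    """Toy driven-diffusion step sequences. Forward-in-time steps carry a
    small positive drift; time reversal flips the sign of every step."""
    rng = np.random.default_rng(seed)
    fwd = drift + rng.normal(size=(n, steps))
    bwd = -fwd[:, ::-1]                       # time-reversed copies
    X = np.vstack([fwd, bwd])
    y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = forward, 0 = reversed
    return X, y

def train_logistic(X, y, lr=0.1, epochs=100):
    """Plain logistic regression by batch gradient descent (no ML library)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * float(np.mean(p - y))
    return w, b
```

The classifier can only beat chance by latching onto the dissipative drift, the toy analogue of the paper's finding that the trained network's decision variable is the entropy production itself.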

Sejnowski, Terrence. The Deep Learning Revolution. Cambridge: MIT Press, 2018. The renowned neuroscientist author has been at the innovative center of the AI computational machine to brain and behavior neural network advance since the 1980s. He recounts his national and worldwide experience with many collaborators in this volume, which makes it the best general introduction to the field. A great gift for any student, as the author has also been involved with learning how to learn methods for schools. The book is filled with vignettes of Francis Crick, Geoffrey Hinton, Stephen Wolfram, Barbara Oakley, John Hopfield, Sydney Brenner, Christof Koch and others across the years. An example of his interests and reach is as a speaker at the 2016 Grand Challenges in 21st Century Science (Google) in Singapore.

Terrence J. Sejnowski holds the Francis Crick Chair at the Salk Institute for Biological Studies and is a Distinguished Professor at the University of California, San Diego. He was a member of the advisory committee for the Obama administration's BRAIN initiative and is founding President of the Neural Information Processing (NIPS) Foundation. He has published twelve books, including (with Patricia Churchland) The Computational Brain (25th Anniversary Edition, MIT Press).

Sejnowski, Terrence. The Unreasonable Effectiveness of Deep Learning in Artificial Intelligence. Proceedings of the National Academy of Sciences. 117/30033, 2020. The senior Salk Institute neurobiologist introduces a Colloquium on the Science of Deep Learning as this AI neural net frontier goes rapidly forward. Some papers are Emergent Linguistic Structure in Artificial Neural Networks and Algorithms as Discrimination Detectors.

Deep learning networks have been trained to recognize speech, caption photographs, and translate text between languages. Although applications of deep learning networks to real-world problems have become ubiquitous, a deep understanding of why they are so effective lags behind. Paradoxes in their training and effectiveness are being investigated by way of the geometry of high-dimensional spaces. A mathematical theory would illuminate how they function, assess the strengths and weaknesses of network architectures, and more. (Abstract excerpt)
