Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe
Table of Contents
Introduction
Genesis Vision
Learning Planet
Organic Universe
Earth Life Emerge
Genesis Future
Glossary
Recent Additions

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Twintelligent Gaiable Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Hassabis, Demis, et al. Neuroscience-Inspired Artificial Intelligence. Neuron. 95/2, 2017. We note this entry because the lead author is the 2010 founder of DeepMind, a premier AI enterprise based in London, which was purchased in 2014 by Google for over $500 million. It is a broad survey of the past, present, and future of this brain-based endeavor guided by advances in how cerebral network dynamics are composed, think, and actively learn.

The successful transfer of insights gained from neuroscience to the development of AI algorithms is critically dependent on the interaction between researchers working in both these fields, with insights often developing through a continual handing back and forth of ideas between fields. In the future, we hope that greater collaboration between researchers in neuroscience and AI, and the identification of a common language between the two fields, will permit a virtuous circle whereby research is accelerated through shared theoretical insights and common empirical advances. We believe that the quest to develop AI will ultimately also lead to a better understanding of our own minds and thought processes. (Conclusion, 255)

Hayakawa, Takashi and Toshio Aoyagi. Learning in Neural Networks Based on a Generalized Fluctuation Theorem. arXiv:1504.03132. Reported more in Universality Affirmations. As the Abstract details, Kyoto University researchers join systems physics and neuroscience to reveal universally recurring phenomena under a rubric of fluctuation theorems.

Higgins, Irina, et al. Symmetry-Based Representations for Artificial and Biological General Intelligence. arXiv:2203.09250. DeepMind, London researchers Irina Higgins, Sebastien Racaniere and Danilo Rezende scope out ways that an intersection of computational frontiers with neuroscience studies can benefit each field going forward. Once again, an Earthificial realm becomes more brain-like as human beings first write the programs so that the algorithms can process their appointed tasks (if all goes to plan) and come up with vital contributions on their own.

Biological intelligence is remarkable in its ability to produce complex behaviour in diverse situations. An ability to learn sensory representations is a vital need, yet there is little agreement as to what a good representation should look like. In this review we argue that symmetry transformations are a main principle. The idea that these transformations affect some aspects of a system but not others has become central in modern physics. Recently, symmetries have gained prominence in machine learning (ML) by way of more data-efficient and generic algorithms that mimic complex behaviors. Taken together, these symmetrical effects suggest a natural framework that determines the structure of the universe and consequently shapes both biological and artificial intelligences. (Abstract excerpt)
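The symmetry principle in the excerpt can be made concrete with a small sketch (our own illustration, not code from the paper): a circular 1-D convolution, the building block behind convolutional networks, is equivariant to cyclic shifts, so shifting the input simply shifts the output by the same amount.

```python
import numpy as np

def circular_conv(x, w):
    """Circular 1-D convolution: out[i] = sum_k w[k] * x[(i + k) % n]."""
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

def shift(x, s):
    """Cyclic shift by s positions."""
    return np.roll(x, s)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([0.5, 0.3, 0.2])

# Equivariance: convolving a shifted input equals shifting the convolved output.
lhs = circular_conv(shift(x, 2), w)
rhs = shift(circular_conv(x, w), 2)
```

The symmetry (here, the cyclic shift group) constrains the representation for free: any network built from such layers inherits the same behavior without having to learn it from data.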

Johnson, Samuel, et al. Imagining and Building Wise Machines: The Centrality of AI Metacognition. arXiv:2411.02478. Eleven senior computer scientists at the University of Waterloo, University of Montreal, Stanford University, Allen Institute for Artificial Intelligence, Santa Fe Institute, MPI Human Development and MPI Intelligent Systems, including Yoshua Bengio, Nick Chater and Melanie Mitchell, join a current project to get ahead of and rein in this worldwide computational transition. As foundation and large language models, along with agentic behaviors, become understood and availed, it is vital to have a lead segment of informed human management through appropriate prompts, select data resources, proper algorithms and so on. See, for example, Role of the human-in-the-loop in emerging self-driving laboratories for heterogeneous catalysis by Christoph Scheurer and Karsten Reuter in Nature Catalysis (January 2025). As we work through this critical phase, a beneficial balance of people in ethical charge, along with allowing agents to run pattern-finding programs, could be a resolution.

While advances in artificial intelligence (AI) have produced systems capable of sophisticated performance on cognitive tasks, AI systems still struggle in critical ways: unpredictable and novel environments (robustness), explaining their reasoning (explainability), communication and commitment (cooperation), and harmful risks (safety). We argue that these issues stem from one basic lapse: AI systems lack wisdom. Drawing from the philosophical literature, we define wisdom as the ability to navigate ambiguous, novel, chaotic problems through metacognitive strategies. Prioritizing metacognition in AI research will lead to systems that act not only intelligently but also wisely in complex, real-world situations. (Excerpts)

MPI Intelligent Systems: Our goal is to understand the principles of Perception, Action and Learning in systems that interact with complex environments. The Institute studies these aspects in biological, computational, hybrid, and material systems from nano to macro scales. The Physics for Inference and Optimization Group focuses on relations between microscopic and macroscopic complex interactive networks by way of algorithms based on statistical physics.

Kelleher, John. Deep Learning. Cambridge: MIT Press, 2019. John D. Kelleher, Academic Leader of the Information, Communication, and Entertainment Research Institute at the Technological University Dublin, provides another up-to-date survey that is wide-ranging in scope along with in-depth examples.

In the MIT Press Essential Knowledge series, computer scientist John Kelleher offers an accessible, concise and comprehensive introduction to the artificial intelligence revolution and its techniques. He explains, for example, how deep learning enables data-driven decisions by identifying and extracting patterns from large datasets, how it learns from large, complex data sets and much more. He describes important deep learning architectures such as autoencoders, recurrent neural networks, as well as such recent developments as Generative Adversarial Networks.

Kitano, Hiroaki. Nobel Turing Challenge: Creating the Engine for Scientific Discovery. NPJ Systems Biology. 7/29, 2021. A leading Japanese executive scientist who directs the Systems Biology Institute outlines a comprehensive, insightful project as it becomes more evident that AI computational algorithmic capacities, if properly informed and trained, can proceed to run programs, process data, iterate, and optimize research studies on their own. Since this frontier now involves many worldwise collaborations, as the spiral turns maybe a new collective group “Global Prize” in recognition would be appropriate. And this time it should include the missing life and mind sciences. See also herein entries by Charlie Wood as this Earthuman acumen gains momentum.

Scientific discovery has long been one of the central driving forces in our civilization. It uncovered the principles of the world we live in, and enabled us to invent new technologies reshaping our society, cure diseases, explore unknown new frontiers, and hopefully lead us to build a sustainable society. In this regard, we propose an overall “science of science” to guide and boost going forward. A prime facility into these 2020s thus needs to be a viable integration of artificial intelligence (AI) systems. We are aware that the contributions of “AI Scientists” may not resemble human science, but deep hybrid-AI methods could take us beyond our cognitive limitations and sociological constraints. (Excerpt)

Knight, Will. The Dark Secret at the Heart of AI. MIT Technology Review. 120/3, 2017. A senior editor worries that no one knows how the most advanced algorithms do what they do. In this section, we also want to record such reality checks so that as this computational prowess bursts upon us, it remains within human control and service, rather than taking over. See also herein Women vs. the Machine by Erika Hayasaki for another dilemma.

Kozma, Robert, et al. Artificial Intelligence in the Age of Neural Networks and Brain Computing. Cambridge, MA: Academic Press, 2018. A large international edited volume with 15 authoritative chapters, from The New AI: Basic Concepts and Urgent Risks to Evolving Deep Neural Networks. See especially A Half Century of Progress toward a Unified Neural Theory of Mind and Brain by Stephen Grossberg (search).

Krenn, Mario, et al. On Scientific Understanding with Artificial Intelligence. arXiv:2204.01467. Twelve scholars in Germany, Canada, the USA, and China, including Alán Aspuru-Guzik, post a wide-ranging survey as an early effort to understand, orient, enhance and benefit from this imminent worldwise computational transition. But the historic occasion of a cerebral, machine neural deep learning cognizance going on by itself is a revolutionary presence with many issues and quandaries; the second quote may give some idea. Thus we repurpose, expand and rename an Earthificial Intelligence: Deep Neural Network Computation Planetary Science section. See also Powerful “Machine Scientists” Distill the Laws of Physics from Raw Data by Charlie Wood in Quanta (May 10, 2022) for another array of novel paths.

Imagine an “oracle” that predicts the outcome of a particle physics experiment, the products of a chemical reaction, or the function of every protein. As scientists, we would not be satisfied, for we need to comprehend how these predictions were conceived. This feat of scientific understanding has long been the essential aim of science. Now the ever-growing power of computers and AI poses the question: how can advanced computer systems contribute to learning and discovery? At this early phase we seek advice from the philosophy of science, review the state of the art, and ask current researchers how they acquired novel findings this way. We hope our perspective inspires and focuses research towards devices and methods that foster and empower this worldwide facility. (Abstract excerpt, edit)

Three Dimensions of Computer-Assisted Understanding: We use scientific literature, personal anecdotes of many active users, and the philosophy of science to introduce a new classification of android contributions to scientific understanding. Such entities can act I) as a computational microscope, providing information not (yet) attainable by experiment, or II) as a resource of inspiration or artificial muse. In those two classes, the human investigator is essential to develop new insights to their full potential. Finally, an android can be III) an agent of understanding by generalizing observations and finding novel scientific concepts. (4)

Kriegeskorte, Nikolaus and Tai Golan. Neural Network Models and Deep Learning: A Primer for Biologists. arXiv:1902.04704. Columbia University neuroscientists provide a 14-page primer which would be a good entry for any field. Some sections are Neural nets are universal approximators, Deep networks can capture complex functions, and Deep learning by backpropagation.

Originally inspired by neurobiology, deep neural network models have become a powerful tool of machine learning and artificial intelligence, where they are used to approximate functions and dynamics by learning from examples. Here we give a brief introduction to neural network models and deep learning for biologists. We introduce feedforward and recurrent networks and explain the expressive power of this modeling framework and the backpropagation algorithm for setting the parameters. Finally, we consider how deep neural networks might help us understand the brain's computations. (Abstract)
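The backpropagation algorithm the primer introduces can be sketched in a few lines (our own toy example, not code from the paper): a two-layer network learns the XOR function by repeatedly propagating the output error backward through the chain rule and descending the gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a function no single-layer network can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two-layer network: 2 inputs -> 8 tanh hidden units -> 1 sigmoid output.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse_loss():
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    return float(((out - y) ** 2).mean())

loss_before = mse_loss()
lr = 0.5
for _ in range(20000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule pushes error derivatives layer by layer
    d_out = (out - y) * out * (1 - out)    # gradient at the pre-sigmoid stage
    d_h = (d_out @ W2.T) * (1 - h ** 2)    # gradient at the pre-tanh stage
    # Gradient-descent parameter updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

loss_after = mse_loss()
```

This is the same learning-from-examples loop the abstract describes, shrunk to four data points; deep learning stacks many such layers and automates the derivative bookkeeping.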

Kumar, Akarsh, et al. Automating the Search for Artificial Life with Foundation Models. arXiv:2412.17799. MIT, Sakana AI, OpenAI, and Swiss AI Lab IDSIA computational imagineers describe novel approaches to juice the ALife endeavor and see how it can respectfully and beneficially open frontier pathways. See also Automating the Search for Artificial Life with Foundation Models at pub.sakana.ai/asal for a companion paper.

With the recent Nobel Prize awarded for radical advances in protein discovery, foundation models (FMs) for exploring large combinatorial spaces promise to revolutionize many scientific fields. This paper presents a successful realization using vision-language FMs, called Automated Search for Artificial Life (ASAL), which finds generalities across a diverse range of ALife substrates including Boids, Particle Life, Game of Life, Lenia, and Neural Cellular Automata. This new paradigm promises to accelerate ALife research beyond what is possible through human ingenuity alone. (Excerpt)

A foundation model is a deep machine learning method trained on vast datasets so it can be applied across a wide range of use cases. Early examples are language models (LMs) like OpenAI's GPT. Foundation models are also being developed for fields like astronomy, radiology, genomics, mathematics, and chemistry.
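ASAL's supervised-target search can be caricatured in a few lines (a stand-in sketch under stated assumptions: the `embed` projection below is a hypothetical placeholder for a real vision-language foundation model, and `simulate` is a toy substrate, not any of those named above): sample simulation parameters and keep the run whose final frame embeds closest to a target prompt embedding.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "shared embedding": a fixed random projection. A real ASAL run
# would use a vision-language foundation model here instead.
P = rng.normal(size=(16, 64))

def embed(v):
    e = P @ v
    return e / np.linalg.norm(e)

def simulate(params, steps=10):
    """Toy substrate: iterate a simple map, return the final 'frame' vector."""
    state = np.zeros(64)
    state[:8] = params
    for _ in range(steps):
        state = np.tanh(np.roll(state, 1) + 0.1 * state * params.sum())
    return state

# Stands in for the embedding of a text prompt describing a desired phenomenon.
target = embed(rng.normal(size=64))

# Supervised-target search: keep the simulation whose final frame
# embeds closest (cosine similarity) to the target.
best_score, best_params = -np.inf, None
for _ in range(200):
    params = rng.normal(size=8)
    score = float(embed(simulate(params)) @ target)
    if score > best_score:
        best_score, best_params = score, params
```

The design point is that the foundation model supplies a human-aligned notion of "interesting" or "matching the prompt", so the search over substrate parameters no longer needs a hand-crafted fitness function.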

Levi, DeHaan. LLM as Child Analogy. levidehaan.com. This is a posting by a veteran cyber designer whose group projects can be found on the above site. A longer title is Theorem: Evolutionary Pathway of LLMs under the "LLM as Child Analogy", which may be reached by Google keywords. We cite some excerpts which are similar to Marina Pantcheva's views herein.

LLMs: A machine learning model designed to process and generate human-like text based on statistical patterns in data. Real-time Adaptability: The ability to modify its behavior based on new information. Memory Retrieval: A system to store, recall, and utilize past interactions. Decision-making Algorithm: A set of rules to make choices. Creative Reasoning: The capability to generate original content but not confined to it.

Children possess real-time adaptability, have a memory retrieval system, develop decision-making abilities, and hold the capability for creative reasoning. If LLMs are to evolve along the lines of children, then the first logical step would be to implement real-time learning algorithms, moving from static to dynamic models. For LLMs to be more analogous to children, they would need the ability to generate new, original content, potentially through some form of creative reasoning. Achieving real-time adaptability would make LLMs dynamic learners, thereby aligning with human children.
