Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Twintelligent Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Bundy, Alan, et al. Introduction to Cognitive Artificial Intelligence. Philosophical Transactions A. June, 2023. Alan Bundy (University of Edinburgh), Nick Chater (University of Warwick) and Stephen Muggleton (Imperial College London), for whom a generic attribution seems to be computational bioinformatics, introduce this special issue from a June 2022 Royal Society Hooke Meeting about ways to integrate these closely aligned fields for social benefit, lest AI take off wildly on its own. But note that the meeting occurred at a still pre-chatbot stage. Typical authoritative entries among many are Representational Change is Integral to Reasoning, Socially Intelligent Machines that Learn from Humans and Help Humans Learn, and Emotion Prediction as Computation over a Generative Theory of Mind.

This theme issue discusses current progress in making artificial intelligence systems think like humans. Many papers argue that despite the amazing results achieved by recent machine learning systems such as Chat GPT and Dall-E, enhancing them with human-like aware acumen will require major developments. Our interest is abilities to reason more like people, and support genuinely social interaction for machines to work alongside humans. The papers in this theme issue bring together the latest AI advances and related research from the cognitive sciences. These issues are crucial, at a time when AI is having an impact in many areas of society. (2023 Overview)

Carleo, Giuseppe and Matthias Troyer. Solving the Quantum Many-Body Problem with Artificial Neural Networks. Science. 355/602, 2017. As the Abstract notes, ETH Zurich physicists find this generic iterative approach, as it gains utility in many areas, to be an apt method for solving such seemingly intractable phenomena. The work merited a report in the same issue, Machine Learning for Quantum Physics (355/580).

The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. A reinforcement-learning scheme we demonstrate is capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. (Abstract)
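
As a concrete gloss, here is a minimal sketch of the neural-network wave function idea, our own toy illustration with invented sizes rather than the authors' code; the actual method uses complex-valued parameters tuned by variational Monte Carlo.

```python
# Minimal sketch (not the authors' code): a restricted-Boltzmann-machine
# ansatz for a quantum spin wave function,
#   psi(s) = exp(sum_i a_i s_i) * prod_j 2*cosh(b_j + sum_i W_ji s_i),
# the form behind neural-network quantum states.
import numpy as np

rng = np.random.default_rng(0)
n_spins, n_hidden = 10, 20          # illustrative sizes, not from the paper

# Variational parameters: visible biases a, hidden biases b, couplings W.
a = 0.01 * rng.standard_normal(n_spins)
b = 0.01 * rng.standard_normal(n_hidden)
W = 0.01 * rng.standard_normal((n_hidden, n_spins))

def log_psi(s):
    """Log-amplitude of a spin configuration s in {-1, +1}^n."""
    theta = b + W @ s
    return a @ s + np.sum(np.log(2.0 * np.cosh(theta)))

# Example: relative amplitude of two basis states.
s1 = rng.choice([-1.0, 1.0], n_spins)
s2 = rng.choice([-1.0, 1.0], n_spins)
print("amplitude ratio:", np.exp(log_psi(s1) - log_psi(s2)))
```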

Carleo, Giuseppe, et al. Machine Learning and the Physical Sciences. arXiv:1903.10563. An eight-member international team with postings such as the Flatiron Institute Center for Computational Quantum Physics (GC), MPI Quantum Optics (Ignacio Cirac) and the University of KwaZulu-Natal (Maria Schuld) considers applications of novel deep neural net methods, broadly conceived, across statistical, particle, cosmic, many-body quantum matter, and onto chemical phases. See also NetKet: A Machine Learning Toolkit for Many-Body Quantum Systems at 1904.00031, and Neural Networks take on Open Quantum Systems in Physical Review Letters (122/25, 2019) by this extended group. As the project flourishes, by ready cross-transfers, one gets an inkling of a naturally cerebral ecosmos, just now trying to achieve via reinforcement learning its own self-description, literacy, realization, and affirmative action going forward.

Machine learning encompasses a broad range of algorithms and modeling tools used for a vast array of data processing tasks, which has entered most scientific disciplines in recent years. We review in a selective way the recent research on the interface between machine learning and physical sciences. This includes conceptual developments in machine learning (ML) motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross-fertilization between the two fields. After giving a basic notion of machine learning methods and principles, we describe examples of how statistical physics is used to understand methods in ML. We then move to describe applications of ML methods in particle physics and cosmology, quantum many body physics, quantum computing, and chemical and material physics. We also highlight research and development into novel computing architectures aimed at accelerating ML. In each of the sections we describe recent successes as well as domain-specific methodology and challenges. (Abstract)

Chantada, Augusto, et al. Cosmology-Informed Neural Networks to Solve the Background Dynamics of the Universe. arXiv:2205.02945. We cite this entry by five astro-analysts from Argentina and Harvard as an example of how 2020s AI (EI) techniques can achieve an epic advance (quantum leap) in analytic prowess as our collective Earthumanity proceeds apace with this apparent task of ecosmic self-description. See also Stellar Mass and Radius Estimation using Artificial Intelligence by Andy Moya and R. Lopez-Sastre (2203.06027), and What a Neural Network Model Learns about Cosmic Structure Formation by Drew Jamieson, et al (2206.04573) for more usages.

The field of machine learning has drawn increasing interest due to its ability to solve many different problems. In this work, we train artificial neural networks to represent differential equations that govern the background dynamics of the Universe. We chose four models to study: ΛCDM, parametric dark energy, quintessence and the Hu-Sawicki f(R) model. We performed statistical analyses to estimate each model's parameters from observational data. We found that the error of the solutions was ∼1% in the region of the parameter space. (Excerpt)
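
For interested readers, here is a small sketch of the physics-informed approach, our illustration only: a network is trained so that a trial scale factor satisfies a flat ΛCDM Friedmann equation, with all constants and sizes invented for the example.

```python
# A hedged sketch (not the authors' code) of the physics-informed neural
# network idea: train a small network so its output satisfies a background
# Friedmann equation  da/dt = a * sqrt(Om/a^3 + OL), a(0) = a0, with time
# in units of 1/H0. Constants and network sizes are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
Om, OL, a0 = 0.3, 0.7, 0.5           # flat LambdaCDM; late-time start

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def scale_factor(t):
    # Trial form a(t) = a0 + t * N(t) builds in the initial condition.
    return a0 + t * net(t)

for step in range(2000):
    t = torch.rand(128, 1, requires_grad=True)       # collocation points
    a = scale_factor(t)
    dadt = torch.autograd.grad(a, t, torch.ones_like(a), create_graph=True)[0]
    residual = dadt - a * torch.sqrt(Om / a**3 + OL)  # Friedmann residual
    loss = (residual**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final ODE residual loss:", loss.item())
```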

Chen, Boyuan, et al. Discovering State Variables Hidden in Experimental Data. arXiv:2112.10755. This entry by Columbia University computer scientists led by Hod Lipson offers a good survey of how this computational endeavor began and goes forth today. It opens by noting historic studies of physical laws and motions as a search for elusive hidden values. As of 2021, it advises that novel AI methods, as never before, can achieve analyses deep enough to discern such variables in dynamic systems such as reaction-diffusion. See also Distilling Free-Form Natural Laws from Experimental Data by Michael Schmidt and Hod Lipson in Science (324/5923, 2009, second Abstract).

All physical laws are based on relationships between state variables which give a description of the relevant system dynamics. However, the process of identifying the hidden state variables has so far resisted AI techniques. We propose a new principle to find how many state variables an observed system is likely to have, and what these variables might be. Without any prior knowledge of the underlying physics, our algorithm discovers the intrinsic dimension of the observed dynamics and identifies sets of state variables. We suggest that this approach could help catalyze the understanding, prediction and control of increasingly complex systems. (Excerpt)

For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena. Despite much computing power, the process of finding natural laws and their equations has resisted automation. We need to define an algorithm that can insightfully correlate observed data sets. Without prior knowledge about physics, kinematics, or geometry, our algorithm discovered Hamiltonians, Lagrangians, and momentum conservation laws. (2009 Abstract)
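
Here is a toy illustration of the state-variable question, with an assumed pendulum example and plain PCA standing in for the paper's autoencoder-plus-intrinsic-dimension pipeline.

```python
# A toy sketch (our assumptions, not the paper's pipeline): estimate how
# many state variables underlie high-dimensional observations of a
# pendulum. The paper trains an autoencoder and a geometric intrinsic-
# dimension estimate; plain PCA on simulated data illustrates the idea.
import numpy as np

rng = np.random.default_rng(1)

# Simulate a pendulum; the true state is (angle, angular velocity).
dt, n_steps = 0.01, 5000
theta, omega = 1.0, 0.0
states = []
for _ in range(n_steps):
    omega += -9.81 * np.sin(theta) * dt
    theta += omega * dt
    states.append([theta, omega])
states = np.array(states)

# "Observe" through a random 50-dimensional linear embedding; the paper's
# autoencoder handles nonlinear pixel observations instead.
obs = states @ rng.standard_normal((2, 50))

# Count principal components needed for 99% of the variance.
obs = obs - obs.mean(axis=0)
sing = np.linalg.svd(obs, full_matrices=False)[1]
cum = np.cumsum(sing**2) / np.sum(sing**2)
print("estimated number of state variables:", int(np.searchsorted(cum, 0.99)) + 1)
```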

Ching, Travers, et al. Opportunities and Obstacles for Deep Learning in Biology and Medicine. Journal of the Royal Society Interface. 14/141, 2018. Some 40 researchers from institutes, laboratories and hospitals in the USA, Canada and the UK well survey current applications, potentials, and problems for this cerebral-based AI revolution. Or as Siddhartha Mukherjee, MD, wrote in the New Yorker last year: “the algorithm will see you now.”

Ciliberto, Carlo, et al. Quantum Machine Learning. Proceedings of the Royal Society A. 474/0551, 2017. University College London and MPI Intelligent Systems researchers provide a state of the science and art as the AI revolution, by way of its novel biological neural net basis, becomes widely applicable. Here quantum phenomena, as they become affine with classical macro-modes, seem to bode for a cosmic connectome.
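
As a bare-bones gloss, here is our own illustrative simulation of the kind of parameterized two-qubit circuit that quantum machine learning builds upon, with nothing taken from the paper itself.

```python
# A bare-bones sketch (illustrative, not from the paper): a two-qubit
# variational circuit simulated with plain linear algebra, the sort of
# parameterized model quantum machine learning trains.
import numpy as np

def ry(theta):
    # Single-qubit rotation about the Y axis.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)
Z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2))   # observable Z on qubit 0

def expectation(params):
    state = np.zeros(4); state[0] = 1.0          # start in |00>
    state = np.kron(ry(params[0]), ry(params[1])) @ state
    state = CNOT @ state
    return state @ Z0 @ state

# Crude random parameter search minimizing <Z0>, standing in for training.
rng = np.random.default_rng(6)
best_val, best_p = np.inf, None
for _ in range(200):
    p = rng.uniform(0, 2 * np.pi, 2)
    v = expectation(p)
    if v < best_val:
        best_val, best_p = v, p
print("min <Z0> found:", round(best_val, 3), "at params", np.round(best_p, 2))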

Collins, Katherine, et al. Building Machines that Learn and Think with People. arXiv:2408.03943. Thirteen concerned scholars at the University of Cambridge, Princeton, NYU, the Alan Turing Institute, MIT and Microsoft Research, including Umang Bhatt, Mina Lee and Thomas Griffiths, enter a latest proposal and plan toward a considerate, reciprocal assimilation of personal discourse with more amenable computational resources.

What do we want from machine intelligence? We envision machines that are not just tools for thought, but partners in thought: reasonable, insightful, knowledgeable, reliable, and trustworthy systems that think with us. In this Perspective, we show how the science of collaborative cognition can be put to work to engineer systems that really can be called “thought partners.” Drawing on motifs from computational cognitive science, we motivate an alternative scaling path through a Bayesian lens, whereby the partners we build actively reason over models of the human and the world. (Excerpt)
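
A minimal sketch of the “reason over models of the human” motif, our invented example of Bayes' rule applied to an ambiguous utterance:

```python
# A minimal sketch (our illustration, not the paper's system): infer which
# of three items a person means from an ambiguous word, by Bayes' rule
# over a simple speaker model. Priors and likelihoods are invented.
import numpy as np

items = ["red mug", "red book", "blue mug"]
prior = np.array([1/3, 1/3, 1/3])

# Likelihood of the person saying "red" if they intend each item.
p_word_given_item = np.array([0.5, 0.5, 0.0])

posterior = prior * p_word_given_item
posterior /= posterior.sum()
for item, p in zip(items, posterior):
    print(f"P({item} | 'red') = {p:.2f}")
```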

Cranmer, Miles, et al. Discovering Symbolic Models from Deep Learning with Inductive Biases. arXiv:2006.11287. Seven Princeton University, DeepMind London, NYU, and Flatiron Institute computer specialists articulate yet another effective machine procedure as our learning (and hopefully thinking) planet begins to spiral up to a prodigious Earthropic sapiens phase.

We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs) that encourage sparse latent representations, and apply symbolic regression to learned model components to extract physical relations. We go on to study a detailed dark matter cosmology sample and discover an analytic formula that can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. Our approach offers new ways to interpret neural networks and reveal physical principles from their representations. (Abstract)
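
To illustrate just the distillation step, here is a small sketch of our own in which a tiny library of candidate terms is scored against noisy pairwise data to recover a hidden power law; the paper fits such formulas to a trained graph network's edge messages.

```python
# A small sketch (our illustration, not the authors' pipeline): the paper
# trains a graph network, then fits symbolic formulas to its internal edge
# messages. Here we mimic only that last step: recover a power-law force
# from noisy pairwise data by scoring a tiny library of candidate terms.
import numpy as np

rng = np.random.default_rng(2)
r = rng.uniform(0.5, 5.0, 500)                        # pairwise distances
force = 1.0 / r**2 + 0.01 * rng.standard_normal(500)  # hidden law + noise

# Candidate basis functions a symbolic regressor might enumerate.
library = {"r": r, "1/r": 1 / r, "1/r^2": 1 / r**2, "log r": np.log(r)}

scores = {}
for name, feat in library.items():
    X = np.column_stack([feat, np.ones_like(feat)])   # term plus offset
    coef = np.linalg.lstsq(X, force, rcond=None)[0]
    scores[name] = np.mean((force - X @ coef)**2)

best = min(scores, key=scores.get)
print("best single-term fit:", best, "mse:", scores[best])
```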

Cusack, Rhodri, et al. Helpless Infants are Learning a Foundation Model. Trends in Cognitive Sciences. 28/8, 2024. We refer to this contribution by Trinity College Dublin, Google DeepMind, London, and Auburn University neuropsychologists including Christine Charvet for latest views of the first months of the neonatal-to-infant phase, but also for its notice of a comparative affinity with how artificial intelligence language models are pretrained and learn. This section now contains several similar views which together provide an empirical basis for an actual pediakind sapience.

Humans have a protracted postnatal period, attributed to human-specific maternal constraints which cause an early birth when the brain is highly immature. By aligning neurodevelopmental events across species, however, it has been found that humans are not born with underdeveloped brains compared with animal species with a shorter helpless period. Consistent with this, the advancing field of infant neuroimaging has found that brain connectivity and functional activation at birth share many similarities with the mature brain. As a parallel approach, we consider deep neural network machine learning which also benefits from a ‘helpless period’ of pre-training. As a result, we propose that human infants are forming a foundational set of vital representations in preparation for later cognitive abilities with high performance and rapid generalisation. (Abstract)
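
Here is a schematic sketch of the pretraining analogy, with invented data and PCA standing in for self-supervised learning: a “helpless period” of unlabeled experience lets a later task be learned from very few labels.

```python
# A schematic sketch (illustrative assumptions, not the paper's model):
# unsupervised feature learning (PCA here) during a "helpless period"
# lets a downstream task be learned from only a handful of labels.
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((2, 50))           # world -> observation map

# Helpless period: plenty of unlabeled experience, no task labels.
latents = rng.standard_normal((1000, 2))
X = latents @ M
Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)[2]
encode = lambda x: x @ Vt[:2].T            # learned 2-d representation

# Later task: classify from only 20 labeled examples.
lat20 = rng.standard_normal((20, 2))
X20, y20 = lat20 @ M, np.sign(lat20[:, 0])
w = np.linalg.lstsq(encode(X20), y20, rcond=None)[0]

# Rapid generalisation: evaluate on fresh data.
lat_test = rng.standard_normal((500, 2))
acc = np.mean(np.sign(encode(lat_test @ M) @ w) == np.sign(lat_test[:, 0]))
print("few-shot accuracy with pretrained features:", acc)
```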

Cutts, Elise. The Strange Physics That Gave Birth to AI. Quanta. April 30, 2025. A science writer tells the story of how John Hopfield, a 2024 Nobel laureate for his 1980s neural network discoveries, came to this realization by way of condensed-matter spin glass and Ising models. Co-recipient Geoffrey Hinton later joined the pursuit. A 2016 paper, Dense Associative Memory for Pattern Recognition, by Dmitry Krotov and John Hopfield (arXiv:1606.01164) is cited as an interim phase. See also The Computer Scientist Who Builds Big Pictures from Small Detail by John Pavlus in Quanta (October 7, 2024) for similar studies.
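
For a hands-on flavor, here is a compact sketch of the classic Hopfield associative memory, our illustration with invented sizes: Hebbian weights store binary patterns, and asynchronous updates recall one from a corrupted cue.

```python
# A compact sketch (illustrative sizes) of the classic Hopfield
# associative memory behind the 2024 Nobel work: Hebbian weights store
# binary patterns; asynchronous updates recall one from a noisy cue.
import numpy as np

rng = np.random.default_rng(3)
n, n_patterns = 100, 5
patterns = rng.choice([-1, 1], (n_patterns, n))

# Hebbian storage: W = (1/n) * sum_p x_p x_p^T, with zero diagonal.
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0)

# Corrupt one stored pattern by flipping 15% of its bits.
state = patterns[0].copy()
flip = rng.choice(n, size=15, replace=False)
state[flip] *= -1

# Asynchronous updates descend the network's energy to a fixed point.
for _ in range(10):
    for i in rng.permutation(n):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered pattern 0:", np.array_equal(state, patterns[0]))
```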

Other entries in this special Quanta Science, Promise and Peril in the Age of AI collection are What the Most Essential Terms in AI Really Mean by John Pavlus, Where Do Scientists Think This Is All Going? by Michael Mayer, Will AI Ever Understand Language Like Humans? with Janna Levin and Steven Strogatz and What Happens When AI Starts to Ask the Questions? by Gregory Barber.

It started as a fantasy, then a promise — inspired by biology and animated by the ideas of physicists — and grew to become a powerful research tool. Now artificial intelligence has evolved into something else: a junior colleague, a partner in creativity, an impressive if unreliable wish-granting genie. It has changed everything, from how we relate to data and truth, to how researchers devise experiments and mathematicians think about proofs. In this special series, we explore how AI is changing what it means to do science and math, and what it means to be a scientist. (Quanta intro.)

Czaplicka, Agnieszka, et al. Mutual Benefits of Social Learning and Algorithmic Mediation for Cumulative Culture. arXiv:2410.00780. MPI Human Development and University of Pennsylvania computer scientists post an initial consideration of how AI machine learning, in algorithmic code form, can mediate and facilitate the social collectivity that so distinguishes our Earthumanity.

The evolutionary success of humans is attributed to complex cultural artefacts that enable us to cope with environmental challenges. The evolution of complex culture is usually modeled as a collective process in which individuals invent new artefacts (innovation) and copy from others (social learning). In our present digital age, however, intelligent algorithms often mediate information between humans. Building on cultural evolution models, we investigate the effects of network-based social learning and algorithmic mediation on cultural accumulation, and find that it tends to be optimal when social learning and algorithmic mediation are combined. (Excerpt)
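
A toy rendition of the theme, our assumptions rather than the paper's model: agents innovate and copy, and an algorithmic mediator that recommends the best performer is compared against copying random peers.

```python
# A toy sketch (our assumptions, not the paper's model): agents improve a
# cultural trait by random innovation and by copying; an "algorithmic
# mediator" recommends the current best performer to copy from. Compare
# mean trait levels with and without mediation.
import numpy as np

rng = np.random.default_rng(4)

def run(mediated, n_agents=50, steps=200, p_innovate=0.1):
    traits = np.zeros(n_agents)
    for _ in range(steps):
        for i in range(n_agents):
            if rng.random() < p_innovate:
                traits[i] += rng.exponential(0.1)     # small invention
            else:
                # Copy target: algorithmic pick of best, or a random peer.
                j = int(np.argmax(traits)) if mediated else rng.integers(n_agents)
                traits[i] = max(traits[i], traits[j])  # adopt if better
    return traits.mean()

print("random peers :", round(run(False), 3))
print("mediated pick:", round(run(True), 3))
```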
