Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Chen, Boyuan, et al. Discovering State Variables Hidden In Experimental Data. arXiv:2112.10755. This entry by Columbia University computer scientists led by Hod Lipson offers a good survey of how this computational endeavor began and goes forth today. It opens by noting historic studies of physical laws and motions as a search for elusive values. Writing in 2021, the authors advise that novel AI methods, as never before, can achieve analyses deep enough to discern such hidden variables in dynamic systems such as reaction-diffusion. See also Distilling Free-Form Natural Laws from Experimental Data by Michael Schmidt and Hod Lipson in Science (324/5923, 2009, second Abstract).

All physical laws are based on relationships between state variables, which give a description of the relevant system dynamics. However, the process of identifying the hidden state variables has so far resisted AI techniques. We propose a new principle to find how many state variables an observed system is likely to have, and what these variables might be. Without any prior knowledge of the underlying physics, our algorithm discovers the intrinsic dimension of the observed dynamics and identifies sets of state variables. We suggest that this approach could help catalyze the understanding, prediction and control of increasingly complex systems. (Excerpt)
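The paper's central step, inferring how many state variables an observed system actually has, can be illustrated with a toy linear version. This is our own sketch, not the authors' algorithm: six observed channels are secretly driven by two hidden variables, and counting the significant eigenvalues of the observation covariance recovers the intrinsic dimension of 2.

```python
import random

# Toy sketch (not Chen et al.'s method): estimate the number of hidden
# state variables by counting significant covariance eigenvalues.
random.seed(0)

def observation():
    h1, h2 = random.gauss(0, 1), random.gauss(0, 1)
    # linear mixing of two hidden state variables into 6 channels
    return [i * h1 + (-1) ** i * h2 for i in range(6)]

data = [observation() for _ in range(200)]
d, n = 6, len(data)
mean = [sum(row[j] for row in data) / n for j in range(d)]
cen = [[row[j] - mean[j] for j in range(d)] for row in data]
cov = [[sum(r[i] * r[j] for r in cen) / n for j in range(d)] for i in range(d)]

def top_eigenvalues(M, k, iters=300):
    # power iteration with deflation -- good enough for a demo
    A = [row[:] for row in M]
    out = []
    for _ in range(k):
        v = [1.0] * d
        for _ in range(iters):
            w = [sum(A[i][j] * v[j] for j in range(d)) for i in range(d)]
            norm = sum(x * x for x in w) ** 0.5 or 1.0
            v = [x / norm for x in w]
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(d)) for i in range(d))
        out.append(lam)
        for i in range(d):          # deflate: A -= lam * v v^T
            for j in range(d):
                A[i][j] -= lam * v[i] * v[j]
    return out

eigs = top_eigenvalues(cov, 4)
intrinsic_dim = sum(1 for e in eigs if e > 1e-3 * eigs[0])
print(intrinsic_dim)   # → 2
```

Real observations are nonlinear (video frames of a swinging pendulum, say), which is why the authors turn to deep networks rather than this linear spectral count.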

For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena. Despite much computing power, the process of finding natural laws and their equations has resisted automation. Here we define an algorithm which can insightfully correlate observed data sets. Without prior knowledge about physics, kinematics, or geometry, our algorithm discovered Hamiltonians, Lagrangians, and momentum conservation. (2009 Abstract)
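The core idea behind the Schmidt and Lipson work, searching a space of candidate expressions and keeping the one that balances fit against complexity (parsimony), can be sketched in a few lines. The candidate grammar and data below are illustrative, not the paper's genetic-programming system.

```python
# Hedged sketch of parsimony-weighted symbolic regression: generate data
# from a hidden law y = x^2 / 2, then select among candidate forms by
# mean-squared error plus a small complexity penalty.
data_x = [0.1 * i for i in range(1, 40)]
data_y = [0.5 * x ** 2 for x in data_x]   # "observed" law

candidates = [
    ("c*x",   1, lambda x: x),
    ("c*x^2", 2, lambda x: x ** 2),
    ("c*x^3", 3, lambda x: x ** 3),
]

def fit(basis):
    # closed-form least squares for the single coefficient c
    num = sum(basis(x) * y for x, y in zip(data_x, data_y))
    den = sum(basis(x) ** 2 for x in data_x)
    c = num / den
    mse = sum((c * basis(x) - y) ** 2
              for x, y in zip(data_x, data_y)) / len(data_x)
    return c, mse

best = min(candidates, key=lambda t: fit(t[2])[1] + 0.01 * t[1])
c, _ = fit(best[2])
print(best[0], round(c, 3))   # → c*x^2 0.5
```

The complexity penalty is what keeps the search from always preferring the most elaborate expression that happens to fit.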

Ching, Travers, et al. Opportunities and Obstacles for Deep Learning in Biology and Medicine. Journal of the Royal Society Interface. 14/141, 2018. Some 40 researchers from institutes, laboratories and hospitals in the USA, Canada and the UK well survey current applications, potentials, and problems for this cerebral-based AI revolution. Or as Siddhartha Mukherjee, MD wrote in the New Yorker last year: “the algorithm will see you now.”

Ciliberto, Carlo, et al. Quantum Machine Learning. Proceedings of the Royal Society A. 474/0551, 2017. University College London and MPI Intelligent Systems researchers provide a state of the science and art survey as the AI revolution, by way of its novel biological neural-net basis, becomes widely applicable. Here quantum phenomena, as they become affine with classical macro-modes, seem to bode for a cosmic connectome.

Collins, Katherine, et al. Building Machines that Learn and Think with People. arXiv:2408.03943. Thirteen concerned scholars at Princeton, NYU, the Alan Turing Institute, MIT and Microsoft Research, including Umang Bhatt, Mina Lee and Thomas Griffiths, enter a latest proposal and plan toward a considerate, reciprocal assimilation of personal discourse with more amenable computational resources.

What do we want from machine intelligence? We envision machines that are not just tools for thought, but partners in thought: reasonable, insightful, knowledgeable, reliable, and trustworthy systems that think with us. In this Perspective, we show how the science of collaborative cognition can be put to work to engineer systems that really can be called “thought partners.” Drawing on motifs from computational cognitive science, we motivate an alternative scaling path through a Bayesian lens, whereby the thought partners we build actively reason over models of the human and the world. (Excerpt)

Cranmer, Miles, et al. Discovering Symbolic Models from Deep Learning with Inductive Biases. arXiv:2006.11287. Seven Princeton University, DeepMind London, NYU, and Flatiron Institute, NYC computer specialists articulate yet another effective machine procedure as our learning (and hopefully thinking) planet begins to spiral up to a prodigious Earthropic sapiens phase.

We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs) that encourage sparse latent representations, and apply symbolic regression to learned model components to extract physical relations. We go on to study a detailed cosmology sample of dark matter and discover an analytic formula that can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. Our approach offers new ways to interpret neural networks and reveal physical principles from their representations. (Abstract)
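One concrete instance of distilling an analytic formula from a learned component: if a network's input-output behavior follows a power law y = A·m^k, linear regression in log-log space recovers both constants. This sketch is our illustration of the distillation step, with stand-in data rather than the paper's dark-matter sample.

```python
import math

# Recover a power law y = A * m^k from sampled input/output pairs of a
# (stand-in) learned component, via least squares on the logarithms.
ms = [0.5 * i for i in range(1, 30)]
ys = [2.0 * m ** 1.5 for m in ms]    # pretend network outputs

lx = [math.log(m) for m in ms]
ly = [math.log(y) for y in ys]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
# slope of the log-log fit is the exponent k; intercept gives log(A)
k = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
     / sum((a - mx) ** 2 for a in lx))
A = math.exp(my - k * mx)
print(round(k, 3), round(A, 3))   # → 1.5 2.0
```

In the paper the candidate forms are searched symbolically rather than assumed, but the payoff is the same: a compact, inspectable equation in place of opaque weights.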

Cusack, Rhodri, et al. Helpless infants are learning a foundation model. Trends in Cognitive Sciences. 28/8, 2024. We refer to this contribution by Trinity College Dublin, Google DeepMind, London, and Auburn University neuropsychologists including Christine Charvet for its latest views of the neonatal-to-infant phase over the first three months, and also for its notice of a comparative affinity with how artificial intelligence language models seem to process and learn. This section now contains several similar views, which then provide an empirical basis for an actual pediakind sapience.

Humans have a protracted postnatal period, attributed to human-specific maternal constraints which cause an early birth when the brain is highly immature. By aligning neurodevelopmental events across species, however, it has been found that humans are not born with underdeveloped brains compared with animal species with a shorter helpless period. Consistent with this, the advancing field of infant neuroimaging has found that brain connectivity and functional activation at birth share many similarities with the mature brain. As a parallel approach, we consider deep neural network machine learning which also benefits from a ‘helpless period’ of pre-training. As a result, we propose that human infants are forming a foundational set of vital representations in preparation for later cognitive abilities with high performance and rapid generalisation. (Abstract)

Czaplicka, Agnieszka, et al. Mutual benefits of social learning and algorithmic mediation for cumulative culture. arXiv:2410.00780. MPI Human Development and University of Pennsylvania computer scientists post an initial consideration of how AI machine learning codes in algorithmic equation form can facilitate the social collectivity that so distinguishes our Earthumanity.

The evolutionary success of humans is attributed to complex cultural artefacts that enable us to cope with environmental challenges. The evolution of complex culture is usually modeled as a collective process in which individuals invent new artefacts (innovation) and copy from others (social learning). However, in our present digital age, intelligent algorithms are often mediating information between humans. Building on cultural evolution models, we investigate the effects of network-based social learning and algorithmic mediation on cultural accumulation, and find that accumulation tends to be optimal when social learning and algorithmic mediation are combined. (Excerpt)

Das Sarma, Sankar, et al. Machine Learning Meets Quantum Physics. Physics Today. March, 2019. In a “most read” journal paper, University of Maryland and Tsinghua University, Beijing computational theorists show how these disparate fields actually have common qualities, which can serve to inform, meld and advance each endeavor. The graphic article goes on to compare affine neural network and quantum states so as to join “classical and quantum” phases, especially for communicative and computational purposes.

Dawid, Anna, et al. Modern applications of machine learning in quantum sciences. arXiv:2204.04198. As this planetary spiral ascent and method proceeds, some thirty contributors across Europe and the USA put together a 288 page, 730 reference volume which covers this entire frontier field to date. In regard, the book is an awesome testimony to our Earthuman abilities to delve deep and scan afar so an ecosmic genesis uniVerse can be able to describe and learn all about itself.

In this book, we provide a comprehensive introduction to the most recent advances in the application of machine learning methods in quantum sciences. We cover the use of deep learning and kernel methods in supervised, unsupervised, and reinforcement learning algorithms for phase classification, representation of many-body quantum states, quantum feedback control, and quantum circuits optimization. Moreover, we introduce and discuss more specialized topics such as differentiable programming, generative models, statistical approach to machine learning, and quantum machine learning. (Abstract)

In the last decade, Machine Learning has been intensively studied and has revolutionized many topics, including computer vision and natural language processing. The new toolbox and set of ideas coming from this field have also found successful applications in the sciences. In particular, ML and DL have been used to tackle problems in the physical and chemical sciences, both in the classical and quantum regimes from particle physics, fluid dynamics, cosmology, many-body quantum systems to quantum computing and quantum information theory [697]. (234)

Hi! I’m a research fellow at the Center of Computational Quantum Physics of the Flatiron Institute in New York, happily playing with interpretable machine learning for science. I defended my joint Ph.D. degree in physics and photonics in September 2022 at the University of Warsaw and ICFO – The Institute of Photonic Sciences, Spain. Before that, I did my MSc in quantum chemistry and BSc in biotechnology at the University of Warsaw. (Anna Dawid)

De Marzo, Giordano, et al. Emergence of Scale-Free Networks in Social Interactions among Large Language Models. arXiv:2312.06619. Senior theorists at Centro Ricerche Enrico Fermi, Rome and Complexity Science Hub, Vienna (Luciano Pietronero, David Garcia) scope out how a working integration between these conceptual domains (ABMs and LLMs), which are currently meshing with each other, might be achieved. In some ways they have their own affinities and can result in novel features. This entry is also keyed to the special The Psychology of Collectives issue of Perspectives on Psychological Science for December 2023.

Scale-free networks are iconic examples of emergent behavior, such as online social media in which users can follow each other. By analyzing the interactions of many generative agents using GPT-3.5-turbo as a language model, we show their ability not only to mimic human linguistic behavior but also to exhibit collective societal phenomena. We show how renaming agents allows the model to generate a range of realistic scale-free networks. (Excerpts)
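The classical generative mechanism behind such scale-free "follow" networks is preferential attachment (Barabási-Albert): new members tend to follow already well-followed accounts. A sketch of that baseline mechanism, which the LLM agents' collective follow choices reproduce, is below; the parameters are illustrative.

```python
import random

# Barabasi-Albert preferential attachment: each new node follows m
# existing nodes chosen with probability proportional to their degree.
random.seed(7)

def preferential_attachment(n, m=2):
    targets = [0, 1]               # node ids repeated once per degree
    edges = [(0, 1)]
    for new in range(2, n):
        chosen = set()
        while len(chosen) < min(m, new):
            chosen.add(random.choice(targets))   # degree-biased pick
        for t in chosen:
            edges.append((new, t))
            targets += [new, t]    # both endpoints gain one degree
    return edges

edges = preferential_attachment(200)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
# heavy tail: a handful of hubs far above the mean degree (~4 here)
print(len(edges), max(degree.values()))
```

The resulting degree distribution follows a power law, so a few early nodes become hubs; the paper's point is that GPT-based agents arrive at the same statistics without the rule being hard-coded.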

Deng, Dong-Ling, et al. Quantum Entanglement in Neural Network States. Physical Review X. 7/021021, 2017. University of Maryland, and Fudan University, Shanghai, theorists identify, develop and extol the practical affinity of neural network cognitive geometries and operational workings with quantum phase phenomena. If to reflect, how might its application to this fundamental cosmic realm imply an intrinsic cerebral character and content? A step further, if global human beings can so readily plumb such depths, and intentionally apply these basic, principled methods, could a creative universe intend for their passage to our cognizance and continuance?

Machine learning, one of today’s most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. Our results uncover the unparalleled power of artificial neural networks in representing quantum many-body states regardless of how much entanglement they possess, which paves a novel way to bridge computer-science-based machine-learning techniques to outstanding quantum condensed-matter physics problems. (Abstract excerpts)
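The RBM ansatz at the heart of this work assigns each spin configuration an amplitude built from visible biases, hidden biases, and couplings, with the hidden units summed out analytically. A minimal sketch with small, illustrative parameters (our choices, not values from the paper):

```python
import math
from itertools import product

# Unnormalised RBM wavefunction for n_vis spins s_i in {-1,+1}:
#   psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ij s_i)
n_vis, n_hid = 4, 3
a = [0.1, -0.2, 0.05, 0.0]                       # visible biases
b = [0.3, -0.1, 0.2]                             # hidden biases
W = [[0.1 * (i + 1) * (j + 1) for j in range(n_hid)]
     for i in range(n_vis)]                      # couplings (toy values)

def amplitude(s):
    visible = math.exp(sum(a[i] * s[i] for i in range(n_vis)))
    hidden = 1.0
    for j in range(n_hid):
        theta = b[j] + sum(W[i][j] * s[i] for i in range(n_vis))
        hidden *= 2.0 * math.cosh(theta)         # hidden unit traced out
    return visible * hidden

# Born-rule probabilities, normalised over all 2^n configurations
Z = sum(amplitude(list(s)) ** 2 for s in product([-1, 1], repeat=n_vis))
probs = {s: amplitude(list(s)) ** 2 / Z for s in product([-1, 1], repeat=n_vis)}
print(abs(sum(probs.values()) - 1.0) < 1e-12)   # → True
```

For real many-body problems the exponential sum over configurations is replaced by Monte Carlo sampling, and the entanglement question the paper studies concerns how the number and range of the W couplings limit, or fail to limit, what states such an ansatz can represent.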

DiPaolo, Laura, et al. Active inference goes to school: the importance of active learning in the age of large language models. Philosophical Transactions of the Royal Society B. August, 2024. In an article for a Minds in movement: embodied cognition in the age of artificial intelligence issue, this entry by University of Sussex cognitive scientists including Axel Constant and Andy Clark is noted for its meld of embodied thinking with free energies, and also for a turn to educational approaches as an appropriate way to try to understand and manage these voluminous AI faculties. In specific regard, the widely used Montessori method (Maria Montessori, 1870-1952) is extensively reviewed as especially suitable because of its intrinsic open creativity, which engages and empowers children in group settings with hands-on activities. See also Differences in spatiotemporal brain network dynamics of Montessori and traditionally schooled students by Paola Zanchi, et al in npj Science of Learning (9/45, 2024, herein).

Human learning often involves embodied interactions with the material world. But today this increasingly includes generative artificial intelligence content. Here we ask how to assimilate these resources into our educational practices. Our focus will be on approaches that foster exploration and interaction, such as the carefully organized settings of Montessori methods. We surmise that generative AI should be a natural feature in these learning environs, facilitating sequences of prediction error and enabling trajectories of self-correction. (Excerpt)
