Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Knight, Will. The Dark Secret at the Heart of AI. MIT Technology Review. 120/3, 2017. A senior editor worries that "no one knows how the most advanced algorithms do what they do." In this section, we also want to record such reality checks so that, as this computational prowess bursts upon us, it remains within human control and service, rather than taking over. See also herein Women vs. the Machine by Erika Hayasaki for another dilemma.

Kozma, Robert, et al. Artificial Intelligence in the Age of Neural Networks and Brain Computing. Cambridge, MA: Academic Press, 2018. A large international volume with 15 authoritative chapters, ranging from The New AI: Basic Concepts and Urgent Risks to Evolving Deep Neural Networks. See especially A Half Century of Progress toward a Unified Neural Theory of Mind and Brain by Stephen Grossberg (search).

Krenn, Mario, et al. On Scientific Understanding with Artificial Intelligence. arXiv:2204.01467. Twelve scholars in Germany, Canada, the USA, and China, including Alán Aspuru-Guzik, post a wide-ranging survey as an early effort to understand, orient, enhance, and benefit from this imminent worldwide computational transition. The historic occasion of a cerebral, machine-based deep neural cognizance going on by itself is a revolutionary presence, with many issues and quandaries; the second quote may give some idea. Thus we repurpose, expand, and rename an Earthificial Intelligence: Deep Neural Network Computation Planetary Science section. See also Powerful "Machine Scientists" Distill the Laws of Physics from Raw Data by Charlie Wood in Quanta (May 10, 2022) for another array of novel paths.

Imagine an “oracle” that predicts the outcome of a particle physics experiment, the products of a chemical reaction, or the function of every protein. As scientists, we would not be satisfied, for we need to comprehend how these predictions were conceived. This feat of scientific understanding has long been the essential aim of science. Now the ever-growing power of computers and AI poses the question: how can advanced computer systems contribute to learning and discovery? At this early phase we seek advice from the philosophy of science, review the state of the art, and ask current researchers how they acquired novel findings this way. We hope our perspective inspires and focuses research towards devices and methods that foster and empower this worldwide facility. (Abstract excerpt, edit)

Three Dimensions of Computer-Assisted Understanding: We use scientific literature, personal anecdotes of many active users, and the philosophy of science to introduce a new classification of android contributions to scientific understanding. Such entities can act I) as a computational microscope, providing information not (yet) attainable by experiment, or II) as a resource of inspiration or artificial muse. In those two classes, the human investigator is essential to develop new insights to their full potential. Finally, an android can be III) an agent of understanding that generalizes observations and finds novel scientific concepts. (4)

Kriegeskorte, Nikolaus and Tai Golan. Neural Network Models and Deep Learning: A Primer for Biologists. arXiv:1902.04704. Columbia University neuroscientists provide a 14-page primer that would be a good entry for any field. Some sections are Neural nets are universal approximators, Deep networks can capture complex functions, and Deep learning by backpropagation.

Originally inspired by neurobiology, deep neural network models have become a powerful tool of machine learning and artificial intelligence, where they are used to approximate functions and dynamics by learning from examples. Here we give a brief introduction to neural network models and deep learning for biologists. We introduce feedforward and recurrent networks and explain the expressive power of this modeling framework and the backpropagation algorithm for setting the parameters. Finally, we consider how deep neural networks might help us understand the brain's computations. (Abstract)
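
To ground the primer's core themes (a feedforward mapping, expressive nonlinearity, and learning by backpropagation), here is a minimal NumPy sketch of a one-hidden-layer network trained on a toy regression task; the task and every parameter choice are our own illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression target: y = sin(x), approximated by a 1-16-1 network.
X = rng.uniform(-3, 3, size=(200, 1))
Y = np.sin(X)

# Parameters of the feedforward network (a small "universal approximator").
W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    # Forward pass: affine map -> tanh nonlinearity -> affine readout.
    H = np.tanh(X @ W1 + b1)
    Yhat = H @ W2 + b2
    err = Yhat - Y  # gradient of squared error w.r.t. the output (up to a constant)

    # Backpropagation: apply the chain rule layer by layer, output to input.
    dW2 = H.T @ err / len(X)
    db2 = err.mean(axis=0)
    dH = err @ W2.T * (1 - H**2)  # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH / len(X)
    db1 = dH.mean(axis=0)

    # Gradient-descent parameter updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final MSE:", float((err**2).mean()))
```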

Levi, DeHaan. LLM as Child Analogy. levidehaan.com. This is a posting by a veteran cyber designer whose group projects can be found on the above site. A longer title is Theorem: Evolutionary Pathway of LLMs under the "LLM as Child Analogy," which may be reached by Google keywords. We cite some excerpts which are similar to Marina Pantcheva’s views herein.

LLMs: A machine learning model designed to process and generate human-like text based on statistical patterns in data. Real-time Adaptability: The ability to modify its behavior based on new information. Memory Retrieval: A system to store, recall, and utilize past interactions. Decision-making Algorithm: A set of rules to make choices. Creative Reasoning: The capability to generate original content but not confined to it.

Children possess real-time adaptability, have a memory retrieval system, develop decision-making abilities, and have the capability for creative reasoning. If LLMs are to evolve along the lines of children, then the first logical step would be to implement real-time learning algorithms, moving from static to dynamic models. For LLMs to be more analogous to children, they would need the ability to generate new, original content, potentially through some form of creative reasoning. Achieving real-time adaptability would make LLMs dynamic learners, thereby aligning with human children.
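
To make the analogy's components concrete, a purely hypothetical sketch follows: the class, its methods, and the stand-in generator are our own illustration of how a static model might gain real-time adaptability through a memory retrieval loop, not DeHaan's code.

```python
from collections import deque

class ChildlikeAgent:
    """Hypothetical 'LLM as Child' loop: a frozen generator plus
    real-time adaptation via a stored-and-retrieved memory."""

    def __init__(self, generate_fn, memory_size=100):
        self.generate = generate_fn              # stands in for a static LLM
        self.memory = deque(maxlen=memory_size)  # memory retrieval system

    def recall(self, prompt, k=3):
        # Retrieve past interactions that share words with the prompt.
        words = set(prompt.lower().split())
        scored = sorted(self.memory,
                        key=lambda m: len(words & set(m.lower().split())),
                        reverse=True)
        return scored[:k]

    def respond(self, prompt):
        context = self.recall(prompt)               # memory retrieval
        reply = self.generate(prompt, context)      # decision-making / generation
        self.memory.append(f"{prompt} -> {reply}")  # real-time adaptation
        return reply

# Usage with a toy stand-in generator:
agent = ChildlikeAgent(lambda p, ctx: f"echo({p}; seen {len(ctx)} related)")
print(agent.respond("what is rain?"))
print(agent.respond("why does rain fall?"))  # second call can recall the first
```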

Levine, Yoav, et al. Deep Learning and Quantum Entanglement: A Fundamental Bridge. arXiv:1704.01552. Along with other entries (Beny, Golkov), Hebrew University of Jerusalem researchers, including Amnon Shashua, plumb the physical depths of this natural, informative synthesis across the physical cosmos to its cerebral emergence. (Over this stretch, might we imagine ourselves as a genesis universe’s way of attaining its own self-cognizance, and continuance?) See also Quantum Entanglement in Neural Network States at arXiv:1701.04844.

Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as well-defined quantifiers of a deep network's expressive ability to model intricate correlation structures of its inputs. (Abstract excerpt)
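
In schematic form (our notation, paraphrasing the paper's construction), the equivalence pairs the function computed by a convolutional arithmetic circuit with an N-body wave function sharing the same coefficient tensor, so that entanglement entropy across an input partition (A, B) quantifies the input correlations the network can express, with the per-layer channel counts bounding the attainable entropy:

```latex
h(x_1,\dots,x_N) = \sum_{d_1,\dots,d_N} \mathcal{A}_{d_1 \dots d_N}
    \prod_{i=1}^{N} f_{d_i}(x_i)
\;\longleftrightarrow\;
|\psi\rangle = \sum_{d_1,\dots,d_N} \mathcal{A}_{d_1 \dots d_N}\,
    |d_1\rangle \otimes \cdots \otimes |d_N\rangle

S(A) = -\operatorname{Tr}\!\left( \rho_A \log \rho_A \right),
\qquad \rho_A = \operatorname{Tr}_B\, |\psi\rangle\langle\psi| .
```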

Li, Qing, et al. Progress and Opportunities of Foundation Models in Bioinformatics. arXiv:2402.04286. Chinese University of Hong Kong and BioMap, Beijing computer scientists provide a wide-ranging perspective on this mid-2020s synthesis of the bioinformatics approach, whose namesake journal goes back to 1985, with these novel AI neural-net large language models as they become amenable.

Bioinformatics has witnessed a paradigm shift with the increasing integration of artificial intelligence (AI) and the adoption of foundation models (FMs). These AI techniques have addressed prior issues in bioinformatics such as scarce annotations and data noise. FMs are adept at handling large-scale, unlabeled data, which has allowed them to achieve notable results in downstream validation tasks. The primary goal of this survey is to conduct a systematic investigation and summary of FMs in bioinformatics, tracing their evolution, current research status, and the methodologies employed. Finally, we outline potential development paths and strategies for FMs in future biological research. (Excerpt)
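
The abstract's point about unlabeled data rests on self-supervised pretraining; as a hedged illustration (function and token names are ours, and real models differ in detail), here is a toy masked-token objective over DNA sequences:

```python
import random

random.seed(0)
BASES = "ACGT"

def mask_sequence(seq, rate=0.15, mask_token="N"):
    """Toy masked-token objective: hide a fraction of bases so a model
    must recover them from context alone, needing no annotations."""
    masked, targets = [], {}
    for i, base in enumerate(seq):
        if random.random() < rate:
            targets[i] = base        # ground truth comes from the data itself
            masked.append(mask_token)
        else:
            masked.append(base)
    return "".join(masked), targets

seq = "".join(random.choice(BASES) for _ in range(60))
masked, targets = mask_sequence(seq)
print(masked)
print(f"{len(targets)} positions to predict from unlabeled sequence alone")
```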

Lin, Henry and Max Tegmark. Why Does Deep and Cheap Learning Work So Well? arXiv:1608.08225. The Harvard and MIT polymaths review the recent successes of these neural net, multiscale, algorithmic operations (definitions vary) from a statistical physics context, such as renormalization groups and symmetric topologies. (Intelligent Evolution)
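
One strand of their argument can be put compactly (our paraphrase of the paper's Hamiltonian formulation): because the physical world tends to generate data through low-order, local, symmetric Hamiltonians, the posteriors a classifier must compute reduce to a softmax over those Hamiltonians, which a network of modest size can represent cheaply.

```latex
% Class-conditional distributions in Boltzmann/Hamiltonian form:
p(x \mid y) = \frac{e^{-H_y(x)}}{Z_y}

% Bayes' theorem then makes the posterior a softmax of the Hamiltonians:
p(y \mid x) = \frac{e^{-H_y(x) + \mu_y}}{\sum_{y'} e^{-H_{y'}(x) + \mu_{y'}}},
\qquad \mu_y \equiv \ln p(y) - \ln Z_y .
```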

Liu, Weibo, et al. A Survey of Deep Neural Network Architectures and their Applications. Neurocomputing. 234/11, 2017. As the abstract notes, Brunel University London, Xiamen University, Yangzhou University, and King Abdulaziz University, Jeddah computer engineers provide a wide-ranging tutorial on these increasingly useful cognitive methods.

Since the proposal of a fast learning algorithm for deep belief networks in 2006, the deep learning techniques have drawn ever-increasing research interests because of their inherent capability of overcoming the drawback of traditional algorithms dependent on hand-designed features. Deep learning approaches have also been found to be suitable for big data analysis with successful applications to computer vision, pattern recognition, speech recognition, natural language processing, and recommendation systems. In this paper, we discuss some widely-used deep learning architectures and their practical applications. An up-to-date overview is provided on four deep learning architectures, namely, autoencoder, convolutional neural network, deep belief network, and restricted Boltzmann machine. Different types of deep neural networks are surveyed and recent progresses are summarized. (Abstract)
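
Of the four surveyed architectures, the autoencoder is the simplest to sketch; the following linear toy version (our own example, not from the survey) learns a compressed code by minimizing reconstruction error:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 8-dimensional points lying near a 2-D subspace.
Z = rng.normal(size=(300, 2))
X = Z @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(300, 8))

# Linear autoencoder: encode 8 -> 2, decode 2 -> 8, trained to reconstruct X.
We = rng.normal(0, 0.1, (8, 2))   # encoder weights
Wd = rng.normal(0, 0.1, (2, 8))   # decoder weights
lr = 0.01

for step in range(2000):
    code = X @ We        # compressed representation (the "bottleneck")
    Xhat = code @ Wd     # reconstruction from the code
    err = Xhat - X
    # Gradients of the mean-squared reconstruction error.
    gWd = code.T @ err / len(X)
    gWe = X.T @ (err @ Wd.T) / len(X)
    Wd -= lr * gWd
    We -= lr * gWe

print("reconstruction MSE:", float((err**2).mean()))
```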

Lucie-Smith, Luisa, et al. Machine Learning Cosmological Structure Formation. arXiv:1802.04271. We cite this entry by University College London astrophysicists, including Hiranya Peiris, as an example of the widest range across which new cerebral-based artificial intelligence methods can be applied. If we reflect, who is this person/sapiensphere prodigy that so proceeds as the universe’s way of achieving its own self-quantified description?

Maheswaranathan, Niru, et al. Universality and Individuality in Neural Dynamics across Large Populations of Recurrent Networks. arXiv:1907.08549. By virtue of the latest sophistications, Google Brain and Stanford University AI researchers are able to discern and report “representational similarities” between “biological and artificial networks.” These qualities are then seen in effect across an array of personal and communal affinities.
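
As one common way to quantify such representational similarity, here is a simplified canonical-correlation sketch in NumPy; it is our illustration of the general technique, not the paper's actual analysis pipeline (which also studies fixed-point topology).

```python
import numpy as np

def cca_similarity(A, B):
    """Mean canonical correlation between two sets of hidden-state
    trajectories (rows = timepoints, cols = units)."""
    A = A - A.mean(0); B = B - B.mean(0)
    # Orthonormalize each representation via QR decomposition.
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    # Singular values of Qa^T Qb are the canonical correlations.
    s = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return s.mean()

rng = np.random.default_rng(2)
shared = rng.normal(size=(500, 3))  # common low-dimensional dynamics
# Two "networks" embed the same dynamics in different 32-unit coordinates.
net1 = shared @ rng.normal(size=(3, 32)) + 0.1 * rng.normal(size=(500, 32))
net2 = shared @ rng.normal(size=(3, 32)) + 0.1 * rng.normal(size=(500, 32))
print("CCA similarity:", round(cca_similarity(net1, net2), 3))
```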

Manyika, James, ed. AI & Society. Daedalus. Spring, 2022. A timely, dedicated survey with entries such as If We Succeed by Stuart Russell, A Golden Decade of Deep Learning by Jeffrey Dean, Language & Coding Creativity by Ermira Murati, and Signs Taken for Wonders: AI, Art & the Matter of Race by Michele Elam.

AI is transforming our relationships with technology and with others, our senses of self, as well as our approaches to health care, banking, democracy, and the courts. But while AI in its many forms has become ubiquitous and its benefits to society and the individual have grown, its impacts are varied. Concerns about its unintended effects and misuses have become paramount in conversations about the successful integration of AI in society. This volume explores the many facets of artificial intelligence: its technology, its potential futures, its effects on labor and the economy, its relationship with inequalities, its role in law and governance, its challenges to national security, and what it says about us as humans. (Issue review)
