Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe
Table of Contents
Introduction
Genesis Vision
Learning Planet
Organic Universe
Earth Life Emerge
Genesis Future
Glossary
Recent Additions

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Graupe, Daniel. Deep Learning Neural Networks. Singapore: World Scientific, 2016. A senior University of Illinois electrical engineer provides a technical introduction to basic concepts, scope and methods, including back-propagation, convolutional and recurrent architectures and much more, along with many case studies.

Deep Learning Neural Networks is the fastest growing field in machine learning. It serves as a powerful computational tool for solving prediction, decision, diagnosis and detection problems based on a well-defined computational architecture. It has been successfully applied to a broad range of applications, from computer security, speech recognition, and image and video recognition to industrial fault detection, medical diagnostics and finance. This work is intended for use as a one-semester graduate-level university text and as a textbook for research and development establishments in industry, medicine and financial research.

Hassabis, Demis, et al. Neuroscience-Inspired Artificial Intelligence. Neuron. 95/2, 2017. We note this entry because the lead author is the 2010 founder of DeepMind, a premier AI enterprise based in London, which was purchased in 2014 by Google for over $500 million. It is a broad survey of the past, present, and future of this brain-based endeavor guided by advances in how cerebral network dynamics are composed, think, and actively learn.

The successful transfer of insights gained from neuroscience to the development of AI algorithms is critically dependent on the interaction between researchers working in both these fields, with insights often developing through a continual handing back and forth of ideas between fields. In the future, we hope that greater collaboration between researchers in neuroscience and AI, and the identification of a common language between the two fields, will permit a virtuous circle whereby research is accelerated through shared theoretical insights and common empirical advances. We believe that the quest to develop AI will ultimately also lead to a better understanding of our own minds and thought processes. (Conclusion, 255)

Hayakawa, Takashi and Toshio Aoyagi. Learning in Neural Networks Based on a Generalized Fluctuation Theorem. arXiv:1504.03132. Reported more fully in Universality Affirmations; as the Abstract details, Kyoto University researchers join systems physics and neuroscience to reveal a persistence of universally recurring phenomena under a rubric of fluctuation theorems.

Higgins, Irina, et al. Symmetry-Based Representations for Artificial and Biological General Intelligence. arXiv:2203.09250. DeepMind, London researchers Irina Higgins, Sebastien Racaniere and Danilo Rezende scope out ways that an intersection of computational frontiers with neuroscience studies can benefit each field going forward. Once again, an Earthificial realm becomes more brain-like, as human beings may first program the algorithms so that they can process their appointed tasks (if all goes to plan) and come up with vital contributions on their own.

Biological intelligence is remarkable in its ability to produce complex behaviour in diverse situations. An ability to learn sensory representations is a vital need, yet there is little agreement as to what a good representation should look like. In this review we argue that symmetry transformations are a main principle. The idea that these transformations affect some aspects of a system but not others has become central in modern physics. Recently, symmetries have gained prominence in machine learning (ML) by way of more data-efficient and generic algorithms that mimic complex behaviors. Taken together, these symmetrical effects suggest a natural framework that determines the structure of the universe and consequently shapes both biological and artificial intelligences. (Abstract excerpt)
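The symmetry principle in this excerpt can be made concrete: a circular convolution commutes with cyclic shifts of its input, which is the translation equivariance that convolutional networks exploit. A minimal NumPy sketch (the function name and test signal are our own illustration, not from the paper):

```python
import numpy as np

def circular_conv(x, k):
    """Circular (cyclic) convolution of signal x with kernel k."""
    n = len(x)
    return np.array([sum(x[(i - j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

x = np.array([1.0, 2.0, 0.0, -1.0, 3.0, 0.5])
k = np.array([0.5, 0.25, 0.25])
shift = 2

# Equivariance: convolving a shifted signal equals shifting the convolved one.
lhs = circular_conv(np.roll(x, shift), k)
rhs = np.roll(circular_conv(x, k), shift)
assert np.allclose(lhs, rhs)
```

The same commutation property, generalized to other transformation groups, is what the authors mean by building symmetry into a representation.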

Kelleher, John. Deep Learning. Cambridge: MIT Press, 2019. John D. Kelleher, Academic Leader of the Information, Communication, and Entertainment Research Institute at the Technological University Dublin, provides another up-to-date survey that is wide-ranging in scope along with in-depth examples.

In the MIT Press Essential Knowledge series, computer scientist John Kelleher offers an accessible, concise and comprehensive introduction to the artificial intelligence revolution and its techniques. He explains, for example, how deep learning enables data-driven decisions by identifying and extracting patterns from large datasets, and how it learns from large, complex data sets. He describes important deep learning architectures such as autoencoders and recurrent neural networks, as well as recent developments such as Generative Adversarial Networks.

Kitano, Hiroaki. Nobel Turing Challenge: Creating the Engine for Scientific Discovery. NPJ Systems Biology. 7/29, 2021. A leading Japanese executive scientist who directs its Systems Biology Institute outlines a comprehensive, insightful project, as it becomes more evident that AI computational algorithmic capacities, if properly informed and trained, can proceed to run programs, process data, iterate, and optimize research studies on their own. Since this frontier now involves many worldwise collaborations, as the spiral turns perhaps a new collective group “Global Prize” in recognition would be appropriate. And this time it should include the missing life and mind sciences. See also herein entries by Charlie Wood as this Earthuman acumen gains momentum.

Scientific discovery has long been one of the central driving forces in our civilization. It uncovered the principles of the world we live in, and enabled us to invent new technologies reshaping our society, cure diseases, explore unknown new frontiers, and hopefully lead us to build a sustainable society. In this regard, we propose an overall “science of science” to guide and boost this endeavor going forward. A prime facility in these 2020s thus needs to be a viable integration of artificial intelligence (AI) systems. We are aware that the contributions of “AI Scientists” may not resemble human science, but deep hybrid-AI methods could take us beyond our cognitive limitations and sociological constraints. (Excerpt)

Knight, Will. The Dark Secret at the Heart of AI. MIT Technology Review. 120/3, 2017. A senior editor worries that no one knows how the most advanced algorithms do what they do. In this section, we also want to record such reality checks so that as this computational prowess bursts upon us, it remains within human control and service, rather than taking over. See also herein Women vs. the Machine by Erika Hayasaki for another dilemma.

Kozma, Robert, et al. Artificial Intelligence in the Age of Neural Networks and Brain Computing. Cambridge, MA: Academic Press, 2018. A large international edited collection with 15 authoritative chapters, from The New AI: Basic Concepts and Urgent Risks to Evolving Deep Neural Networks. See especially A Half Century of Progress toward a Unified Neural Theory of Mind and Brain by Stephen Grossberg (search).

Krenn, Mario, et al. On Scientific Understanding with Artificial Intelligence. arXiv:2204.01467. Twelve scholars in Germany, Canada, the USA, and China, including Alan Aspuru-Guzik, post a wide-ranging survey as an early effort to understand, orient, enhance and benefit from this imminent worldwise computational transition. But the historic occasion of some cerebral, machine neural deep learning cognizance going on by itself is such a revolutionary presence, with many issues and quandaries; the second quote might give some idea. Thus we repurpose, expand and rename an Earthificial Intelligence: Deep Neural Network Computation Planetary Science section. See also Powerful “Machine Scientists” Distill the Laws of Physics from Raw Data by Charlie Wood in Quanta (May 10, 2022) for another array of novel paths.

Imagine an “oracle” that predicts the outcome of a particle physics experiment, the products of a chemical reaction, or the function of every protein. As scientists, we would not be satisfied, for we need to comprehend how these predictions were conceived. This feat of scientific understanding has long been the essential aim of science. Now, the ever-growing power of computers and AI poses the question: how can advanced computer systems contribute to learning and discovery? At this early phase we seek advice from the philosophy of science, review the state of the art, and ask current researchers how they acquired novel findings this way. We hope our perspective inspires and focuses research towards devices and methods that foster and empower this worldwide facility. (Abstract excerpt, edit)

Three Dimensions of Computer-Assisted Understanding: We use the scientific literature, personal anecdotes of many active users, and the philosophy of science to introduce a new classification of android contributions to scientific understanding. Such entities can act I) as a computational microscope, providing information not (yet) attainable by experiment, or II) as a resource of inspiration or artificial muse. In those two classes, the human investigator is essential to develop the new insights to their full potential. Finally, an android can be III) an agent of understanding, by generalizing observations and finding novel scientific concepts. (4)

Kriegeskorte, Nikolaus and Tai Golan. Neural Network Models and Deep Learning: A Primer for Biologists. arXiv:1902.04704. Columbia University neuroscientists provide a 14-page primer which would be a good entry point for any field. Some sections are Neural nets are universal approximators, Deep networks can capture complex functions, and Deep learning by backpropagation.

Originally inspired by neurobiology, deep neural network models have become a powerful tool of machine learning and artificial intelligence, where they are used to approximate functions and dynamics by learning from examples. Here we give a brief introduction to neural network models and deep learning for biologists. We introduce feedforward and recurrent networks and explain the expressive power of this modeling framework and the backpropagation algorithm for setting the parameters. Finally, we consider how deep neural networks might help us understand the brain's computations. (Abstract)
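To give the primer's backpropagation theme a concrete shape, here is a self-contained toy run: a two-layer feedforward network fit to XOR by gradient descent. This is a generic sketch in NumPy, not code from the paper; the layer sizes, learning rate and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # XOR inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss():
    p = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
    return float(np.mean((p - y) ** 2))

loss_before = loss()
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass, hidden layer
    p = sigmoid(h @ W2 + b2)            # forward pass, output
    dp = (p - y) * p * (1 - p)          # error signal at the output
    dW2, db2 = h.T @ dp, dp.sum(0)      # output-layer gradients
    dh = (dp @ W2.T) * (1 - h ** 2)     # backpropagate through tanh
    dW1, db1 = X.T @ dh, dh.sum(0)      # hidden-layer gradients
    W1 -= lr * dW1; b1 -= lr * db1      # gradient-descent update
    W2 -= lr * dW2; b2 -= lr * db2
loss_after = loss()
assert loss_after < loss_before         # training reduced the error
```

The backward pass simply applies the chain rule layer by layer, which is all that "deep learning by backpropagation" amounts to at this scale.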

Levi, DeHaan. LLM as Child Analogy. levidehaan.com. This is a posting by a veteran cyber designer whose group projects can be found on the above site. A longer title is Theorem: Evolutionary Pathway of LLMs under the “LLM as Child Analogy,” which may be reached by Google keywords. We cite some excerpts, which are similar to Marina Pantcheva’s views herein.

LLMs: A machine learning model designed to process and generate human-like text based on statistical patterns in data. Real-time Adaptability: The ability to modify its behavior based on new information. Memory Retrieval: A system to store, recall, and utilize past interactions. Decision-making Algorithm: A set of rules to make choices. Creative Reasoning: The capability to generate original content but not confined to it.

Children possess real-time adaptability, have a memory retrieval system, develop decision-making abilities, and hold the capability for creative reasoning. If LLMs are to evolve along the lines of children, then the first logical step would be to implement real-time learning algorithms, moving from static to dynamic models. For LLMs to be more analogous to children, they would need the ability to generate new, original content, potentially through some form of creative reasoning. Achieving real-time adaptability would make LLMs dynamic learners, thereby aligning with human children.
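As a thought experiment, the memory-retrieval ingredient of the analogy can be sketched as a wrapper that stores past exchanges and recalls the most similar ones for reuse. Every name below is hypothetical, our own illustration rather than an API from the posting, and word overlap stands in for real embedding similarity.

```python
class InteractionMemory:
    """Hypothetical store-and-recall layer for an LLM's past exchanges."""

    def __init__(self):
        self.episodes = []  # list of (prompt, response) pairs

    def store(self, prompt, response):
        self.episodes.append((prompt, response))

    def recall(self, query, k=2):
        # Rank stored prompts by word overlap with the query: a crude
        # stand-in for embedding-based retrieval.
        words = set(query.lower().split())
        return sorted(self.episodes,
                      key=lambda ep: len(words & set(ep[0].lower().split())),
                      reverse=True)[:k]

mem = InteractionMemory()
mem.store("what is the capital of france", "Paris")
mem.store("how do plants make food", "Photosynthesis")
best = mem.recall("capital of france", k=1)[0]
assert best[1] == "Paris"
```

Feeding such recalled episodes back into the next prompt is one simple way a static model could begin to exhibit the dynamic, experience-shaped behavior the analogy attributes to children.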

Levine, Yoav, et al. Deep Learning and Quantum Entanglement: A Fundamental Bridge. arXiv:1704.01552. Along with other entries (Beny, Golkov), Hebrew University of Jerusalem researchers, including Amnon Shashua, plumb the physical depths of this natural, informative synthesis across the physical cosmos to its cerebral emergence. (Over this stretch, might we imagine ourselves as a genesis universe’s way of attaining its own self-cognizance, and continuance?) See also Quantum Entanglement in Neural Network States at arXiv:1701.04844.

Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as well-defined quantifiers of a deep network's expressive ability to model intricate correlation structures of its inputs. (Abstract excerpt)
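The entanglement measures the abstract invokes rest on a standard computation: the entanglement entropy of a bipartite pure state, obtained from the singular values of its coefficient matrix. A minimal NumPy illustration (our own sketch, not the paper's code):

```python
import numpy as np

def entanglement_entropy(psi, dim_a, dim_b):
    """Von Neumann entropy (in bits) across an A|B cut of a pure state."""
    coeffs = np.asarray(psi, dtype=float).reshape(dim_a, dim_b)
    s = np.linalg.svd(coeffs, compute_uv=False)  # Schmidt coefficients
    p = s ** 2 / np.sum(s ** 2)                  # Schmidt probabilities
    p = p[p > 1e-12]                             # drop numerical zeros
    return float(-np.sum(p * np.log2(p)))

product = np.array([1, 0, 0, 0])             # |00>: uncorrelated parts
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
ent_product = entanglement_entropy(product, 2, 2)   # expect 0 bits
ent_bell = entanglement_entropy(bell, 2, 2)         # expect 1 bit
```

In the paper's framing, the same quantity applied to the function a convolutional arithmetic circuit realizes measures how intricately the network can correlate different parts of its input.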
