Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Twintelligent Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

This new section, posted in 2017, surveys and reports the turn of “artificial intelligence” or AI from a machine computation scheme to new understandings that biological brains, as they fluidly form, think, perceive, and gain knowledge, can provide a much better model. This revolution parallels and is informed by neuroscience findings of cerebral node/link modularity, net communities, rich hubs, multiplex dynamics of neuron/synapse topologies, and emergent cognizance. A prime quality of these nets is their self-organized, critically poised, self-corrective, iterative education, and especially their achievement of pattern recognition, which we people do so well. “Deep” means that several interactive network layers or phases are in effect, rather than the single level of “shallow” AI.
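
To make that depth distinction concrete, here is a minimal Python sketch, offered as an illustration only: a "shallow" net applies one nonlinear layer, while a "deep" one stacks several. The sizes, random weights, and tanh nonlinearity are assumptions for demonstration, not any cited system.

import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    # one fully connected layer: random weights plus a tanh nonlinearity
    w = rng.normal(size=(x.shape[-1], n_out))
    return np.tanh(x @ w)

x = rng.normal(size=(1, 16))               # a toy input vector
shallow = layer(x, 4)                      # one level: "shallow" AI
deep = layer(layer(layer(x, 32), 32), 4)   # several stacked layers: "deep"
print(shallow.shape, deep.shape)           # both (1, 4); only the depth differs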

Another consequence is an increasing use of “artificial” neural net (ANN) techniques to handle vast data inputs from worldwide astronomic, quantum, chemical, genetic, and other research realms. They also aid studies of life’s organismic physiologies and evolutionary course, along with social media, behavioral, traffic, populace, and economic activities. Bayesian methods of sequentially optimizing probabilities toward good enough answers are often used in concert. These citations survey this growing collaborative advance; see, e.g., Quantum Codes from Neural Networks (Bausch 2020). They also bode well for another window on the discovery of a natural universality (section IV. B) as brains, genomes, quantum phenomena, creatures, societies, literary corpora, and all else become treatable by the one, same exemplary “-omics” code.
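
As a minimal sketch of that Bayesian ingredient, the toy Python loop below sequentially updates a Beta posterior over an unknown rate and stops once the estimate is good enough; the coin-flip setting and stopping threshold are illustrative assumptions, not any cited method.

import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 1.0, 1.0                     # uniform Beta prior on an unknown rate
n = 0
for flip in rng.random(500) < 0.7:         # stream of observations (true rate 0.7)
    n += 1
    alpha += flip                          # conjugate update: the posterior
    beta += 1 - flip                       # remains a Beta distribution
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    if var < 1e-3:                         # stop once uncertainty is small enough
        break
print(f"estimated rate {alpha / (alpha + beta):.2f} after {n} observations")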

And in accord with our website premise that an emergent cumulative transition is underway to a sapient personsphere, we take license to dub this movement an Earthificial Intelligence. If scientists and scholars are presently applying neural architectures and capabilities to advance their local and global projects, these endeavors could appear as the evidential guise of a bicameral noosphere. Rather than an invasive technical artifice and/or singularity which might take off by itself, this prodigious progeny could be appreciated as learning and educating on her/his own.

Since c. 2015 a revision from ineffective machine methods to a biological, human-brain-based approach has gone forward. It is notable that “artificial” neural nets have found much analytic utility from quantum to galactic studies, which is indicative of nature’s common recurrence. The year 2023 saw a rush of Large Language Model and ChatGPT applications, with both capabilities and concerns. In August I heard Eric Schmidt say on TV that a root issue is the absence of any philosophic guidance. In this regard, into early 2024 I have come across several entries (herein Michael Frank, Marina Pantcheva, Claire Stevenson, Levi DeHaan, Azzurra Ruggeri, Pierre Oudeyer) which perceive a general similarity between how LLM content seems to form and operate and the inquisitive trials and triumphs by which children learn to speak and gain knowledge.

I next refer to our own 2020s venue as PediaPedia Earthica, whose occasion is attributed to a major evolutionary transition in individuality (see section V. A) as life’s emergent scale ascends to a consummate personsphere stage. We have been using names such as planetary progeny, prodigy, and EarthKinder for this novel presence and a global sapiensphere. To follow this theme, an actual identity can be proposed as this composite Earthwise intelligence (EI) educates her/his self.

ChatGPT is an artificial intelligence (AI) application that uses natural language processing to create human-like conversational dialogue. It can respond to questions and compose written content such as articles, social media posts, essays, and emails. ChatGPT is a form of generative AI, a tool that lets users enter prompts to receive images, text, or videos. GPT means “Generative Pre-trained Transformer,” which refers to how it processes requests and formulates responses. It is trained with reinforcement learning through human feedback that ranks the best responses.
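
As a minimal sketch of that ranking step, the toy Python below trains a one-parameter reward model so that human-preferred responses score above rejected ones via a pairwise logistic loss. The single feature, the data, and the learning rate are illustrative assumptions, not OpenAI's actual system.

import numpy as np

w = 0.0                                        # reward model: score = w * feature
pairs = [(0.9, 0.2), (0.7, 0.1), (0.8, 0.4)]   # (preferred, rejected) response features
for _ in range(100):
    for good, bad in pairs:
        # pairwise logistic loss: push the preferred score above the rejected one
        p = 1 / (1 + np.exp(-(w * good - w * bad)))
        grad = (p - 1) * (good - bad)          # gradient of -log(p) with respect to w
        w -= 0.5 * grad                        # gradient descent step
print(f"learned weight {w:.2f}: preferred responses now score higher")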


Bahri, Yasaman, et al. Statistical Mechanics of Deep Learning. Annual Review of Condensed Matter Physics. 11/501, 2020.

Bausch, Johannes and Felix Leditzky. Quantum Codes from Neural Networks. New Journal of Physics. 22/023005, 2020.

Botvinick, Matthew. Realizing the Promise of AI: A New Challenge for Cognitive Science. Trends in Cognitive Sciences. 26/12, 2022.

Chantada, Augusto, et al. Cosmological Informed Neural Networks to Solve the Background Dynamics of the Universe. arXiv:2205.02945.

Hayasaki, Erika. Women vs. the Machine. Foreign Policy. Jan/Feb, 2017.

Krenn, Mario, et al. On Scientific Understanding with Artificial Intelligence. arXiv:2204.01467.

Manyika, James, ed. AI & Society. Daedalus. Spring 2022.

Mitchell, Melanie. What Does It Mean to Align AI with Human Values? Quanta. December 13, 2022.

Ohler, Simon, et al. Towards Learning Self-Organized Criticality of Rydberg Atoms using Graph Neural Networks. arXiv:2207.08927.

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin, 2020.

Sejnowski, Terrence. The Deep Learning Revolution. Cambridge: MIT Press, 2018.

Sprague, Kyle, et al. Watch and Learn – A Generalized Approach for Transferrable Learning in Deep Neural Networks via Physical Principles. Machine Learning: Science and Technology. 2/2, 2021.

Tzachor, Asaf, et al. Artificial Intelligence in a Crisis Needs Ethics with Urgency. Nature Machine Intelligence. 2/365, 2020.

Weng, Kangyu, et al. Statistical Physics of Deep Neural Networks. arXiv:2212.01744.

Wood, Charlie. How to Make the Universe Think for Us. Quanta. June 1, 2022.

Can We Build a Brain? www.pbs.org/video/nova-wonders-can-we-build-a-brain-j53aqg. This April 2018 NOVA Wonders program provides a brilliant introductory survey of these active frontiers of Artificial Intelligence and Deep Neural Network Learning. An extraordinary array of contributors such as Fei-Fei Li, Christof Koch, Rodney Brooks, DeepMind experts, to cite a few, and especially Rana El Kaliouby, reveal a grand project with immense promise for peoples and planet if it can be respectfully guided and carried forth.

Information-Theoretic Approaches in Deep Learning. www.mdpi.com/journal/entropy/special_issues/deep_learning. This page announces a special issue of the online MDPI journal Entropy, open for manuscripts until December 2018. It is conceived and edited by Deniz Gencaga, an Antalya Bilim University, Turkey, professor of electrical engineering.

Deep Learning (DL) has revolutionized machine learning, especially in the last decade. As a benefit of this unprecedented development, we are capable of working with very large Neural Networks (NNs), composed of multiple layers (Deep Neural Networks), in many applications, such as object recognition-detection, speech recognition, and natural language processing. Although many Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based algorithms have been proposed, a comprehensive theoretical understanding of DNNs remains a major research area. Recently, we have seen an increase in the number of approaches that are based on information-theoretic concepts, such as Mutual Information. In this Special Issue, we would like to collect papers focusing on both the theory and applications of information-theoretic approaches for Deep Learning. The application areas are diverse and include object tracking/detection, speech recognition, natural language processing, neuroscience, bioinformatics, engineering, finance, astronomy, and Earth and space sciences.
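
To illustrate the mutual-information concept named in this call, here is a small Python sketch that estimates I(X; T) between a toy input and a noisy "layer" activation by discretizing both; the binning choice and the tanh toy layer are assumptions for demonstration only.

import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=5000)                            # toy input signal
t = np.tanh(2.0 * x + 0.1 * rng.normal(size=5000))   # a noisy "layer" activation

def mutual_information(a, b, bins=20):
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p = joint / joint.sum()                          # estimated joint distribution
    px, pt = p.sum(axis=1), p.sum(axis=0)            # marginal distributions
    nz = p > 0                                       # avoid log(0) on empty bins
    return float((p[nz] * np.log(p[nz] / np.outer(px, pt)[nz])).sum())

print(f"I(X; T) ≈ {mutual_information(x, t):.2f} nats")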

Is AI Extending the Mind? www.crosslabs.org/workshop-2022. A virtual workshop held April 11–15, 2022, with video presentations such as On AI & Ecosystems by Alan Dorin, On Enactive AI by Tom Froese & Dobromir Dotov, and On Autonomous Agents and Semantic Information by Artemy Kolchinsky.

Pierre-Yves Oudeyer. www.pyoudeyer.com. The French computational psychologist (search) is the director of the Flowers project-team at the Inria Center of the University of Bordeaux. Current (March 2024) projects are now much involved with chatty AI features guided by insights gained from studies with children. A recent talk is Developmental AI: Machines that Learn Like Children and Help Children Learn Better. As the quotes say, another senior scholar finds evidence that both youngsters and large language models use trial-and-error iterative methods in similar ways. See also Open-ended Learning and Development in Machines and Humans on the flowers.inria.fr site.


Together with a great team, I study lifelong autonomous learning, and the self-organization of behavioural, cognitive and language structures at the frontiers of artificial intelligence and cognitive sciences. I use machines as tools to understand better how children learn and develop, and I study how one can build machines that learn autonomously like children, as well as integrate within human cultures, within the new field of developmental artificial intelligence. (P-Y O)

The Flowers project-team, at the University of Bordeaux and at Ensta ParisTech, studies versions of holistic individual development. These models can help us better understand how children learn, as well as to build machines that gain knowledge as children do, aka developmental artificial intelligence, with applications in educational technologies, automated discovery, robotics and human-computer interaction.

Power and Limits of Artificial Intelligence. www.pas.va/content/accademia/en/publications/scriptavaria/artificial_intelligence. A site for the proceedings of a Pontifical Academy of Sciences workshop held in late 2016 on this advance and concern. A premier array of neuroscientists and computer scientists such as Stanislas Dehaene, Wolf Singer, Yann LeCun, Patricia Churchland, Demis Hassabis, and Elizabeth Spelke spoke, whose presentations in both video and text are available on this site. Search also Dehaene 2017 for a major paper in Science (358/486) as a follow-up on his talk and this event.

Ananthaswamy, Anil. Why Machines Learn: The Elegant Math Behind Modern AI. New York: Dutton, 2024. The veteran science expositor provides a comprehensive discourse from historic origins such as Gottfried Leibniz’s alphabets, algebra, and calculus, through late 20th century algorithmic computations, and on to the present ChatGPT neural net, large language model phase.

This work provides a grand narrative of the deep mathematics that underlie and drive machine learning and AI advances. Machine learning now influences developments and discoveries in chemistry, biology, and physics, including the study of genomes, extra-solar planets, even quantum systems. As Ananthaswamy concludes, to make safe and effective use of artificial intelligence, we need to understand the capabilities and limitations that lie in the math that makes it possible.

Ouyang, Siru, et al. Structured Chemistry Reasoning with Large Language Models. arXiv:2311.09656. We cite this paper by University of Illinois, Shanghai Jiao Tong University, NYU, and UC San Diego computational chemists as an instance of a workable advance for this basic science by way of 2023 chatbot and large language content facilities.

This paper studies how to solve complex chemistry problems with large language models (LLMs). Despite their extensive general knowledge, they struggle with an integrative understanding of chemical reactions. We propose InstructChem, a new structured approach that boosts the LLMs' capabilities. InstructChem involves three phases, including chemical formulae generation that offers the basis for step-by-step reasoning toward a preliminary answer, and iterative review-and-refinement that steers LLMs to progressively revise the previous phases with increasing confidence, leading to the final high-confidence answer. (Excerpt)
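
As a rough sketch of the phased loop this excerpt describes, the Python below chains formulae generation, step-by-step reasoning, and review-and-refinement. The ask_llm stub and the "no errors" confidence check are hypothetical placeholders, not the authors' code.

def ask_llm(prompt: str) -> str:
    # hypothetical stand-in for a call to any large language model API
    return "no errors"  # placeholder reply so the sketch runs end to end

def solve_chemistry_problem(question: str, max_rounds: int = 3) -> str:
    # phase 1: gather the chemical formulae the problem needs
    formulae = ask_llm(f"List the chemical formulae needed for: {question}")
    # phase 2: reason step by step toward a preliminary answer
    answer = ask_llm(f"Using {formulae}, reason step by step and answer: {question}")
    # phase 3: iteratively review and refine until confidence is high
    for _ in range(max_rounds):
        review = ask_llm(f"Review this answer for errors: {answer}")
        if "no errors" in review.lower():   # crude confidence signal
            break
        answer = ask_llm(f"Revise the answer given this critique: {review}")
    return answer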

Aggarwal, Charu. Neural Networks and Deep Learning. International: Springer, 2018. The IBM Watson Center senior research member provides a copious current textbook for this active revolution. Ten chapters go from the AI machine advance to brain- and behavior-based methods, on to features such as training, regularization, linear/logistic regression, and matrix factorization, along with neural Turing machines, Kohonen self-organizing maps, and recurrent and convolutional nets.

Alexander, Victoria, et al. Living Systems are Smarter Bots: Slime Mold Semiosis versus AI Symbol Manipulation. Biosystems. August, 2021. Within generous biosemiotic literacies that perceive living nature as graced with an incarnate intelligence and narrative scriptome, ITMO University, Russia, Dactyl Foundation, NYC and Autonomous University of Barcelona scholars describe how simple microbial realms (in body, not mind) can express cognitive abilities far beyond lumpen machines.

Alser, Mohammed, et al. Going from Molecules to Genomic Variations to Scientific Discovery. arXiv:2205.07957. We cite this entry by an eight-person ETH Zurich team to record a dedicated project to access the latest deep learning techniques so as to achieve a realm of Intelligent Algorithms and Architectures (hardware) for next-generation sequencing needs.

A great need now exists to intelligently read, analyze, and interpret our genomes not only quickly, but also accurately and efficiently enough to scale to population levels. Here we describe much improved genome studies by way of novel AI algorithms and architectures. Algorithms can access genomic structures as well as the underlying hardware. We move on to future challenges, benefits, and research directions opened by new sequencing technologies and specialized hardware chips. (Excerpt)

Anshu, Anurag, et al. Sample-efficient Learning of Interacting Quantum Systems. Nature Physics. 17/8, 2021. We cite this entry by UC Berkeley, IBM Watson Research, RIKEN Center, Tokyo, and MIT researchers as an example of how even this deepest, foundational realm is becoming amenable to AI studies. Once again a grand ecosmic endeavor seems to be its own internal self-description, so that perhaps a sapiensphere able to do this can begin a new intentional creation from here.

Learning the Hamiltonian that describes interactions in a quantum system is an important task in both condensed-matter physics and the verification of quantum technologies. Previously, the best methods for quantum Hamiltonian learning with provable performance required measurements that scaled exponentially with the number of particles. Here we prove that only a polynomial number of local measurements on the thermal state of a quantum system are necessary for accurately learning its Hamiltonian. The framework we introduce provides a theoretical foundation for applying machine learning techniques to achieve a long-sought goal in quantum statistical learning. (Abstract excerpt)

The Hamiltonian function, also called the Hamiltonian, is a mathematical formulation introduced in 1835 by Sir William Rowan Hamilton to express the rate of change in the condition of a dynamic physical system, such as a set of moving particles.
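
In symbols, a standard textbook form (added here for orientation, not drawn from the cited paper): for generalized positions q_i and momenta p_i,

H(q, p) = T(p) + V(q), \qquad
\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q_i}

where T is the kinetic and V the potential energy; the partial derivatives give each coordinate's rate of change, as in Hamilton's original formulation.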

Aragon-Calvo, Miguel. Classifying the Large Scale Structure of the Universe with Deep Neural Networks. arXiv:1804.00816. We cite this posting by a National Autonomous University of Mexico astronomer as an example of how such novel brain-based methods are being applied to quantify even these celestial reaches. By this work and many similar entries, might our Earthwise sapiensphere be perceived as collectively beginning to quantify the whole multiverse? Could it also allude to a sense of an affine nature as a cerebral, connectome cosmos? See also, e.g., An Algorithm for the Rotation Count of Pulsars at arXiv:1802.0721.
