Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Tibbetts, John. The Frontiers of Artificial Intelligence. BioScience. 68/1, 2018. A science writer provides a good survey of how deep learning AI capabilities are lately being availed to much benefit worldwide in agricultural crop surveys, medical diagnostic image analysis, flora and fauna conservation, and more. Of course we need to be wary and careful, but we ought also to appreciate its many advantages.

Tzachor, Asaf, et al. Artificial Intelligence in a Crisis Needs Ethics with Urgency. Nature Machine Intelligence. 2/365, 2020. Cambridge University scholars at the Centre for the Study of Existential Risk and the Centre for the Future of Intelligence weigh in by saying that while AI can well serve the COVID pandemic response by analyzing its spread, tracking movements and so on, its use needs to be scrutinized and guided by respectful methods.

Artificial intelligence tools can help save lives in a pandemic. However, the need to implement technological solutions rapidly leads to challenging ethical issues. We need new approaches for ethics with urgency, to ensure AI can be safely and beneficially used in the COVID-19 response and beyond. (Abstract)

Vaidya, Satyarth, et al. Brief Review of Computational Intelligence Algorithms. arXiv:1901.00983. Birla Institute of Technology and Science, Pilani Campus, Dubai computer scientists survey a wide array of brain-based and nature-inspired algorithmic methods, and show how they are finding service in far-afield domains from geology to cerebral phenomena.

Computational Intelligence algorithms have been found to deliver near optimal solutions. In this paper we propose a new hierarchy which classifies algorithms based on their sources of inspiration. The algorithms have two broad domains namely modeling of human mind and nature inspired intelligence. Algorithms of Modeling of human mind take their motivation from the manner in which humans perceive and deal with information. Similarly algorithms of nature inspired intelligence are based on ordinary phenomenon occurring in nature. The latter has further been broken into swarm intelligence, geosciences and artificial immune system. (Abstract)
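
As a concrete instance of the swarm intelligence branch named in this taxonomy, here is a minimal particle swarm optimization sketch in Python; the test function, bounds and coefficient values are our own illustrative choices, not drawn from the paper.

# Minimal particle swarm optimization (PSO), a canonical swarm-intelligence
# algorithm: particles move through parameter space, pulled toward their own
# best-found position and the swarm's best-found position.
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))   # positions
    v = np.zeros_like(x)                          # velocities
    pbest = x.copy()                              # each particle's best position
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()      # swarm's best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Example: minimize the sphere function; the optimum is the origin.
best_x, best_val = pso(lambda p: float(np.sum(p ** 2)))
print(best_x, best_val)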

VanRullen, Rufin and Ryota Kanai. Deep Learning and the Global Workspace Theory. Trends in Neurosciences. June, 2021. CNRS France and University of Toulouse neuroscholars propose to avail this vital cognitive feature of our cerebral facility so to achieve a more brain-based AI. In a general sense, as described below, it is a workspace where an active array of data, thoughts, facts and so on is gathered for perusal and consideration.

Recent advances in deep learning have allowed artificial intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic, and cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. The Global Workspace Theory (GWT) refers to a large-scale system which integrates and distributes information among networks of specialized modules to create higher-level forms of cognition and awareness. Accordingly, we propose that implementations of this theory ought to be availed for AI using deep-learning techniques. (Abstract excerpt)

Global workspace theory (GWT) is a cognitive architecture that is meant to account qualitatively for a large set of matched pairs of conscious and unconscious processes. GWT resembles the concept of working memory, and corresponds to the inner domain of inner speech and visual imagery in which we carry on the narrative of our lives. (Wikipedia)
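
To make the proposed architecture tangible, a toy Python sketch of the scheme described above follows: specialized modules project into a shared latent workspace, one content wins access, and the result is broadcast back to all modules. This is only our minimal illustration, assuming random linear encoders and a norm-based salience score, not VanRullen and Kanai's implementation.

# Toy global-workspace loop: specialized modules project their signals into a
# shared latent space; the strongest content wins access to the workspace and
# is broadcast back to every module. Dimensions and scoring are illustrative.
import numpy as np

rng = np.random.default_rng(1)
D = 16  # dimensionality of the shared workspace

class Module:
    def __init__(self, in_dim):
        self.encode = rng.normal(size=(in_dim, D))   # module -> workspace
        self.decode = rng.normal(size=(D, in_dim))   # workspace -> module

    def to_workspace(self, signal):
        return signal @ self.encode

    def from_workspace(self, latent):
        return latent @ self.decode

modules = {"vision": Module(32), "audio": Module(8), "language": Module(64)}
inputs = {name: rng.normal(size=m.encode.shape[0]) for name, m in modules.items()}

# Competition: the candidate with the largest activation norm (a stand-in for
# salience) gains access to the global workspace.
candidates = {n: modules[n].to_workspace(s) for n, s in inputs.items()}
winner = max(candidates, key=lambda n: np.linalg.norm(candidates[n]))
workspace = candidates[winner]

# Broadcast: every module receives the winning content in its own code.
broadcast = {n: m.from_workspace(workspace) for n, m in modules.items()}
print("workspace winner:", winner)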

Wang, Yu, et al. Can AI Understand Our Universe? Test of Fine-Tuning GPT by Astrophysical Data. arXiv:2404.10019. Eight computational astroscientists posted in Italy, Armenia, China and Iran evaluate and compare guided applications of these current AI frontiers to data-intensive celestial studies. In their careful procedure they do find a valuable enhancement, to an extent defining a new spiral phase of planetary human-computer teamwork.

As astro-researchers, we are curious about whether scientific data can be correctly analyzed by large language models (LLMs) and yield accurate physics. In this article, we fine-tune the generative pre-trained transformer (GPT) model from the observations of galaxies, quasars, stars, gamma-ray bursts (GRBs), and black holes (BHs), whereby our model is able to classify astrophysical phenomena, distinguish between two types of GRBs, deduce the redshift of quasars, and estimate BH parameters. With the volume of multidisciplinary data and the advancement of AI technology, we look forward to a deeper comprehensive understanding of our universe. This article also shares some interesting thoughts on data collection and AI design. Using the approach of understanding the universe as a guideline, we propose a method of series expansion for AI, suggesting ways to train and control AI that is smarter than humans. (Abstract)
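
In the spirit of the fine-tuning procedure the authors describe, a brief sketch using the Hugging Face transformers and datasets libraries; the observations.csv file, the label set and all hyperparameters are hypothetical placeholders, not the paper's actual data or pipeline.

# Sketch of fine-tuning a GPT-style model to classify astrophysical records.
# observations.csv is assumed (hypothetically) to hold "text" and "label"
# columns, each text serializing an observation (fluxes, spectra, etc.).
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import load_dataset

labels = ["galaxy", "quasar", "star", "GRB"]           # illustrative classes
tok = AutoTokenizer.from_pretrained("gpt2")
tok.pad_token = tok.eos_token                           # GPT-2 has no pad token
model = AutoModelForSequenceClassification.from_pretrained(
    "gpt2", num_labels=len(labels))
model.config.pad_token_id = tok.pad_token_id

data = load_dataset("csv", data_files="observations.csv")   # hypothetical file
data = data.map(lambda r: tok(r["text"], truncation=True, padding="max_length",
                              max_length=128), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="astro-gpt", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data["train"],
)
trainer.train()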

Wason, Ritika. Deep Learning: Evolution and Expansion. Cognitive Systems Research. 52/701, 2018. A Bharati Vidyapeeth’s Institute of Computer Applications and Management, New Delhi professor of computer science provides a wide-ranging survey of this neural net based method since the 1980s, citing over 50 worldwide approaches to this day.

Wood, Charlie. How to Make the Universe Think for Us. Quanta. June 1, 2022. The insightful science writer follows up his Powerful "Machine Scientists" Distill the Laws of Physics article (May 10, search) by noting more ways that this Earthificial phase is tailoring deep neural network algorithms so they can seek out and process data, reiterate, and come up with findings. A lead citation is Deep Physical Neural Networks Trained with Backpropagation by Logan Wright, et al (MIT, Cornell) in Nature (601/550, 2022), followed by Ben Scellier (Zurich) whose collegial entry is Agnostic Physics-Driven Deep Learning at arXiv:2205.15021 (Abstracts below). Contributions by Julie Grollier, Florian Marquardt, Sam Dillavou (Decentralized, Physics-Driven Learning at 2108.00275) and others are noted as this Earth-Human-Earth mission of universal edification goes forward.

Physicists are building neural networks out of vibrations, voltages and lasers, arguing that the future of computing lies in exploiting the universe’s complex physical behaviors. (CW)

Our approach allows us to train deep physical neural networks made from controllable physical systems, even when the layers lack a mathematical isomorphism to conventional artificial neural networks. To show the universality of our approach, we train diverse physical neural networks based on optics, mechanics and electronics to perform audio and image classification tasks. Physics-aware training combines the scalability of backpropagation with the automatic mitigation of imperfections and noise achievable with in situ algorithms. (Wright)
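
A schematic of this physics-aware training idea in PyTorch: the forward pass runs through the (here simulated) physical layer, while the backward pass borrows gradients from a differentiable digital stand-in. The "hardware" function below is a noisy toy of our own devising, not the optical or mechanical systems of the paper.

# Schematic of physics-aware training: forward through the physical system,
# backward through a differentiable digital twin. The "hardware" here is just
# a nonlinear function plus noise standing in for a real physical layer.
import torch

def physical_layer(x, w):
    # Stand-in for the real apparatus: imperfect, noisy, non-differentiable.
    with torch.no_grad():
        return torch.tanh(x @ w) + 0.01 * torch.randn(x.shape[0], w.shape[1])

def digital_model(x, w):
    # Differentiable approximation of the hardware, used only for gradients.
    return torch.tanh(x @ w)

class PhysicsAware(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w):
        ctx.save_for_backward(x, w)
        return physical_layer(x, w)          # real (simulated) hardware output

    @staticmethod
    def backward(ctx, grad_out):
        x, w = ctx.saved_tensors
        with torch.enable_grad():
            x_, w_ = x.detach().requires_grad_(), w.detach().requires_grad_()
            y = digital_model(x_, w_)        # gradients come from the twin
            gx, gw = torch.autograd.grad(y, (x_, w_), grad_out)
        return gx, gw

w = torch.randn(4, 2, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
x, target = torch.randn(8, 4), torch.randn(8, 2)
for _ in range(100):
    opt.zero_grad()
    loss = ((PhysicsAware.apply(x, w) - target) ** 2).mean()
    loss.backward()
    opt.step()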

This work establishes that a physical system can perform statistical learning via an Agnostic Equilibrium Propagation procedure that combines energy minimization, homeostatic control, and nudging towards a correct response. The procedure is based only on external manipulations, and produces a stochastic gradient descent without explicit gradient computations. This method considerably widens the range of potential hardware for statistical learning to any system with enough controllable parameters. (Scellier)
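
Since Scellier's agnostic procedure learns only through external nudges of controllable parameters, a loose analogue can be sketched with simultaneous perturbation (SPSA), which estimates a descent direction purely from black-box loss evaluations; note that SPSA is our stand-in technique here, not the paper's equilibrium propagation, and the system and loss are illustrative.

# Gradient-free training of a black-box "physical" system via simultaneous
# perturbation (SPSA): nudge all parameters at once, compare the two losses,
# and descend along the estimated gradient, with no explicit gradient math.
import numpy as np

rng = np.random.default_rng(0)

def system(theta, x):
    # Black box: we may set parameters and read outputs, nothing more.
    return np.tanh(x @ theta)

def loss(theta, x, y):
    return float(np.mean((system(theta, x) - y) ** 2))

theta = rng.normal(size=(5,))
x, y = rng.normal(size=(64, 5)), rng.normal(size=(64,))
lr, eps = 0.1, 1e-3
for _ in range(500):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # random nudge direction
    g_hat = (loss(theta + eps * delta, x, y) -
             loss(theta - eps * delta, x, y)) / (2 * eps) * delta
    theta -= lr * g_hat                                  # stochastic descent
print(loss(theta, x, y))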

Wood, Charlie. Powerful "Machine Scientists" Distill the Laws of Physics from Raw Data. Quanta. May 10, 2022. A science writer deftly realizes that a historic shift is underway as current AI methods begin to empower a revolutionary advance in how research studies can be carried out. The report goes on to survey diverse instances, with an exemplary focus on the work of Spanish cell biologists Marta Sales-Pardo and Roger Guimera. The entry brings together so many uses that it has led me to rename and expand this Science Spiral section. In this regard, we will be posting a plethora of papers from cosmological to quantum domains, and all in between. Wood's entry begins with late 20th century origins and goes on to an array of algorithms which can handle the vast data inputs that now flood in.

For instance, NYU climate physicist Laure Zanna models ocean turbulence, and Flatiron Institute and University of Washington programmers are busy ginning up equations. At Columbia U., Hod Lipson’s group finds agile software for better neural net performance, while Max Tegmark at MIT adds other versions (search both herein). For Sales-Pardo and Guimera’s cellular studies see Regulation of Cell Cycle Progression by Marina Uroz, et al in Nature Cell Biology (20/646, 2018) and A Bayesian Machine Scientist to Aid in the Solution of Challenging Scientific Problems by R. Guimera, et al in Science Advances (6/5, 2020).

Closed-form, interpretable mathematical models have been instrumental for advancing our understanding of the world. With the data revolution, we may now be in a position to uncover new models for many systems from physics to the social sciences. However, to deal with increasing amounts of data, we need “machine scientists” that are able to extract these models automatically from data. Here, we introduce a Bayesian machine scientist, which considers the plausibility of models by iteratively learning from a large empirical corpus of mathematical expressions. We show that this approach uncovers accurate models for synthetic and real data and provides predictions that are more accurate than other nonparametric methods. (Excerpt)
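
To give a flavor of how such a machine scientist weighs candidate formulas, a toy Python loop follows: a small library of expressions is scored by least-squares fit plus a complexity penalty, a crude stand-in for the Bayesian plausibility that the authors sample over a learned corpus of expressions. The candidate library and hidden law are our own illustrative choices.

# Toy "machine scientist": score a library of candidate formulas by fit plus
# a complexity penalty, loosely echoing the Bayesian plausibility weighting
# over expressions described in the paper.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0.1, 5, 200)
y = 2.0 * x * np.log(x) + 0.05 * rng.normal(size=x.size)  # hidden law + noise

# Each candidate: (printable form, parameter count, design-matrix builder).
candidates = [
    ("a*x",        1, lambda x: x[:, None]),
    ("a*x**2",     1, lambda x: (x ** 2)[:, None]),
    ("a*log(x)",   1, lambda x: np.log(x)[:, None]),
    ("a*x*log(x)", 1, lambda x: (x * np.log(x))[:, None]),
    ("a*x + b",    2, lambda x: np.stack([x, np.ones_like(x)], axis=1)),
]

def fit_and_score(k, build, x, y):
    # Candidates are linear in parameters, so least squares fits them exactly;
    # a BIC-like term then trades goodness of fit against formula complexity.
    A = build(x)
    p, *_ = np.linalg.lstsq(A, y, rcond=None)
    sse = float(np.sum((A @ p - y) ** 2))
    n = len(x)
    return n * np.log(sse / n) + k * np.log(n), p

scored = [fit_and_score(k, b, x, y) + (name,) for name, k, b in candidates]
scored.sort(key=lambda t: t[0])
print("best model:", scored[0][2], "params:", scored[0][1])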

Young, Tom, et al. Recent Trends in Deep Learning Based Natural Language Processing. IEEE Computational Intelligence Magazine. 13/3, 2018. Beijing Institute of Technology and Nanyang Technological University, Singapore computer scientists present a review tutorial about the state of the fruitful avail of neural net methods to parse linguistic textual writings. An affinity between recurrent networks and recursive script, and also speech, appears to be innately evident. Another commonality is that both cerebral and corpora modes involve prior memories of information and knowledge. See also Identifying DNA Methylation Modules Associated with Cancer by Probabilistic Evolutionary Learning in this issue, and earlier A Primer on Neural Network Models for Natural Language Processing by Yoav Goldberg in the Journal of Artificial Intelligence Research (57/345, 2016).

Deep learning methods employ multiple processing layers to learn hierarchical representations of data, and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning related models and methods that have been employed for numerous NLP tasks and provide a walk-through of their evolution. We also summarize, compare and contrast the various models and put forward a detailed understanding of the past, present and future of deep learning in NLP. (Abstract)
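
As one minimal instance of the recurrent models this review surveys, here is a PyTorch LSTM sentence classifier; the vocabulary size, dimensions and two-class setup are illustrative assumptions of ours, not a model from the paper.

# Minimal recurrent text classifier of the kind surveyed in the review:
# embed token ids, run an LSTM over the sequence, classify from the final
# hidden state. Vocabulary size and dimensions are illustrative.
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab=10000, emb=64, hidden=128, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, token_ids):            # (batch, seq_len) of token ids
        x = self.embed(token_ids)            # (batch, seq_len, emb)
        _, (h_n, _) = self.lstm(x)           # h_n: (1, batch, hidden)
        return self.head(h_n[-1])            # (batch, classes) logits

model = LSTMClassifier()
tokens = torch.randint(0, 10000, (4, 20))    # a fake batch of 4 sentences
logits = model(tokens)
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()                              # end-to-end trainable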

Yue, Tianwei and Haohan Wang. Deep Learning for Genomics: A Concise Overview. arXiv:1802.00810. Xi’an Jiaotong University and Carnegie Mellon University scientists post an invited chapter for the 2018 Springer edition Handbook of Deep Learning Applications. We record it as an example of how so many natural and social realms are now being treated by way of an organic neural network-like cognitive process.

Advancements in genomic research such as high-throughput sequencing techniques have driven modern genomic studies into "big data" disciplines. This data explosion is constantly challenging conventional methods used in genomics. In parallel with the urgent demand for robust algorithms, deep learning has succeeded in a variety of fields such as vision, speech, and text processing. Yet genomics entails unique challenges since we are expecting from deep learning a superhuman intelligence that explores beyond our knowledge to interpret the genome. In this paper, we briefly discuss the strengths of different models from a genomic perspective so as to fit each particular task with a proper deep architecture. (Abstract excerpts)
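
A common pattern in this genomics literature, sketched here on toy data: one-hot encode a DNA sequence and scan it with 1D convolutions whose learned filters act as motif detectors. The layer sizes and binary label are our assumptions, not a model from the chapter.

# Deep-genomics motif: one-hot encode DNA and scan it with 1D convolutions,
# whose learned filters behave like sequence-motif detectors.
import torch
import torch.nn as nn

BASES = "ACGT"

def one_hot(seq):
    # (4, len) channel-first encoding expected by Conv1d.
    t = torch.zeros(4, len(seq))
    for i, b in enumerate(seq):
        t[BASES.index(b), i] = 1.0
    return t

model = nn.Sequential(
    nn.Conv1d(4, 32, kernel_size=8),   # 32 motif-detecting filters of width 8
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),           # strongest motif match along the sequence
    nn.Flatten(),
    nn.Linear(32, 1),                  # e.g. logit for a binding-site label
)

seq = "ACGT" * 50                      # toy 200-base sequence
x = one_hot(seq).unsqueeze(0)          # (1, 4, 200)
prob = torch.sigmoid(model(x))
print(prob.item())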

Zhu, Jun, et al. Big Learning with Bayesian Methods. National Science Review. Online May, 2017. In this Oxford Academic journal of technical advances from China, as its AI programs intensify, State Key Lab for Intelligent Technology and Systems, Tsinghua University computer scientists consider this iterative method of winnowing uncertainties and probabilities, often with massive data input, so as to reach sufficiently credible answers unto knowledge.

The explosive growth in data volume and the availability of cheap computing resources have sparked increasing interest in Big learning, an emerging subfield that studies scalable machine learning algorithms, systems and applications with Big Data. Bayesian methods represent one important class of statistical methods for machine learning, with substantial recent developments on adaptive, flexible and scalable Bayesian learning. This article provides a survey of the recent advances in Big learning with Bayesian methods, termed Big Bayesian Learning, including non-parametric Bayesian methods for adaptively inferring model complexity, regularized Bayesian inference for improving the flexibility via posterior regularization, and scalable algorithms and systems based on stochastic subsampling and distributed computing for dealing with large-scale applications. We also provide various new perspectives on the large-scale Bayesian modeling and inference. (Abstract)
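
One widely used member of the stochastic-subsampling family surveyed here is stochastic gradient Langevin dynamics (SGLD); a minimal sketch for Bayesian linear regression follows, with illustrative data, a fixed step size, and a standard normal prior as our own choices.

# Stochastic gradient Langevin dynamics (SGLD): minibatch gradients of the
# log-posterior plus injected Gaussian noise yield approximate posterior
# samples, scaling Bayesian inference to large datasets by subsampling.
import numpy as np

rng = np.random.default_rng(0)
N, D = 10000, 3
X = rng.normal(size=(N, D))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.5 * rng.normal(size=N)     # noise sd 0.5, variance 0.25

w = np.zeros(D)
samples, batch, eps = [], 100, 1e-5           # fixed step size for simplicity
for t in range(5000):
    idx = rng.integers(0, N, batch)           # random minibatch
    # Gradient of log-posterior: N(0, I) prior term plus the likelihood
    # term rescaled by N/batch to account for subsampling.
    grad = -w + (N / batch) * X[idx].T @ (y[idx] - X[idx] @ w) / 0.25
    w = w + 0.5 * eps * grad + np.sqrt(eps) * rng.normal(size=D)
    if t > 1000:                              # discard burn-in
        samples.append(w.copy())
print("posterior mean:", np.mean(samples, axis=0))  # should approach w_true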
