Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Drori, Iddo. The Science of Deep Learning. Cambridge: Cambridge University Press, 2022. The volume is a current survey which covers a wide range of the field's foundations, architectures, and applications.

The book begins by covering the foundations of deep learning, followed by key learning architectures. It includes topics such as Transformers, graph neural networks, variational autoencoders, and deep reinforcement learning, with a broad range of applications. An accompanying website provides complementary code and hundreds of exercises with solutions.

Iddo Drori is a faculty member in Computer Science at Boston University, and an adjunct professor at Columbia University. He was a lecturer at MIT EECS, and at Cornell University in operations research and information engineering.

Dufourq, Emmanuel and Bruce Bassett. EDEN: Evolutionary Deep Networks for Efficient Machine Learning. arXiv:1709.09161. University of Cape Town and African Institute for Mathematical Sciences theorists add a temporal depth to neural net computations by informing and integrating them with evolutionary phenomena. By turns, life’s quickening emergence might be likened to a grand educative endeavor. See also Evolving Deep Neural Networks at arXiv:1703.00548 for a companion effort.

Deep neural networks continue to show improved performance with increasing depth, an encouraging trend that implies an explosion in the possible permutations of network architectures and hyperparameters for which there is little intuitive guidance. To address this increasing complexity, we propose Evolutionary DEep Networks (EDEN), a computationally efficient neuro-evolutionary algorithm which interfaces to any deep neural network platform, such as TensorFlow. Evaluation of EDEN across seven image and sentiment classification datasets shows that it reliably finds good networks -- and in three cases achieves state-of-the-art results -- even on a single GPU, in just 6-24 hours. Our study provides a first attempt at applying neuro-evolution to the creation of 1D convolutional networks for sentiment analysis including the optimisation of the embedding layer. (Abstract)
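EDEN's own implementation is not reproduced here, but the neuro-evolutionary loop the abstract describes — evolve candidate network architectures and hyperparameters, score each by how well it trains, keep the fittest — can be sketched minimally as follows. All names, the genome encoding, and the toy fitness function are illustrative stand-ins (a real run would train each candidate in TensorFlow and return its validation accuracy), not the authors' algorithm.

```python
import random

random.seed(0)

# A genome encodes one candidate network: hidden-layer widths plus a learning
# rate. (Hypothetical encoding; EDEN's actual chromosome is richer.)
def random_genome():
    depth = random.randint(1, 4)
    return {"layers": [random.choice([16, 32, 64, 128]) for _ in range(depth)],
            "lr": random.choice([1e-4, 1e-3, 1e-2])}

def fitness(genome):
    # Stand-in for training the network and returning validation accuracy.
    # Here: a toy score that mildly rewards wider layers and a moderate rate.
    return sum(genome["layers"]) / (100.0 * len(genome["layers"])) \
        - abs(genome["lr"] - 1e-3)

def mutate(genome):
    # Copy the parent and re-draw one layer width.
    child = {"layers": list(genome["layers"]), "lr": genome["lr"]}
    i = random.randrange(len(child["layers"]))
    child["layers"][i] = random.choice([16, 32, 64, 128])
    return child

population = [random_genome() for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # selection (elitism)
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(5)]    # variation

best = max(population, key=fitness)
print(best)
```

Because the survivors are carried over unchanged each generation, the best genome found can only improve; the expensive step in practice is the fitness call, which is why EDEN's single-GPU, 6-24 hour budget is notable.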

Dunjko, Vedran and Hans Briegel. Machine Learning & Artificial Intelligence in the Quantum Domain. Reports on Progress in Physics. 81/7, 2018. University of Innsbruck physicists scope out how these dual approaches may cross-inform and merge so as to be fruitfully applied to this deepest frontier. See also, for example, Quantum Neural Network States by Zhih-Ahn Jia, et al at arXiv:1808.10601.

Quantum information technologies and intelligent learning systems are both emergent technologies that are likely to have a transformative impact on our society. In a growing body of recent work, researchers have been probing the question of the extent to which these fields can indeed learn and benefit from each other. Quantum ML explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Beyond the topics of mutual enhancement—exploring what ML/AI can do for quantum physics and vice versa—researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. (Abstract excerpts)

Farisco, Michele, et al. Is artificial consciousness achievable? Lessons from the human brain. arXiv:2405.04540. We cite these neuroscience considerations of a potential AI sentience by Michele Farisco, Uppsala University; Kathinka Evers, Biology and Molecular Genetics Institute, Italy; and Jean-Pierre Changeux, Institut Pasteur, Paris, for themselves and because the third author is a renowned octogenarian authority (search).

We consider the question of developing artificial consciousness from an evolutionary perspective, taking the sentient human brain as a reference. Several structural and functional features that appear to reach human-like complex awareness are identified which AI research needs to take into account. Even if AI is limited in its ability to emulate human consciousness for both intrinsic (structural and architectural) and extrinsic (scientific and technological knowledge) reasons, taking inspiration from cerebral attributes is a strategy towards perceptive AI. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness. In regard, we propose to specify what is common and what differs in AI conscious processing from our full human experience. (Abstract)

Frank, Michael. Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology. 2/6, 2023. A Stanford University child psychologist offers another recognition of an apparent intrinsic affinity between such ChatGPT resources and how children achieve literacy and factual comprehension. He then recommends that an integrative accord between the two general approaches would be beneficial. See also Variability and Consistency in Early Language Learning: The Wordbank Project by MF and colleagues (MIT Press, 2021).

Large language models show remarkable capacities, but it is unclear what abstractions support their behaviour. Methods from developmental psychology can help researchers to understand the representations used by these models, complementing standard computational approaches — and perhaps leading to insights about the nature of mind.

Gencaga, Deniz. Information-Theoretic Approaches in Deep Learning. Entropy. August, 2018. An Antalya Bilim University, Turkey informatics engineer proposes a special issue with this title. It has a June 30, 2019 closing date for submissions.

Deep Learning (DL) has revolutionized machine learning, especially in the last decade. As a benefit of this unprecedented development, we are capable of working with very large Neural Networks (NNs), composed of multiple layers (Deep Neural Networks), in many applications, such as object recognition-detection, speech recognition and natural language processing. Although many Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based algorithms have been proposed, a comprehensive theoretical understanding of DNNs remains a major research area. In this Special Issue, we would like to collect papers focusing on both the theory and applications of information-theoretic approaches, such as Mutual Information. The application areas are diverse and include object tracking/detection, speech recognition, natural language processing, neuroscience, bioinformatics, engineering, finance, astronomy, and Earth and space sciences.
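The mutual information the call for papers invokes has a concrete, computable form for discrete variables: I(X;Y) = Σ p(x,y) log2[ p(x,y) / (p(x)p(y)) ]. A minimal plug-in (histogram) estimator, offered only as an illustration of the quantity these approaches analyze, not as any method from the special issue:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts for X
    py = Counter(ys)             # marginal counts for Y
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A variable carries full information about itself: I(X;X) = H(X),
# which is 1 bit for a fair binary variable.
xs = [0, 1, 0, 1, 0, 1, 0, 1]
print(mutual_information(xs, xs))        # → 1.0
print(mutual_information(xs, [0] * 8))   # → 0.0 (a constant tells us nothing)
```

In information-theoretic analyses of deep networks, this same quantity is estimated between layer activations and inputs or labels to trace how representations compress and retain task-relevant information.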

George, Daniel and Eliu Antonio Huerta. Deep Neural Networks to Enable Real-Time Multimessenger Astrophysics. arXiv:1701.00008. Along with the pervasive employment of Bayesian (search) probabilities in mid-2010s science, as these University of Illinois astronomers explain, another incisive approach is the use of artificial neural nets as a generic self-organizing complex system of universal application. See also in this cosmic realm Deep Learning for Studies of Galaxy Morphology (1701.05917), and Star-Galaxy Classification using Deep Convolutional Neural Networks (1608.04369). (Spiral Science)

Gillet, Nicolas, et al. Deep Learning from 21cm Images of the Cosmic Dawn. arXiv:1805.02699. We cite as an example of how astrophysicists from Italy, Australia, the USA, and Canada are making use of brain-like neural networks to analyze patterns across the celestial reaches and depth of temporal origins.

Golkov, Vladimir, et al. 3D Deep Learning for Biological Function Prediction from Physical Fields. arXiv:1704.04039. As this turn to an organic AI actively advances, along with an evolutionary setting, a further rooting may be established into physical foundations.

Gomez-Vargas, Isidro, et al. Neural Networks Optimized by Genetic Algorithms in Cosmology. arXiv:2209.02685. We cite this entry by Universidad Nacional Autónoma de México astronomers to illustrate how collaborative scientific frontiers are now turning to and can readily apply a deep computational intelligence to study and quantify, so it seems, every sidereal breadth and depth. As the quotes say, the authors here meld ANNs with GA (aka evolutionary) capabilities to achieve more insightful synthesis.

Artificial neural networks have been successfully applied in cosmological studies over the past decade, due to their ability of modeling large datasets and complex nonlinear functions. However, their full use needs hyperparameters to be carefully selected, which is a problem. In this paper, we propose to take further advantage of genetic algorithms (GA) to help reach optimum values. As a proof, we analyze Type Ia Supernovae, dynamic cosmic content, and the Sloan Digital Sky Survey and found that GAs provide a considerable improvement. (Excerpt)

Genetic algorithms, by themselves, have been investigated in cosmology for quasar parameterisation, nonparametric reconstructions, string theory analysis, to name just a few. In other research areas, there are various cases in which artificial neural networks and genetic algorithms have been applied together; for example, in medicine, seismology and in string theory, among others. (2)
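The pairing the authors describe — a GA searching the hyperparameter space of an ANN — can be sketched in miniature. Here the "validation loss" is a smooth surrogate with a known minimum standing in for actually training a network on, say, supernova data; the encoding (learning rate plus a batch-size exponent), the operators, and all numbers are illustrative assumptions, not the paper's method:

```python
import random

random.seed(1)

# Each individual is a hyperparameter pair: (learning rate, batch-size exponent).
def val_loss(lr, bs_exp):
    # Surrogate for "train the ANN, return validation error";
    # its minimum sits at lr = 0.003, bs_exp = 6.
    return (lr - 0.003) ** 2 * 1e5 + (bs_exp - 6) ** 2 * 0.1

def crossover(a, b):
    # Blend the continuous gene, inherit the discrete one from either parent.
    return ((a[0] + b[0]) / 2, random.choice([a[1], b[1]]))

def mutate(ind):
    lr, bs = ind
    lr = max(1e-5, lr + random.gauss(0, 0.001))            # jitter the rate
    bs = min(9, max(3, bs + random.choice([-1, 0, 1])))    # step the exponent
    return (lr, bs)

pop = [(random.uniform(1e-4, 1e-2), random.randint(3, 9)) for _ in range(12)]
for _ in range(40):
    pop.sort(key=lambda ind: val_loss(*ind))
    parents = pop[:4]                                      # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(8)]

best = min(pop, key=lambda ind: val_loss(*ind))
print(best)   # best (learning rate, batch-size exponent) found
```

The appeal in cosmology is that the GA needs only the scalar loss from each trained network, no gradients through the hyperparameters, so the same loop wraps around any pipeline from supernova light curves to SDSS catalogs.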

Goodfellow, Ian, et al. Deep Learning. Cambridge: MIT Press, 2016. Seen as a most comprehensive introduction, pioneer theorists Goodfellow, Google Brain, with Yoshua Bengio and Aaron Courville, University of Montreal, provide a technical text on this broad, multi-faceted revolution from inept machine methods to animate algorithms akin to the way our brains learn and think.

Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. (Publisher)

Gopnik, Alison. An AI that Knows the World like Children Do. Scientific American. June, 2017. The UC Berkeley child psychologist and philosopher achieves a good synopsis of this recent turn to a viable computational intelligence by modeling it on actual human cognitive processes rather than on machines. The best teacher may then be how each of us were able to gain knowledge in the first place. Two paths to AI’s resurgence are thus identified as bottom-up deep iterative learning and top-down Bayesian probabilities.
