Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Twintelligent Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Frank, Michael. Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology. 2/6, 2023. A Stanford University child psychologist offers another recognition of an intrinsic affinity between ChatGPT-style resources and the way children achieve literacy and factual comprehension. He then recommends that an integrative accord between the two general approaches would be beneficial. See also Variability and Consistency in Early Language Learning: The Wordbank Project by MF and colleagues (MIT Press, 2021).

Large language models show remarkable capacities, but it is unclear what abstractions support their behaviour. Methods from developmental psychology can help researchers to understand the representations used by these models, complementing standard computational approaches — and perhaps leading to insights about the nature of mind.

Gencaga, Deniz. Information-Theoretic Approaches in Deep Learning. Entropy. August, 2018. An informatics engineer at Antalya Bilim University, Turkey proposes a special issue with this title, with a June 30, 2019 closing date for submissions.

Deep Learning (DL) has revolutionized machine learning, especially in the last decade. As a benefit of this unprecedented development, we are capable of working with very large Neural Networks (NNs), composed of multiple layers (Deep Neural Networks), in many applications, such as object recognition-detection, speech recognition and natural language processing. Although many Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based algorithms have been proposed, a comprehensive theoretical understanding of DNNs remains a major research area. In this Special Issue, we would like to collect papers focusing on both the theory and applications of information-theoretic approaches, such as Mutual Information. The application areas are diverse and include object tracking/detection, speech recognition, natural language processing, neuroscience, bioinformatics, engineering, finance, astronomy, and Earth and space sciences.
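The special issue's central quantity, mutual information, can be illustrated with a minimal sketch using the standard discrete plug-in estimator (the function name and toy data below are hypothetical, for illustration only):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Discrete mutual information I(X;Y) in bits, via the plug-in estimator."""
    n = len(xs)
    px = Counter(xs)                 # marginal counts of X
    py = Counter(ys)                 # marginal counts of Y
    pxy = Counter(zip(xs, ys))       # joint counts of (X, Y)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_xy = c / n
        mi += p_xy * math.log2(p_xy / ((px[x] / n) * (py[y] / n)))
    return mi

# A perfectly dependent binary pair carries 1 bit; an independent pair carries 0.
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # 1.0
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
```

Information-theoretic analyses of deep networks apply estimators of this kind to the joint statistics of layer activations and labels.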

George, Daniel and Eliu Antonio Huerta. Deep Neural Networks to Enable Real-Time Multimessenger Astrophysics. arXiv:1701.00008. Along with the pervasive employment of Bayesian (search) probabilities in mid-2010s science, as these University of Illinois astronomers explain, another incisive approach is the use of artificial neural nets as a generic self-organizing complex system of universal application. See also in this cosmic realm Deep Learning for Studies of Galaxy Morphology (1701.05917), and Star-Galaxy Classification using Deep Convolutional Neural Networks (1608.04369). (Spiral Science)

Gifford, Alessandro, et al. The Algonauts Project 2025 Challenge. arXiv:2501.00504. Freie Universität Berlin, Goethe Universität Frankfurt, Université de Montréal and MIT neuroscientists including Radoslaw Cichy describe an array of innovative AI ventures as a way to better understand how brains perform and may interface with computational media. An example would be Automating the Search for Artificial Life with Foundation Models at pub.sakana.ai/asal, second quote.


There is growing symbiosis between artificial and biological intelligence sciences: neural principles inspire new intelligent machines, which are in turn used to advance our theoretical understanding of the brain. Here we introduce the 2025 edition: How the Human Brain Makes Sense of Multimodal Movies. In collaboration with the Courtois Project on Neuronal Modelling, our aim is to bring forth a new generation of brain encoding models that generalize well by training them on large datasets of fMRI responses. (Excerpt)

Artificial Life (ALife) has not yet integrated foundation models (FMs), which presents an opportunity to move beyond manual design and trial-and-error to the discovery of lifelike simulations. The proposed approach, called Automated Search for Artificial Life (ASAL), (1) finds simulations that produce target phenomena, (2) generates temporally open-ended novelty, and (3) illuminates an entire space of interestingly diverse versions. A major result is finding novel Lenia and Boids lifeforms, as well as open-ended cellular automata. (Sakana AI, MIT)

A foundation model is a deep learning model that is trained on vast datasets so it can be applied across a wide range of use cases. Generative AI applications like Large Language Models are examples. (Wikipedia)

Gillet, Nicolas, et al. Deep Learning from 21cm Images of the Cosmic Dawn. arXiv:1805.02699. We cite as an example of how astrophysicists from Italy, Australia, the USA, and Canada are making use of brain-like neural networks to analyze patterns across the celestial reaches and depth of temporal origins.

Golkov, Vladimir, et al. 3D Deep Learning for Biological Function Prediction from Physical Fields. arXiv:1704.04039. As this turn to an organic AI actively advances, along with an evolutionary setting, a further rooting may be established into physical foundations.

Gomez-Vargas, Isidro, et al. Neural Networks Optimized by Genetic Algorithms in Cosmology. arXiv:2209.02685. We cite this entry by Universidad Nacional Autónoma de México astronomers to illustrate how collaborative scientific frontiers are now turning to and can readily apply a deep computational intelligence to study and quantify, so it seems, every sidereal breadth and depth. As the quotes say, the authors here meld ANNs with GA (aka evolutionary) capabilities to achieve more insightful synthesis.

Artificial neural networks have been successfully applied in cosmological studies over the past decade, due to their ability to model large datasets and complex nonlinear functions. However, their full use requires hyperparameters to be carefully selected, which is a problem. In this paper, we propose to take further advantage of genetic algorithms (GA) to help reach optimum values. As a proof, we analyze Type Ia Supernovae, dynamic cosmic content, and the Sloan Digital Sky Survey and find that GAs provide a considerable improvement. (Excerpt)

Genetic algorithms, by themselves, have been investigated in cosmology for quasar parameterisation, nonparametric reconstructions and string theory analysis, to name just a few. In other research areas, there are various cases in which artificial neural networks and genetic algorithms have been applied together; for example, in medicine, seismology and string theory, among others. (2)
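The ANN-plus-GA pairing can be sketched in miniature. The toy genetic algorithm below evolves a single hyperparameter toward the minimum of a stand-in quadratic loss; in the paper itself the objective is a trained network's fit to cosmological data, so everything here (population size, mutation scale, the loss function) is an illustrative assumption:

```python
import random

random.seed(0)

def loss(lr):
    # Stand-in for a validation loss; minimized at lr = 0.01.
    return (lr - 0.01) ** 2

def evolve(pop_size=20, generations=30):
    # Each individual is one candidate hyperparameter value.
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[: pop_size // 2]           # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                  # crossover: blend two parents
            child += random.gauss(0, 0.01)       # mutation: small Gaussian jitter
            children.append(max(child, 0.0))
        pop = parents + children
    return min(pop, key=loss)

best = evolve()
print(best)  # converges close to 0.01
```

Because the fittest parents survive each generation unchanged, the best candidate found can only improve, which is why GA search is attractive when each loss evaluation (here trivial, in practice a full network training run) is expensive.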

Gonçalves, Bernardo. Passed the Turing Test: Living in Turing Futures. Intelligent Computing. Vol 3/Art 0102, 2024. We note this entry by a Center for Artificial Intelligence, University of São Paulo computer scientist for its content and for his perception that such Turing devices would likely be akin to youngsters as they assimilate their environs.

The world has seen the emergence of machines based on pretrained transformer models, noted for their ability to produce various types of content: text, images, audio, and synthetic data. Their intelligence grows as they learn from experience, and to ordinary people they can appear human-like in conversation. This means that they can pass the Turing test, and that we are now living in one of the futures where machines can pass for what they are not. However, the learning machines that Turing imagined would pass his imitation tests were based on the low-energy human cortex: they would be raised like human children and naturally learn the ability to deceive an observer. (Excerpt)

Bernardo Gonçalves: For the past six years, my research has focused on the future of AI as envisioned by Alan Turing, the foundations and ethics of AI, and the future of machines in society and nature. I have 12+ years of R&D experience in AI and data-centric systems in academia and industry. I am a Researcher at the Center for AI (C4AI) of the University of São Paulo and a Visiting Fellow in History and Philosophy of Science at Cambridge University.

Goodfellow, Ian, et al. Deep Learning. Cambridge: MIT Press, 2016. Seen as a most comprehensive introduction, pioneer theorists Goodfellow, of Google Brain, with Yoshua Bengio and Aaron Courville, University of Montreal, provide a technical text on this broad, multi-faceted revolution from inept machine methods to animate algorithms akin to the way our brains learn and think.

Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. (Publisher)
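The book's starting point, a deep feedforward network, can be shown in a few lines: two affine layers with a ReLU nonlinearity between them. The weights below are arbitrary toy values, chosen only to make the computation easy to follow:

```python
def relu(v):
    # Elementwise rectified linear unit: max(0, x).
    return [max(0.0, x) for x in v]

def linear(W, b, x):
    # Affine map: y_i = sum_j W[i][j] * x[j] + b[i]
    return [sum(w * xj for w, xj in zip(row, x)) + bi for row, bi in zip(W, b)]

def forward(x):
    # f(x) = W2 * relu(W1 x + b1) + b2  -- one hidden layer of width 2.
    W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.0]
    W2, b2 = [[1.0, 1.0]], [0.0]
    h = relu(linear(W1, b1, x))
    return linear(W2, b2, h)

print(forward([2.0, 1.0]))  # [2.5]
```

Training such a network means adjusting W1, b1, W2, b2 by gradient descent on a loss; the book's chapters on regularization and optimization address exactly that step.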

Gopnik, Alison. An AI that Knows the World like Children Do. Scientific American. June, 2017. The UC Berkeley child psychologist and philosopher offers a good synopsis of this recent turn to a viable computational intelligence by modeling it on actual human cognitive processes rather than machine routines. The best teacher may then be how each of us was able to gain knowledge in the first place. Two paths to AI's resurgence are thus identified: bottom-up deep iterative learning and top-down Bayesian probabilities.
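The top-down Bayesian path amounts to updating a prior belief with evidence via Bayes' rule; a one-rule sketch, with numbers that are purely hypothetical:

```python
def bayes_update(prior, likelihood, marginal):
    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
    return likelihood * prior / marginal

# A hypothesis starts at P(H) = 0.2; evidence three times likelier under H
# (P(E|H) = 0.9 vs P(E) = 0.3) triples the belief.
posterior = bayes_update(0.2, 0.9, 0.3)
print(posterior)  # ≈ 0.6
```

On Gopnik's account, children iterate exactly this kind of update as they test and revise their hypotheses about the world.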

Gottweis, Juraj, et al. Towards an AI co-scientist. arXiv:2502.18864. We cite this entry by thirty-four Google Cloud AI Research, Google DeepMind, Houston Methodist, Sequome, Imperial College London and Stanford University scholars as another 2025 version which advocates and describes an intentional, dedicated reciprocity between human persons and computational algorithms.

Scientific discovery advances by novel hypotheses that undergo experimental validation. Here we introduce an AI co-scientist as a multi-agent system built on Gemini 2.0 to help formulate research proposals based on prior evidence and aligned with human guidance. Key features are task execution for flexible scaling and a tournament evolution process. While general purpose, we focus on three biomedical areas: drug repurposing, novel target discovery, and anti-microbial resistance. Our positive results in each instance demonstrate the potential to augment biomedicine and scientific research and usher in an era of AI-empowered scientists. (Excerpt)

Graupe, Daniel. Deep Learning Neural Networks. Singapore: World Scientific, 2016. A senior University of Illinois electrical engineer provides a technical introduction to the basic concepts, scope and methods of the field, including back-propagation, convolutional and recurrent aspects and much more, along with many case studies.

Deep Learning Neural Networks is the fastest growing field in machine learning. It serves as a powerful computational tool for solving prediction, decision, diagnosis and detection problems based on a well-defined computational architecture. It has been successfully applied to a broad field of applications ranging from computer security, speech recognition, image and video recognition to industrial fault detection, medical diagnostics and finance. This work is intended for use as a one-semester graduate-level university text and as a textbook for research and development establishments in industry, medicine and financial research.
