Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Twintelligent Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

De Marzo, Giordano, et al. Emergence of Scale-Free Networks in Social Interactions among Large Language Models. arXiv:2312.06619. Senior theorists at Centro Ricerche Enrico Fermi, Rome and the Complexity Science Hub, Vienna (Luciano Pietronero, David Garcia) scope out how a working integration might be achieved between these conceptual domains (ABMs and LLMs), which are currently meshing with each other. Each has its own affinities, and their combination can result in novel features. This entry is also keyed to the special The Psychology of Collectives issue of Perspectives on Psychological Science for December 2023.

Scale-free networks are iconic examples of emergent behavior, such as online social media in which users can follow each other. By analyzing the interactions of many generative agents using GPT3.5-turbo as a language model, we show their ability not only to mimic human linguistic behavior but also to reproduce collective societal phenomena. We show how renaming agents allows the model to generate a range of realistic scale-free networks. (Excerpts)
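The growth of such follow networks can be sketched with a short preferential-attachment loop. This is only an illustrative stand-in for the paper's GPT3.5-turbo agent conversations, with function names and parameters of our own devising:

```python
import random
from collections import Counter

def simulate_follow_network(n_agents=2000, seed=0):
    """Grow a follow network in which each new agent preferentially
    follows already-popular agents -- a toy proxy for the imitative
    dynamics observed among conversing generative agents."""
    rng = random.Random(seed)
    followers = Counter({0: 1, 1: 1})   # agent id -> follower count
    targets = [0, 1]                    # one entry per follow-edge endpoint
    for new_agent in range(2, n_agents):
        choice = rng.choice(targets)    # popular agents are picked more often
        followers[choice] += 1
        followers[new_agent] += 0       # register the newcomer
        targets.extend([choice, new_agent])
    return followers

counts = simulate_follow_network()
# Heavy tail: a few hub agents gather many followers, most have almost none.
top = max(counts.values())
median = sorted(counts.values())[len(counts) // 2]
```

Plotting the follower counts on log-log axes would show the approximately power-law tail that marks a scale-free network.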

Deng, Dong-Ling, et al. Quantum Entanglement in Neural Network States. Physical Review X. 7/021021, 2017. University of Maryland, and Fudan University, Shanghai, theorists identify, develop and extol the practical affinity of neural network cognitive geometries and operational workings with quantum phase phenomena. On reflection, how might this application to a fundamental cosmic realm imply an intrinsic cerebral character and content? A step further, if global human beings can so readily plumb such depths, and intentionally apply these basic, principled methods, could a creative universe intend for this passage to our cognizance and continuance?

Machine learning, one of today’s most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. Our results uncover the unparalleled power of artificial neural networks in representing quantum many-body states regardless of how much entanglement they possess, which paves a novel way to bridge computer-science-based machine-learning techniques to outstanding quantum condensed-matter physics problems. (Abstract excerpts)
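The restricted-Boltzmann-machine wavefunction ansatz the authors analyze can be written down in a few lines. This is a minimal sketch of its standard form (visible biases a, hidden biases b, weights W), not the paper's own code:

```python
import math

def rbm_amplitude(spins, a, b, W):
    """Unnormalized amplitude psi(s) of an RBM wavefunction after the
    hidden units are traced out:
        psi(s) = exp(sum_i a_i s_i) * prod_j 2*cosh(b_j + sum_i W[i][j] s_i)
    spins: list of +1/-1 visible spin values."""
    visible = math.exp(sum(ai * si for ai, si in zip(a, spins)))
    hidden = 1.0
    for j in range(len(b)):
        theta = b[j] + sum(W[i][j] * spins[i] for i in range(len(spins)))
        hidden *= 2.0 * math.cosh(theta)
    return visible * hidden

# With all parameters zero, each hidden unit contributes a factor 2cosh(0)=2,
# so a 3-hidden-unit RBM gives amplitude 2**3 = 8 for any spin configuration.
amp = rbm_amplitude([1, -1], [0.0, 0.0], [0.0, 0.0, 0.0],
                    [[0.0] * 3, [0.0] * 3])
```

The entanglement results in the paper concern how much the weights W must couple distant visible units to encode highly entangled states.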

DiPaolo, Laura, et al. Active inference goes to school: the importance of active learning in the age of large language models. Philosophical Transactions of the Royal Society B. August, 2024. In an article for a Minds in movement: embodied cognition in the age of artificial intelligence issue, this entry by University of Sussex cognitive scientists including Axel Constant and Andy Clark is noted for its meld of embodied thinking with free energies, and also for a turn to educational approaches as an appropriate way to try to understand and manage these voluminous AI faculties. In specific regard, the widely used Montessori method (Maria Montessori, 1870-1952) is extensively reviewed as especially suitable because of its intrinsic open creativity, which engages and empowers children in group settings with hands-on activities. See also Differences in spatiotemporal brain network dynamics of Montessori and traditionally schooled students by Paola Zanchi, et al in npj Science of Learning (Vol. 9/Art. 45, 2024, herein).

Human learning often involves embodied interactions with the material world. But today this means an increasing amount of generative artificial intelligence content. Here we ask how to assimilate these resources into our educational practices. Our focus will be on approaches that foster exploration and interaction, such as the carefully organized settings of Montessori methods. We surmise that generative AI should be a natural feature in these learning environs, facilitating sequences of prediction error and enabling trajectories of self-correction. (Excerpt)
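The cycle of prediction error and self-correction invoked above can be caricatured in a few lines. This toy loop is our own illustration, not the authors' active-inference formalism:

```python
def self_correct(observations, lr=0.2):
    """Toy active-learning loop: predict, measure the prediction error
    against each new observation, and nudge the internal estimate to
    reduce that error on the next round."""
    estimate = 0.0
    errors = []
    for obs in observations:
        error = obs - estimate      # prediction error
        errors.append(abs(error))
        estimate += lr * error      # self-correction step
    return estimate, errors

# Repeated exposure to the same fact drives the error toward zero.
estimate, errors = self_correct([1.0] * 30)
```

The pedagogical point of the article maps onto the loop's structure: learning environments should supply observations that keep generating informative, correctable errors.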

Drori, Iddo. The Science of Deep Learning. Cambridge: Cambridge University Press, 2022. The volume is a current survey that covers a wide range of deep learning topics.

The book begins by covering the foundations of deep learning, followed by key learning architectures. It includes topics such as Transformers, graph neural networks, variational autoencoders, and deep reinforcement learning, with a broad range of applications. An accompanying website provides complementary code and hundreds of exercises with solutions.

Iddo Drori is a faculty member in Computer Science at Boston University, and adjunct Professor at Columbia University. He was a lecturer at MIT EECS, and at Cornell University in operations research and information engineering.

Dufourq, Emmanuel and Bruce Bassett. EDEN: Evolutionary Deep Networks for Efficient Machine Learning. arXiv:1709.09161. University of Cape Town, African Institute for Mathematical Sciences, theorists add a temporal depth to neural net computations by informing and integrating them with evolutionary phenomena. By turns, life's quickening emergence might be likened to a grand educative endeavor. See also Evolving Deep Neural Networks at arXiv:1703.00548 for a companion effort.

Deep neural networks continue to show improved performance with increasing depth, an encouraging trend that implies an explosion in the possible permutations of network architectures and hyperparameters for which there is little intuitive guidance. To address this increasing complexity, we propose Evolutionary DEep Networks (EDEN), a computationally efficient neuro-evolutionary algorithm which interfaces to any deep neural network platform, such as TensorFlow. Evaluation of EDEN across seven image and sentiment classification datasets shows that it reliably finds good networks -- and in three cases achieves state-of-the-art results -- even on a single GPU, in just 6-24 hours. Our study provides a first attempt at applying neuro-evolution to the creation of 1D convolutional networks for sentiment analysis including the optimisation of the embedding layer. (Abstract)
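A generic neuro-evolutionary loop of the kind EDEN instantiates might look as follows. The fitness and mutation functions below are toy stand-ins of our own (EDEN itself scores each candidate by briefly training it in TensorFlow):

```python
import random

def evolve(fitness, mutate, init, pop_size=12, generations=20, seed=1):
    """Keep a population of candidate network configurations, score each
    with a fitness function, and refill the population with mutated
    copies of the best performers."""
    rng = random.Random(seed)
    population = [init(rng) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        elite = scored[: pop_size // 3]          # keep the top third
        population = elite + [mutate(rng.choice(elite), rng)
                              for _ in range(pop_size - len(elite))]
    return max(population, key=fitness)

# Toy usage: evolve a (layers, units) pair toward a made-up sweet spot (4, 64).
best = evolve(
    fitness=lambda c: -abs(c[0] - 4) - abs(c[1] - 64) / 16,
    mutate=lambda c, rng: (max(1, c[0] + rng.choice([-1, 0, 1])),
                           max(8, c[1] + rng.choice([-8, 0, 8]))),
    init=lambda rng: (rng.randint(1, 8), rng.choice([16, 32, 64, 128])),
)
```

Because elites are carried over unchanged, the best fitness never degrades, which is why such searches can find good architectures in the 6-24 hour budgets the abstract reports.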

Dunjko, Vedran and Hans Briegel. Machine Learning & Artificial Intelligence in the Quantum Domain. Reports on Progress in Physics. 81/7, 2018. University of Innsbruck physicists scope out how these dual approaches may cross-inform and merge so as to be fruitfully applied to this deepest frontier. See also, for example, Quantum Neural Network States by Zhih-Ahn Jia, et al at arXiv:1808.10601.

Quantum information technologies and intelligent learning systems are both emergent technologies that are likely to have a transformative impact on our society. In a growing body of recent work, researchers have been probing the question of the extent to which these fields can indeed learn and benefit from each other. Quantum ML explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Beyond the topics of mutual enhancement—exploring what ML/AI can do for quantum physics and vice versa—researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. (Abstract excerpts)

Eacersall, Douglas, et al. The ETHICAL Framework for Responsible Generative AI Research Use. arXiv:2501.09021. Fifteen cultural scholars, mainly in Australia along with Canada, Malaysia and the Philippines, post a thorough cast of behavioral standards and regulations so as to ensure at this early stage that trustworthy results are achieved.

The rapid adoption of generative artificial intelligence (GenAI) presents both many opportunities and ethical issues that should be carefully navigated. This paper develops the ETHICAL framework as a practical guide for responsible GenAI use by way of seven key principles: Examine policies and guidelines, Think about social impacts, Harness understanding of the technology, Indicate use, Critically engage with outputs, Access secure versions, and Look at user agreements. (Excerpt)

The ETHICAL Framework presented in this article stands as a foundational resource for researchers navigating the ethical challenges associated with GenAI. While some guidelines exist, this framework progresses beyond awareness to practical action. The ETHICAL Framework explicitly equips researchers with actionable principles, providing clear guidance on ethical GenAI use in research, thereby supporting both integrity and impact. (17)

Farisco, Michele, et al. Is artificial consciousness achievable? Lessons from the human brain. arXiv:2405.04540. We cite these neuroscience considerations of a potential AI sentience by Michele Farisco, Uppsala University, Kathinka Evers, Biology and Molecular Genetics Institute, Italy, and Jean-Pierre Changeux, Institut Pasteur, Paris, for themselves and because the third author is a renowned octogenarian authority (search).

We consider the question of developing artificial consciousness from an evolutionary perspective, taking the sentient human brain as a reference. Several structural and functional features that appear necessary to reach human-like complex awareness are identified, which AI research needs to take into account. Even if AI is limited in its ability to emulate human consciousness for both intrinsic (structural and architectural) and extrinsic (scientific and technological knowledge) reasons, taking inspiration from cerebral attributes is a strategy towards perceptive AI. Therefore, we recommend neuroscience-inspired caution in talking about artificial consciousness. In this regard, we propose to specify what is common and what differs in AI conscious processing from our full human experience. (Abstract)

Frank, Michael. Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology. 2/6, 2023. A Stanford University child psychologist offers another recognition of an apparent intrinsic affinity between such ChatGPT resources and how children achieve literacy and factual comprehension. He then recommends that an integrative accord between the two general approaches would be beneficial. See also Variability and Consistency in Early Language Learning: The Wordbank Project by MF and colleagues (MIT Press, 2021).

Large language models show remarkable capacities, but it is unclear what abstractions support their behaviour. Methods from developmental psychology can help researchers to understand the representations used by these models, complementing standard computational approaches — and perhaps leading to insights about the nature of mind.

Gencaga, Deniz. Information-Theoretic Approaches in Deep Learning. Entropy. August, 2018. An Antalya Bilim University, Turkey informatics engineer proposes a special issue with this title. It has a June 30, 2019 closing date for submissions.

Deep Learning (DL) has revolutionized machine learning, especially in the last decade. As a benefit of this unprecedented development, we are capable of working with very large Neural Networks (NNs), composed of multiple layers (Deep Neural Networks, DNNs), in many applications, such as object recognition-detection, speech recognition and natural language processing. Although many Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based algorithms have been proposed, a comprehensive theoretical understanding of DNNs remains a major research area. In this Special Issue, we would like to collect papers focusing on both the theory and applications of information-theoretic approaches, such as Mutual Information. The application areas are diverse and include object tracking/detection, speech recognition, natural language processing, neuroscience, bioinformatics, engineering, finance, astronomy, and Earth and space sciences.
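As a concrete anchor for the issue's theme, the plug-in estimate of mutual information between two discrete variables takes only a few lines. This is a generic sketch using base-2 logarithms, not any particular submission's method:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) = sum_{x,y} p(x,y) log2(p(x,y)/(p(x)p(y)))
    from paired samples of two discrete variables."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y)=c/n, p(x)=px[x]/n, p(y)=py[y]/n; the n's combine as n*n.
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi

# A fair binary variable carries one full bit about itself, none about a constant.
bits = [0, 1, 0, 1, 0, 1, 0, 1]
```

Information-theoretic analyses of DNNs track quantities like this between layer activations and inputs or labels as training proceeds.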

George, Daniel and Eliu Antonio Huerta. Deep Neural Networks to Enable Real-Time Multimessenger Astrophysics. arXiv:1701.00008. Along with a pervasive employ of Bayesian (search) probabilities in mid 2010s science, as these University of Illinois astronomers explain, another incisive approach is the use of artificial neural nets as a generic self-organizing complex system of universal application. See also in this cosmic realm Deep Learning for Studies of Galaxy Morphology (1701.05917), and Star-Galaxy Classification using Deep Convolutional Neural Networks (1608.04369). (Spiral Science)

Gifford, Alessandro, et al. The Algonauts Project 2025 Challenge. arXiv:2501.00504. Freie Universität Berlin, Goethe Universität Frankfurt, Université de Montréal, and MIT neuroscientists including Radoslaw Cichy describe an array of innovative AI adventures as a way to better understand how brains perform and may interface with computational media. An example would be Automating the Search for Artificial Life with Foundation Models at pub.sakana.ai/asal, second quote.

There is growing symbiosis between artificial and biological intelligence sciences: neural principles inspire new intelligent machines, which are in turn used to advance our theoretical understanding of the brain. Here we introduce the 2025 edition: How the Human Brain Makes Sense of Multimodal Movies. In collaboration with the Courtois Project on Neuronal Modelling, our aim is to bring forth a new generation of brain encoding models that generalize well by training them on large datasets of fMRI responses. (Excerpt)

Artificial Life (ALife) has not yet integrated FMs, which presents an opportunity to move beyond manual design and trial-and-error to the discovery of lifelike simulations. The proposed approach, called Automated Search for Artificial Life (ASAL), (1) finds simulations that produce target phenomena, (2) generates temporally open-ended novelty, and (3) illuminates an entire space of interestingly diverse versions. A major result is finding novel Lenia and Boids lifeforms, as well as open-ended cellular automata. (Sakana AI, MIT)

A foundation model is a deep learning model that is trained on vast datasets so it can be applied across a wide range of use cases. Generative AI applications like Large Language Models are examples. (Wikipedia)
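The ASAL search described in the quote can be abstracted as score-guided sampling over simulation rules. In this sketch the foundation model is replaced by an ordinary scoring function of our own, so it conveys only the shape of the method, not its substance:

```python
import random

def asal_style_search(score, sample_rule, trials=200, seed=2):
    """Repeatedly sample candidate simulation rules and keep the one the
    scoring model rates highest. In ASAL proper, `score` is a foundation
    model judging how lifelike or novel a simulation's output looks."""
    rng = random.Random(seed)
    best_rule, best_score = None, float("-inf")
    for _ in range(trials):
        rule = sample_rule(rng)
        s = score(rule)
        if s > best_score:
            best_rule, best_score = rule, s
    return best_rule, best_score

# Toy usage: search 8-bit elementary-CA rule numbers for a made-up target
# (rules whose binary form has exactly four 1-bits score best, at 0).
rule, s = asal_style_search(
    score=lambda r: -abs(bin(r).count("1") - 4),
    sample_rule=lambda rng: rng.randrange(256),
)
```

ASAL's actual contribution lies in what plays the role of `score`: vision-language foundation models that can judge open-ended qualitative targets such as "looks alive," which no hand-written function could express.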
