Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Chen, Boyuan, et al. Discovering State Variables Hidden in Experimental Data. arXiv:2112.10755. This entry by Columbia University computer scientists led by Hod Lipson offers a good survey of how this computational endeavor began and where it stands today. It opens by noting historic studies of physical laws and motions as a search for elusive state variables. From 2021, the authors advise that novel AI methods can now, as not before, achieve analyses deep enough to discern such variables in dynamic systems such as reaction-diffusion. See also Distilling Free-Form Natural Laws from Experimental Data by Michael Schmidt and Hod Lipson in Science (324/5923, 2009, second Abstract).

All physical laws are based on relationships between state variables, which give a description of the relevant system dynamics. However, the process of identifying the hidden state variables has so far resisted AI techniques. We propose a new principle to find how many state variables an observed system is likely to have, and what these variables might be. Without any prior knowledge of the underlying physics, our algorithm discovers the intrinsic dimension of the observed dynamics and identifies sets of state variables. We suggest that this approach could help catalyze the understanding, prediction and control of increasingly complex systems. (Excerpt)

For centuries, scientists have attempted to identify and document analytical laws that underlie physical phenomena. Despite much computing power, the process of finding natural laws and their equations has resisted automation. The key challenge is to define an algorithm that can insightfully correlate observed data sets. Without prior knowledge about physics, kinematics, or geometry, our algorithm discovered Hamiltonians, Lagrangians, and momentum conservation. (2009 Abstract)
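
To make the idea concrete, here is a minimal sketch, not the authors' method: Chen et al. use an autoencoder pipeline with a geometric intrinsic-dimension estimate, whereas this toy uses plain PCA on a simulated, randomly embedded pendulum to show how high-dimensional observations of a simple system betray only a few underlying state variables. All parameters here are illustrative.

```python
# A minimal sketch (not the paper's autoencoder method): PCA stands in for an
# intrinsic-dimension estimate, applied to high-dimensional observations of a
# pendulum whose true state is just (angle, angular velocity).
import numpy as np

rng = np.random.default_rng(0)

# Simulate a frictionless pendulum: two true state variables.
dt, steps = 0.01, 5000
theta, omega = 0.5, 0.0
states = []
for _ in range(steps):
    omega -= 9.81 * np.sin(theta) * dt
    theta += omega * dt
    states.append([theta, omega])
states = np.array(states)                      # shape (5000, 2)

# "Observe" the system through a random 50-dimensional linear measurement.
embed = rng.normal(size=(2, 50))
observations = states @ embed                  # shape (5000, 50)

# PCA: count components needed to explain 99% of the variance.
centered = observations - observations.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
variance_ratio = (s ** 2) / np.sum(s ** 2)
n_vars = int(np.searchsorted(np.cumsum(variance_ratio), 0.99)) + 1
print("estimated number of state variables:", n_vars)   # expect 2
```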

Ching, Travers, et al. Opportunities and Obstacles for Deep Learning in Biology and Medicine. Journal of the Royal Society Interface. 14/141, 2018. Some 40 researchers from institutes, laboratories and hospitals in the USA, Canada and the UK well survey current applications, potentials, and problems for this cerebral-based AI revolution. Or as Siddhartha Mukherjee, MD wrote in the New Yorker last year: "the algorithm will see you now."


Ciliberto, Carlo, et al. Quantum Machine Learning. Proceedings of the Royal Society A. 474/0551, 2017. University College London and MPI Intelligent Systems researchers provide a state of the science and art as the AI revolution, by way of its novel biological, neural net basis, becomes widely applicable. Here quantum phenomena, as they become affine with classical macro-modes, seem to bode for a cosmic connectome.

Cranmer, Miles, et al. Discovering Symbolic Models from Deep Learning with Inductive Biases. arXiv:2006.11287. Seven Princeton U., DeepMind London, NYU, and Flatiron Institute, NYC computer specialists articulate yet another effective machine procedure as our learning (and hopefully thinking) planet begins to spiral up to a prodigious Earthropic sapiens phase.

We develop a general approach to distill symbolic representations of a learned deep model by introducing strong inductive biases. We focus on Graph Neural Networks (GNNs) that encourage sparse latent representations and apply symbolic regression to learned model components to extract physical relations. We go on to study a detailed dark matter cosmology sample and discover an analytic formula that can predict the concentration of dark matter from the mass distribution of nearby cosmic structures. Our approach offers new ways to interpret neural networks and reveal physical principles from their representations. (Abstract)
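
As a toy illustration of the symbolic-regression step alone (the paper first trains a graph neural network and then fits its internal messages with a dedicated symbolic-regression engine), the following sketch recovers an invented inverse-square law from noisy samples using a small hand-made library of candidate expressions. The data and candidate forms are assumptions made for the example.

```python
# A toy sketch of symbolic regression only: fit each candidate expression's
# coefficient by least squares and keep the form with the lowest error.
import numpy as np

rng = np.random.default_rng(1)
r = rng.uniform(0.5, 3.0, size=200)
force = 1.7 / r**2 + rng.normal(scale=0.01, size=r.size)   # hidden law: F = a / r^2

# Candidate basis functions the "regression" may choose among.
candidates = {
    "a * r":      r,
    "a / r":      1.0 / r,
    "a / r**2":   1.0 / r**2,
    "a * log(r)": np.log(r),
}

best_name, best_err, best_coef = None, np.inf, None
for name, basis in candidates.items():
    a = np.dot(basis, force) / np.dot(basis, basis)   # least-squares coefficient
    err = np.mean((force - a * basis) ** 2)
    if err < best_err:
        best_name, best_err, best_coef = name, err, a

print(f"recovered law: F = {best_name} with a = {best_coef:.2f}, mse = {best_err:.2e}")
```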

Das Sarma, Sankar, et al. Machine Learning Meets Quantum Physics. Physics Today. March, 2019. In a “most read” journal paper, University of Maryland and Tsinghua University, Beijing computational theorists show how these disparate fields actually have common qualities, which can serve to inform, meld and advance each endeavor. The graphic article goes on to compare affine neural network and quantum states so as to join “classical and quantum” phases, especially for communicative and computational purposes.

Dawid, Anna, et al. Modern Applications of Machine Learning in Quantum Sciences. arXiv:2204.04198. As this planetary spiral ascent and method proceeds, some thirty contributors across Europe and the USA put together a 288 page, 730 reference volume which covers this entire frontier field to date. In regard, the book is an awesome testimony to our Earthuman abilities to delve deep and scan afar so an ecosmic genesis uniVerse can be able to describe and learn all about itself.

In this book, we provide a comprehensive introduction to the most recent advances in the application of machine learning methods in quantum sciences. We cover the use of deep learning and kernel methods in supervised, unsupervised, and reinforcement learning algorithms for phase classification, representation of many-body quantum states, quantum feedback control, and quantum circuits optimization. Moreover, we introduce and discuss more specialized topics such as differentiable programming, generative models, statistical approach to machine learning, and quantum machine learning. (Abstract)

In the last decade, Machine Learning has been intensively studied and has revolutionized many topics, including computer vision and natural language processing. The new toolbox and set of ideas coming from this field have also found successful applications in the sciences. In particular, ML and DL have been used to tackle problems in the physical and chemical sciences, both in the classical and quantum regimes from particle physics, fluid dynamics, cosmology, many-body quantum systems to quantum computing and quantum information theory [697]. (234)

Hi! I’m a research fellow at the Center of Computational Quantum Physics of the Flatiron Institute in New York, happily playing with interpretable machine learning for science. I defended my joint Ph.D. degree in physics and photonics in September 2022 at the University of Warsaw and ICFO – The Institute of Photonic Sciences, Spain. Before that, I did my MSc in quantum chemistry and BSc in biotechnology at the University of Warsaw. (Anna Dawid)

De Marzo, Giordano, et al. Emergence of Scale-Free Networks in Social Interactions among Large Language Models. arXiv:2312.06619. Senior theorists at Centro Ricerche Enrico Fermi, Rome and Complexity Science Hub, Vienna (Luciano Pietronero, David Garcia) scope out how a working integration might be achieved between these conceptual domains (agent-based models and large language models), which are currently meshing with each other. In some way they have their own affinities and can result in novel features. This entry is also keyed with the special The Psychology of Collectives issue in Perspectives on Psychological Science for December 2023.

Scale-free networks are iconic examples of emergent behavior, such as online social media in which users can follow each other. By analyzing the interactions of many generative agents using GPT-3.5-turbo as a language model, we show their ability not only to mimic human linguistic behavior but also to exhibit collective societal phenomena. We show how renaming agents allows the model to generate a range of realistic scale-free networks. (Excerpts)
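
A minimal stand-in for the setup described above: the GPT-3.5 agent calls are replaced by a simple rule that follows other agents in proportion to their current follower counts (preferential attachment), one elementary route by which a scale-free follow network can emerge. The agent count, seed, and smoothing term are arbitrary choices for the sketch.

```python
# Sketch only: language-model agents are replaced by a popularity-proportional
# follow rule; the heavy-tailed follower distribution is the scale-free signature.
import random
from collections import Counter

random.seed(42)
n_agents = 2000
followers = Counter()                     # follower count (in-degree) per agent

for new_agent in range(1, n_agents):
    existing = list(range(new_agent))
    # +1 smoothing lets newcomers with zero followers still be discovered.
    weights = [followers[a] + 1 for a in existing]
    target = random.choices(existing, weights=weights, k=1)[0]
    followers[target] += 1

distribution = Counter(followers[a] for a in range(n_agents))
for k in sorted(distribution)[:10]:
    print(f"{distribution[k]:5d} agents have {k} followers")
```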

Deng, Dong-Ling, et al. Quantum Entanglement in Neural Network States. Physical Review X. 7/021021, 2017. University of Maryland, and Fudan University, Shanghai, theorists identify, develop and extol the practical affinity of neural network cognitive geometries and operational workings with quantum phase phenomena. If to reflect, how might its application to this fundamental cosmic realm imply an intrinsic cerebral character and content? A step further, if global human beings can so readily plumb such depths, and intentionally apply these basic, principled methods, could a creative universe intend for their passage to our cognizance and continuance?

Machine learning, one of today’s most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. Our results uncover the unparalleled power of artificial neural networks in representing quantum many-body states regardless of how much entanglement they possess, which paves a novel way to bridge computer-science-based machine-learning techniques to outstanding quantum condensed-matter physics problems. (Abstract excerpts)
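
For readers new to the architecture, here is a small numpy sketch of the restricted-Boltzmann-machine wavefunction ansatz the paper analyzes, psi(s) = exp(sum_i a_i s_i) * prod_j 2 cosh(b_j + sum_i W_ij s_i); the weights below are random placeholders rather than a trained physical state, and the system size is kept tiny so the full state vector can be enumerated.

```python
# A small sketch of the RBM wavefunction ansatz; weights are random placeholders.
import itertools
import numpy as np

rng = np.random.default_rng(3)
n_visible, n_hidden = 4, 8                 # 4 spins, 8 hidden units
a = rng.normal(scale=0.1, size=n_visible)
b = rng.normal(scale=0.1, size=n_hidden)
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))

def rbm_amplitude(spins):
    """Unnormalized wavefunction amplitude for one spin configuration (+1/-1)."""
    s = np.asarray(spins, dtype=float)
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(b + s @ W))

# Enumerate all 2^4 configurations and normalize the state vector.
configs = list(itertools.product([1, -1], repeat=n_visible))
psi = np.array([rbm_amplitude(c) for c in configs])
psi /= np.linalg.norm(psi)
print("probability of all spins up:", psi[0] ** 2)
```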

Drori, Iddo. The Science of Deep Learning. Cambridge: Cambridge University Press, 2022. The volume offers a current survey that covers a wide range of aspects of the field.

The book begins by covering the foundations of deep learning, followed by key learning architectures. It includes topics such as Transformers, graph neural networks, variational autoencoders, and deep reinforcement learning, with a broad range of applications. An accompanying website provides complementary code and hundreds of exercises with solutions.

Iddo Drori is a faculty member in Computer Science at Boston University and an adjunct professor at Columbia University. He was a lecturer at MIT EECS and at Cornell University in operations research and information engineering.

Dufourq, Emmanuel and Bruce Bassett. EDEN: Evolutionary Deep Networks for Efficient Machine Learning. arXiv:1709.09161. University of Cape Town and African Institute for Mathematical Sciences theorists add a temporal depth to neural net computations by informing and integrating them with evolutionary phenomena. By turns, life's quickening emergence might be likened to a grand educative endeavor. See also Evolving Deep Neural Networks at arXiv:1703.00548 for a companion effort.

Deep neural networks continue to show improved performance with increasing depth, an encouraging trend that implies an explosion in the possible permutations of network architectures and hyperparameters for which there is little intuitive guidance. To address this increasing complexity, we propose Evolutionary DEep Networks (EDEN), a computationally efficient neuro-evolutionary algorithm which interfaces to any deep neural network platform, such as TensorFlow. Evaluation of EDEN across seven image and sentiment classification datasets shows that it reliably finds good networks -- and in three cases achieves state-of-the-art results -- even on a single GPU, in just 6-24 hours. Our study provides a first attempt at applying neuro-evolution to the creation of 1D convolutional networks for sentiment analysis including the optimisation of the embedding layer. (Abstract)
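
A minimal sketch of the neuro-evolutionary idea, not the authors' EDEN code: a tiny genetic algorithm evolves (depth, width, learning rate) genomes, with a synthetic fitness function standing in for actually training a TensorFlow network and measuring validation accuracy. The genome fields and fitness shape are assumptions made for the example.

```python
# Toy neuro-evolution: truncation selection plus mutation over hyperparameter genomes.
# The fitness function is a synthetic proxy, not real network training.
import math
import random

random.seed(7)

def random_genome():
    return {
        "depth": random.randint(1, 8),
        "width": random.choice([16, 32, 64, 128, 256]),
        "lr": 10 ** random.uniform(-4, -1),
    }

def fitness(g):
    # Hypothetical proxy: prefers moderate depth/width and a learning rate near 1e-3.
    return (-abs(g["depth"] - 4)
            - abs(math.log10(g["width"]) - 1.8) * 3
            - abs(math.log10(g["lr"]) + 3))

def mutate(g):
    child = dict(g)
    key = random.choice(list(child))
    child[key] = random_genome()[key]
    return child

population = [random_genome() for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                                   # truncation selection
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print("best architecture found:", max(population, key=fitness))
```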

Dunjko, Vedran and Hans Briegel. Machine Learning & Artificial Intelligence in the Quantum Domain. Reports on Progress in Physics. 81/7, 2018. University of Innsbruck physicists scope out how these dual approaches may cross-inform and merge so as to be fruitfully applied to this deepest frontier. See also, for example, Quantum Neural Network States by Zhih-Ahn Jia, et al at arXiv:1808.10601.

Quantum information technologies and intelligent learning systems are both emergent technologies that are likely to have a transformative impact on our society. In a growing body of recent work, researchers have been probing the question of the extent to which these fields can indeed learn and benefit from each other. Quantum ML explores the interaction between quantum computing and ML, investigating how results and techniques from one field can be used to solve the problems of the other. Beyond the topics of mutual enhancement—exploring what ML/AI can do for quantum physics and vice versa—researchers have also broached the fundamental issue of quantum generalizations of learning and AI concepts. This deals with questions of the very meaning of learning and intelligence in a world that is fully described by quantum mechanics. (Abstract excerpts)

Gencaga, Deniz. Information-Theoretic Approaches in Deep Learning. Entropy. August, 2018. An Antalya Bilim University, Turkey informatics engineer proposes a special issue with this title. It has a June 30, 2019 closing date for submissions.

Deep Learning (DL) has revolutionized machine learning, especially in the last decade. As a benefit of this unprecedented development, we are capable of working with very large Neural Networks (NNs), composed of multiple layers (Deep Neural Networks), in many applications, such as object recognition-detection, speech recognition and natural language processing. Although many Convolutive Neural Network (CNN) and Recurrent Neural Network (RNN) based algorithms have been proposed, a comprehensive theoretical understanding of DNNs remains a major research area. In this Special Issue, we would like to collect papers focusing on both the theory and applications of information-theoretic approaches, such as Mutual Information. The application areas are diverse and include object tracking/detection, speech recognition, natural language processing, neuroscience, bioinformatics, engineering, finance, astronomy, and Earth and space sciences.
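
As a pointer to the kind of quantity the issue invites, here is a minimal histogram-based estimate of the mutual information I(X;Y) between two correlated variables, the sort of measure used to probe deep-network layers; the data, coefficients, and bin count are arbitrary choices for the sketch.

```python
# Minimal mutual-information estimate: I(X;Y) = sum p(x,y) log( p(x,y) / (p(x) p(y)) ),
# computed from a 2-D histogram of synthetic correlated Gaussian data.
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=100_000)
y = 0.8 * x + 0.6 * rng.normal(size=100_000)   # correlated with x

# Joint and marginal distributions from a 2-D histogram.
joint, _, _ = np.histogram2d(x, y, bins=40)
pxy = joint / joint.sum()
px = pxy.sum(axis=1, keepdims=True)
py = pxy.sum(axis=0, keepdims=True)

mask = pxy > 0
mi = np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask]))
print(f"estimated mutual information: {mi:.3f} nats")   # analytic value is about 0.51
```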
