Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Manzalini, Antonio. Complex Deep Learning with Quantum Optics. Quantum Reports. 1/1, 2019. In this new MDPI online journal, a senior manager in the Innovation Dept. of Telecom Italia Mobile (TIM), bio below, advances the frontiers of this current assimilation of a lively quantum cosmos with human neural net cognizance. See also, e.g., the cited reference Quantum Fields as Deep Learning by Jae-Weon Lee at arXiv:1708.07408. While a prior physics mindset worried over an opaque strangeness, in these later 2010s, via instant global collaborations, a profound new understanding and treatment becomes possible.

The rapid push towards telecommunications infrastructures such as 5G capacity and the Internet drives a strong interest in artificial intelligence (AI) methods, systems, and networks. Processing big data to infer patterns at high speeds with low power consumption is a central technological challenge. Today, emerging research fields rooted in quantum optics, deep neural networks (DNNs), and nanophotonics are cross-informing each other. This paper elaborates on these topics and proposes a theoretical architecture for a Complex DNN made from programmable metasurfaces. An example is provided which shows a correspondence between the equivariance of convolutional neural networks and the invariance principle of gauge transformations. (Abstract)
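
As a concrete aside, the equivariance the abstract invokes can be checked in a few lines: shifting an image and then convolving gives the same result as convolving and then shifting. A minimal sketch (our illustration in PyTorch, not from the paper), using circular padding so the translation group is exact on a periodic grid:

import torch
import torch.nn.functional as F

# A random convolution kernel and a random 2D "image".
torch.manual_seed(0)
image = torch.randn(1, 1, 16, 16)    # (batch, channels, H, W)
kernel = torch.randn(1, 1, 3, 3)

def conv(x):
    # Circular padding keeps the translation group exact on a periodic grid.
    return F.conv2d(F.pad(x, (1, 1, 1, 1), mode="circular"), kernel)

def shift(x):
    return torch.roll(x, shifts=(3, 5), dims=(2, 3))

# Equivariance: conv(shift(x)) equals shift(conv(x))
print(torch.allclose(conv(shift(image)), shift(conv(image)), atol=1e-5))  # True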

Antonio Manzalini received an M.Sc. degree in Electronic Engineering from the Politecnico di Torino and a Ph.D. in Computer Science and Networks from Télécom SudParis and Université Pierre & Marie Curie – Sorbonne. His results have been published in more than 130 technical papers. His interests at TIM concern SDN and NFV, Cloud vs. Multi-Access Edge Computing for 5G, the Future Internet, and Quantum Communications.

Marchetti, Tommaso, et al. An Artificial Neural Network to Discover Hypervelocity Stars. arXiv:1704.07990. An eight-member European astrophysicist team finds this cerebral procedure to be a fruitful way to distill the myriad data findings of the Gaia space telescope mission. Once again, we note how such a collaboration may appear as a worldwide sapiensphere proceeding to learn on her/his own.

The paucity of hypervelocity stars (HVSs) known to date has severely hampered their potential to investigate the stellar population of the Galactic Centre and the Galactic Potential. The first Gaia data release gives an opportunity to increase the current sample. The challenge is of course the disparity between the expected number of hypervelocity stars and that of bound background stars (around 1 in 10^6). We have applied a novel data mining algorithm based on machine learning techniques, an artificial neural network, to the Tycho-Gaia astrometric solution (TGAS) catalogue. (Abstract excerpt)
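
To make the data-mining idea concrete, here is a minimal sketch of a small neural classifier with a re-weighted loss for such extreme (roughly 1 in 10^6) class imbalance. All shapes, features, and hyperparameters below are illustrative assumptions in PyTorch; the paper's actual network and training setup differ:

import torch
import torch.nn as nn

# Toy stand-in for TGAS-like astrometric features (parallax, proper motions, ...).
# Shapes and rates are hypothetical, chosen only to mimic severe imbalance.
X = torch.randn(10_000, 5)
y = (torch.rand(10_000) < 0.001).float()   # rare positives

model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 1))

# Up-weight the rare positive class so the loss does not ignore it.
neg, pos = (y == 0).sum(), y.sum().clamp(min=1.0)
loss_fn = nn.BCEWithLogitsLoss(pos_weight=neg / pos)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()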

Marcus, Gary. The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence. arXiv:2002.06177. The NYU psychologist and founder of Robust AI has rightly situated himself as a reality checker and quality-control moderator as this hyperactive endeavor moves into every aspect that it can. See also his 2019 book Rebooting AI: Building Artificial Intelligence We Can Trust.

Recent research in artificial intelligence and machine learning has largely emphasized general-purpose learning and ever-larger training sets and more and more compute. In contrast, I propose a hybrid, knowledge-driven, reasoning-based approach, centered around cognitive models, that could provide the substrate for a richer, more robust AI than is currently possible.

Mathuriya, Amrita, et al. CosmoFlow: Using Deep Learning to Learn the Universe at Scale. arXiv:1808.04728. As the brain-based AI revolution proceeds, seventeen authors from Intel, LBNL, Cray, and UC Berkeley scope out their neural network applications, as is being done everywhere else, across the celestial raiment. Indeed, as this realm becomes similarly amenable, one might get the impression that the whole cosmos is somehow cerebral, or genomic, in nature.

Deep learning is a promising tool to determine the physical model that describes our universe. To handle the considerable computational cost of this problem, we present CosmoFlow: a highly scalable deep learning application built on top of the TensorFlow framework. CosmoFlow uses efficient implementations of 3D convolution and pooling primitives, together with improvements in threading for many element-wise operations, to improve training performance on Intel® Xeon Phi™ processors. We also utilize the Cray PE Machine Learning Plugin for efficient scaling to multiple nodes. To our knowledge, this is the first large-scale science application of the TensorFlow framework at supercomputer scale with fully-synchronous training. (Abstract)

Deep Learning for Cosmology: The nature of dark energy is one of the most exciting and fundamental questions facing scientists today. Dark energy is the unknown force that is driving the accelerated expansion of the universe, and is the subject of several current and future experiments that will survey the sky in multiple wavelengths. We cannot measure dark energy directly - we can only observe the effect it has on the observable universe. The interplay of gravity (pulling matter together) and dark energy (expanding space itself) is encoded in the distribution of matter in the universe today. Cosmologists typically characterize this distribution using statistical measures of the structure of matter – its “clumpiness” - in the form of two- or three-point correlation functions or other reduced statistics. Methods that capture all features in the distribution of matter (such as deep learning networks) could give greater insight into the nature of dark energy. (1)
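
The computational core of such work is 3D convolution and pooling over simulated matter-density volumes, regressed onto cosmological parameters. A minimal PyTorch sketch in that spirit (the published CosmoFlow network, built on TensorFlow, is larger and differently configured):

import torch
import torch.nn as nn

# A minimal 3D conv regressor over a dark-matter density volume,
# loosely in the spirit of CosmoFlow; the actual architecture differs.
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),                     # 64^3 -> 32^3
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),                     # 32^3 -> 16^3
    nn.Flatten(),
    nn.Linear(32 * 16 ** 3, 3),          # e.g. predict (Omega_m, sigma_8, n_s)
)

volume = torch.randn(4, 1, 64, 64, 64)   # (batch, channel, D, H, W)
params = model(volume)                   # -> (4, 3) cosmological parameters
print(params.shape)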

Mehta, Pankaj and David Schwab. An Exact Mapping between the Variational Renormalization Group and Deep Learning. arXiv:1410.3831. We cite this entry because Boston University and Northwestern University physicists show a common affinity between this intelligent neural net method and dynamic physical qualities. So we muse: could it be imagined that cosmic nature may be in some actual way cerebral in kind, engaged in a grand educative experience? One is reminded of the British physicist James Jeans’ 1930 quote: “The universe begins to look more like a great thought than like a great machine.” See also Machine Learning meets Quantum State Preparation at arXiv:1705.00565.

Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data. (Abstract)
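
For reference, the paper's central identity can be stated compactly in LaTeX (a schematic, with notation lightly adapted; see the paper for sign and coupling conventions). The variational RG defines a coarse-grained Hamiltonian by tracing visible spins v against hidden spins h, while a restricted Boltzmann machine defines a marginal energy over its hidden units; choosing the coupling operator to absorb the RBM energy makes the two coincide:

\[
  e^{-H^{\mathrm{RG}}_{\lambda}[\{h\}]}
    = \operatorname{Tr}_{\{v\}} e^{\,T_{\lambda}(\{v\},\{h\}) - H(\{v\})}
  \qquad \text{(variational RG step)}
\]
\[
  e^{-H^{\mathrm{RBM}}_{\lambda}[\{h\}]}
    = \operatorname{Tr}_{\{v\}} e^{-E(\{v\},\{h\})}
  \qquad \text{(RBM marginal over hidden units)}
\]
\[
  T_{\lambda}(\{v\},\{h\}) = -E(\{v\},\{h\}) + H(\{v\})
  \;\;\Longrightarrow\;\;
  H^{\mathrm{RG}}_{\lambda}[\{h\}] = H^{\mathrm{RBM}}_{\lambda}[\{h\}]
\]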

Metz, Thomas and Robert Ewing, co-chairs. Decoding the Molecular Universe -- Workshop Report. arXiv:2311.11437. We cite this 60-page document by some 30 presenters as a good instance of how such large scientific projects are turning to broad AI capabilities so as to empower the way forward.

On August 9-10, 2023, a workshop of international researchers in metabolomics, chemical ecology, biological threat assessment, computational chemistry, cloud computing, artificial intelligence, and more was held at the Pacific Northwest National Laboratory (PNNL) in Richland, WA. The subject was the feasibility of a grand-scale project to create new technologies to identify and quantify enough small molecules so as to decode the molecular universe.

Frank, Michael. Baby Steps in Evaluating the Capacities of Large Language Models. Nature Reviews Psychology. 2/6, 2023. A Stanford University child psychologist offers another recognition that an intrinsic affinity seems apparent between such ChatGPT resources and how children achieve literacy and factual comprehension. It is then recommended that an integrative accord between the two general approaches would be beneficial. See also Variability and Consistency in Early Language Learning: The Wordbank Project by M. Frank and colleagues (MIT Press, 2021).

Large language models show remarkable capacities, but it is unclear what abstractions support their behaviour. Methods from developmental psychology can help researchers to understand the representations used by these models, complementing standard computational approaches — and perhaps leading to insights about the nature of mind.

Min, Seonwoo, et al. Deep Learning in Bioinformatics. Briefings in Bioinformatics. 18/5, 2017. Seoul National University biologists present a tutorial survey of this novel union of profuse big data and deep neural net capabilities as they may serve studies of life’s informational essence. See also, e.g., Deep Learning for Computational Biology in Molecular Systems Biology (12/878, 2016).

Here, we review deep learning in bioinformatics. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. (Abstract excerpts)
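
As one concrete instance of the omics category the review surveys, a 1D convolutional network over one-hot-encoded DNA is a common motif-detection design. The sketch below is a generic PyTorch illustration under our own assumptions, not a model from the paper:

import torch
import torch.nn as nn

# A generic 1D CNN over one-hot-encoded DNA, the kind of omics model
# the review categorizes (illustrative only).
BASES = "ACGT"

def one_hot(seq: str) -> torch.Tensor:
    idx = torch.tensor([BASES.index(b) for b in seq])
    return nn.functional.one_hot(idx, num_classes=4).T.float()  # (4, L)

model = nn.Sequential(
    nn.Conv1d(4, 8, kernel_size=8),   # motif-detector-like filters
    nn.ReLU(),
    nn.AdaptiveMaxPool1d(1),          # strongest motif match per filter
    nn.Flatten(),
    nn.Linear(8, 1),                  # e.g. a binding-site score
)

x = one_hot("ACGTACGTACGTACGTACGT").unsqueeze(0)  # (1, 4, 20)
print(model(x).shape)                             # torch.Size([1, 1])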

Mitchell, Melanie. What Does It Mean to Align AI with Human Values? Quanta. December 12, 2022. In the midst of the current burst of advances (ChatGPT) and algorithm concerns, the Santa Fe Institute complexity professor updates her prior attentions to this looming issue. See, for example, her book Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus and Giroux, 2019).

Mudhsh, Mohammed and Rolla Almodfer. Arabic Handwritten Alphanumeric Character Recognition Using Very Deep Neural Network. Information. Online August, 2017. We record this entry by Wuhan University of Technology computer scientists to convey a nascent worldwide knowledge which is lately able to parse ancient scripts by way of 21st century computational methods. A further notice is that the use of these dynamic cerebral architectures could well imply a global brain coming to its own retrospect and prospect.

The traditional algorithms for recognizing handwritten alphanumeric characters are dependent on hand-designed features. In recent days, deep learning techniques have brought about new breakthrough technology for pattern recognition applications, especially for handwritten recognition. However, deeper networks are needed to deliver state-of-the-art results in this area. In this paper, inspired by the success of the very deep state-of-the-art VGGNet, we propose Alphanumeric VGG net for Arabic handwritten alphanumeric character recognition. Alphanumeric VGG net is constructed by thirteen convolutional layers, two max-pooling layers, and three fully-connected layers. We have achieved very promising results, with a validation accuracy of 99.66% for the ADBase database and 97.32% for the HACDB database. (Abstract)
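
The abstract fixes the layer counts (thirteen convolutional, two max-pooling, three fully connected); the filter widths, pool placements, and input size below are our illustrative guesses at such a VGG-style stack, sketched in PyTorch rather than taken from the paper:

import torch
import torch.nn as nn

# A VGG-style sketch matching the abstract's stated layer counts
# (13 conv, 2 max-pool, 3 fully connected); all widths are assumptions.
def block(in_ch, out_ch, n_convs):
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU()]
    return layers

model = nn.Sequential(
    *block(1, 64, 6), nn.MaxPool2d(2),    # 6 convs, pool: 32x32 -> 16x16
    *block(64, 128, 7), nn.MaxPool2d(2),  # 7 convs, pool: 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(128 * 8 * 8, 1024), nn.ReLU(),
    nn.Linear(1024, 512), nn.ReLU(),
    nn.Linear(512, 10),                   # 10 digit classes (ADBase)
)

x = torch.randn(8, 1, 32, 32)             # grayscale character crops
print(model(x).shape)                     # torch.Size([8, 10])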

What Is a Large Language Model, the Tech Behind ChatGPT? blog.dataiku.com/large-language-model-chatgpt. This is a tutorial posted by Dataiku, a global AI service group, which offers a reasonably succinct, general explanation. Sections such as A Large Language Model is a Type of Neural Network, An LLM Uses a Transformer Architecture, An LLM Builds Itself, and LLMs Produce Text that Sounds Right but Cannot Know that it is Right recite how these databases are composed. But one wonders who chooses the books, articles, and website content already on the Internet, what guides that selection, for which application, and so on. Is it a personal eLibrarian, but one that cannot be trusted? At my wife’s Baystate Medical Center library in the 2010s I would see physicians search for relevant subject information; what is the difference, and how do these “LLMs” know better?
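
Since the tutorial's sections lean on the phrase "transformer architecture," a minimal scaled dot-product self-attention, the mechanism that term names, may help fix ideas. This is a generic sketch under our own simplifications, not Dataiku's or any production LLM's code:

import torch
import torch.nn.functional as F

# Minimal single-head self-attention, the core of the transformer
# architecture (a sketch; real LLMs stack many heads and layers).
def self_attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv                 # project each token
    scores = q @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    weights = F.softmax(scores, dim=-1)              # each token attends to all
    return weights @ v                               # weighted mix of values

d = 16                                               # embedding size
tokens = torch.randn(5, d)                           # 5 token embeddings
Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
out = self_attention(tokens, Wq, Wk, Wv)             # -> (5, 16)
print(out.shape)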

Ornes, Stephen. Researchers Turn to Deep Learning to Decode Protein Structures. PNAS. 119/10, 2022. We note this science report to highlight the now broadly available frontier neural net capabilities that are serving to revolutionize biochemical research and knowledge.

AlphaFold (DeepMind) uses AI to predict the shapes of proteins; structural biologists are using the program to deepen our understanding of the big molecules. An accompanying image shows AlphaFold’s predicted structure (in magenta) of a glycoprotein found on the surface of a T cell. (1) The revolution in structural biology isn’t attributable to AI alone; the algorithms have to train on big datasets of high-resolution structures generated by technologies such as X-ray crystallography, NMR spectroscopy, or cryogenic electron microscopy, which produced a like image of a protein complex called β-galactosidase. (3)

In the future, researchers see a role for deep learning not only in understanding a protein’s shape but also how it interacts within a living system. Deep learning models may predict not only the sequence of amino acids that would produce the needed shape, but also how they’ll behave — and interact with other molecules in their biological neighborhood — once they’re in place. (4)
