II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Twintelligent Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Ornes, Stephen. Researchers Turn to Deep Learning to Decode Protein Structures. PNAS. 119/10, 2022. We note this science report to highlight the growing broad avail of frontier neural net capabilities that are serving to revolutionize biochemical research and knowledge. AlphaFold (DeepMind) uses AI to predict the shapes of proteins; structural biologists are using the program to deepen our understanding of these big molecules.

This image shows AlphaFold's predicted structure (in magenta) of a glycoprotein found on the surface of a T cell. (1) The revolution in structural biology isn't attributable to AI alone; the algorithms have to train on big datasets of high-resolution structures generated by technologies such as X-ray crystallography, NMR spectroscopy or cryogenic electron microscopy, which produced the above image of a protein complex called β-galactosidase. (3)

Palazzi, Maria, et al. Online Division of Labour: Emergent Structures in Open Source Software. arXiv:1903.03375. Internet Interdisciplinary Institute, Open University of Catalonia computer theorists report that even group-wide developments of computational codes can be seen to take on and follow the same common course as all other organic assemblies. A further implication, we add, would be another perception that cosmic evolutionary nature draws upon and repeats this same complex adaptive systems generative program at each and every instance. See also the cited reference Multi-scale Structure and Geographic Drivers of Cross-infection within Marine Bacteria and Phages in the ISME Journal (7/520, 2013), which describes a similar pattern for microbes.

The development of Open Source Software fundamentally depends on the participation and commitment of volunteer developers to progress. Several works have presented strategies to increase the on-boarding and engagement of new contributors, but little is known about how these diverse groups of developers self-organise to work together. To understand this, one must consider that, on one hand, platforms like GitHub provide a virtually unlimited development framework: any number of actors can potentially join to contribute in a decentralised, distributed, remote, and asynchronous manner. On the other, however, it seems reasonable that some sort of hierarchy and division of labour must be in place to meet human biological and cognitive limits, and also to achieve some level of efficiency.

Pandey, Lalit, et al. Parallel development of object recognition in newborn chicks and deep neural networks. PLoS Computational Biology. December, 2024. Indiana University informatics researchers including Justin and Samantha Wood describe a clear correspondence between these title phases of cognitive performance by way of a novel usage of digital twins and AI learning methods. As a result, a continuity can be traced between these computational and personal occasions. In this regard, here is one more instance where parallels can be drawn between AI procedures and young organisms (chicks and children). See also Parallel development of social behavior in biological and artificial fish in Nature Communications (15/1061, 2024) by this group. A further notice would then be how nature consistently uses the same pattern and process over and over everywhere.

How do newborns learn to see? We propose that visual systems are space-time fitters, meaning that visual development can be understood as a blind fitting process (akin to evolution) which gradually adapts to the spatiotemporal environment. To test whether space-time fitting is a viable theory, we performed parallel controlled-rearing experiments on newborn chicks and deep neural networks (DNNs), including CNNs and transformers. When DNNs received the same training data as chicks, the models developed common object recognition skills as chicks. We argue that space-time fitters can serve as scientific models of newborn visual systems. (Excerpt)
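To make the space-time fitting idea concrete, here is a minimal sketch, assuming PyTorch and a toy frame stream. It is a generic temporal self-supervised illustration, not the authors' published method; the encoder, loss, and data below are hypothetical stand-ins for the CNNs and transformers trained on chick-like visual input.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SmallEncoder(nn.Module):
        """Tiny CNN standing in for the DNNs trained on raw visual streams (hypothetical)."""
        def __init__(self, embed_dim=32):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(32, embed_dim)

        def forward(self, x):
            return F.normalize(self.fc(self.conv(x).flatten(1)), dim=1)

    def temporal_loss(z_t, z_next, temperature=0.1):
        # Frames adjacent in time are positives; other pairs in the batch act as negatives,
        # so representations are "fitted" to spatiotemporal continuity alone.
        logits = z_t @ z_next.t() / temperature
        return F.cross_entropy(logits, torch.arange(z_t.size(0)))

    encoder = SmallEncoder()
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
    frames = torch.rand(65, 3, 32, 32)   # toy stand-in for a continuous first-person video stream
    for step in range(10):
        loss = temporal_loss(encoder(frames[:-1]), encoder(frames[1:]))
        optimizer.zero_grad(); loss.backward(); optimizer.step()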
Pantcheva, Marina. How do LLMs and humans differ in the way they learn and use language? rws.com/blog/large-language-models-humans. A Senior Group Manager at RWS (see below) with a PhD in Theoretical Linguistics addresses this aspect with a list of several ways by which youngsters become talkative and informed. She then notes a general affinity between these personal learning methods and the algorithmic, iterative processes that form LLM content and capabilities.

The question of how children learn language is central to modern linguistics. Numerous contributions have sought to explain this process; here are a few:

Park, Sang Eon, et al. Quasi Anomalous Knowledge: Searching for New Physics with Embedded Knowledge. arXiv:2011.03550. This entry by MIT nuclear physicists is an example of how neural net machine methods can advance sub-atomic particle research.

Discoveries of new phenomena often involve a dedicated search for a hypothetical physics signature. Recently, novel deep learning techniques have emerged to detect anomalies, but they need more precision. Here we present a new strategy dubbed Quasi Anomalous Knowledge (QUAK), which can capture some of the salient features of physics signatures, allowing for the recovery of sensitivity even when signals vary. In this paper, we apply QUAK to anomaly detection of new physics events at the CERN Large Hadron Collider.
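As a rough illustration of this general approach (not QUAK itself, whose architecture and datasets are specific to the paper), the hedged sketch below trains a small autoencoder on background-like events and flags events with large reconstruction error as anomaly candidates; every name and number here is a hypothetical stand-in.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    n_features = 8                                        # stand-in for per-event kinematic features
    background = torch.randn(5000, n_features)            # Standard Model-like training events
    signal = torch.randn(50, n_features) * 0.5 + 3.0      # shifted, "anomalous" test events

    autoencoder = nn.Sequential(
        nn.Linear(n_features, 4), nn.ReLU(),
        nn.Linear(4, 2),                                  # bottleneck
        nn.Linear(2, 4), nn.ReLU(),
        nn.Linear(4, n_features),
    )
    optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

    for epoch in range(200):                              # train on background only
        loss = nn.functional.mse_loss(autoencoder(background), background)
        optimizer.zero_grad(); loss.backward(); optimizer.step()

    with torch.no_grad():
        score = lambda x: ((autoencoder(x) - x) ** 2).mean(dim=1)   # anomaly score
        print("background score:", score(background).mean().item())
        print("signal score:    ", score(signal).mean().item())     # expected to be larger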
Pedreschi, Dino, et al. Social AI and the Challenges of the Human-AI Ecosystem. arXiv:2306.13723. This is a significant contribution by sixteen senior scholars posted in Italy, the USA, Sweden, Austria, New Zealand, Greece, the UK and Germany, including Albert-Laszlo Barabasi, Alex Pentland, and Alessandro Vespignani, as an initial effort toward a working, practical integration of AI capabilities by way of 21st century nonlinear sciences, with stronger human intervention and guidance, and a dedicated program toward societal betterment.

The rise of large-scale socio-technical systems in which humans interact with artificial intelligences enables collective phenomena and tipping points, with unexpected, unintended consequences. In a positive way, we may foster the "wisdom of crowds" and beneficial effects to face social and environmental challenges. In order to understand the impact of AI on socio-technical systems and design better AIs that team with humans, we consider and scope out some early lineaments and case studies of Social AI at the intersection of Complex Systems, Network Science and AI. (Excerpt)

Pedreschi, Dino, et al. Social AI and the Challenges of the Human-AI Ecosystem. arXiv:2306.13723. Sixteen authorities from Italy, Greece, Sweden, New Zealand, Germany, the UK and the USA, including Albert Barabasi, Sandy Pentland and Alessandro Vespignani, post an urgent call for a thorough effort to rein in and get in front of this sudden computational prowess that is bursting upon us. But as Eric Schmidt said on TV in August, the problem is that we lack any philosophic basis as a guide to what is going on. As a main premise of this website, we could suggest the phenomenal semblance of an emergent global sapiensphere brain and its own accumulated knowledge.

Large-scale socio-technical systems in which humans interact with artificial intelligence (AI) often lead to social phenomena and tipping points with unexpected and unintended consequences. As a positive benefit, we may learn how to foster the "wisdom of crowds" and collective actions to face public and environmental challenges. In order to understand these effects and design next-generation AIs that team with humans to help overcome problems rather than exacerbate them, we propose a Foundations of Social AI project that joins Complex Systems, Network Science and AI. In this perspective paper, we discuss relevant questions, outline technical and scientific challenges and suggest research agendas. (Abstract)

Pfau, David, et al. Accurate computation of quantum excited states with neural networks. Science. 385/6711, 2024. We cite this paper by Google DeepMind, London computational scientists as an example of how AI neural net procedures are being readily applied to quantum phenomena, which in turn implies that this fundamental realm has an innate, analytic affinity with cerebral structures and facilities. See also Understanding quantum machine learning also requires rethinking generalization by Elies Gil-Fuster, et al in Nature Communications (15/2277, 2024) for another instance.

Excited states are important in many areas of physics and chemistry; however, scalable, accurate, and robust calculations of their properties from first principles remain a theoretical challenge. Recent advances in computing molecular systems driven by deep learning show much promise. Pfau et al. present a parameter-free mathematical approach that directly generalizes variational quantum Monte Carlo beyond ground states. The proposed method achieves accurate excited-state calculations for a number of atoms and molecules and can be applied to various quantum systems. (Editor Summary)

Ruggeri, Azzurra, et al. Preschoolers search longer when there is more information to be gained. Developmental Science. 27/1, 2024. Senior psychologists Azzurra Ruggeri, MPI Human Development, Oana Stanciu, Central European University, Madeline Pelz, MIT, Alison Gopnik, UC Berkeley and Eric Schulz, MPI Biological Cybernetics provide new insights into how children proactively seek and acquire knowledge, and recommend that the process would serve Large Language Models if it were written into their algorithms.

What drives children to explore and learn when external rewards are uncertain or absent? We tested whether information gain itself acts as an internal reward and suffices to motivate children's actions. We measured 24–56-month-olds' behavior in a game where they had to search for an object with uncertainty about which specific object was hidden. We found that children were more persistent in their search when there was higher ambiguity and more information to be gained. Our results highlight the importance of artificial intelligence research to invest in curiosity-driven algorithms. (Abstract)
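The information-gain idea can be expressed as a small toy model. The sketch below is an assumption-laden illustration, not the study's analysis: a searcher keeps checking locations as long as the expected entropy reduction about the hidden object stays above a threshold, so search lasts longer when more candidates, and hence more information, are in play.

    import math

    def entropy(p):
        return -sum(q * math.log2(q) for q in p if q > 0)

    def search_steps(n_candidates, gain_threshold=0.1):
        belief = [1.0 / n_candidates] * n_candidates      # uniform belief over hidden objects
        steps = 0
        while len(belief) > 1:
            p_found = belief[0]                           # chance the next check finds the object
            posterior = [q / (1 - p_found) for q in belief[1:]]
            expected_gain = entropy(belief) - (1 - p_found) * entropy(posterior)
            if expected_gain < gain_threshold:            # too little to learn: stop searching
                break
            steps += 1
            belief = posterior                            # object not found; keep going
        return steps

    # More ambiguity (more candidates) yields longer search, mirroring the reported finding.
    for n in (2, 4, 8):
        print(n, "candidates ->", search_steps(n), "steps")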
Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin, 2020. A timely volume by the senior UC Berkeley computer scientist authority, which serves as a basic guide to this disparate, frontier field. See his latest article If We Succeed in Daedalus for April 2022, along with a 2019 book and current articles by Melanie Mitchell.

Superhuman artificial intelligence is a tidal wave that threatens not just jobs and human relationships, but civilization itself. AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to accelerated scientific research. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage. Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. (Publisher)

Schmidhuber, Jurgen. Deep Learning in Neural Networks: An Overview. Neural Networks. 61/2, 2015. A technical tutorial by the University of Lugano, Switzerland expert on advances in artificial or machine learning techniques, based on how our own brains think. Sophisticated algorithms, multiple processing layers with complex structures, assignment paths, non-linear transformations, and so on are at work as they refer new experiences to prior representations for comparison. See also, for example, Semantics, Representations and Grammars for Deep Learning by David Balduzzi at arXiv:1509.08627. Our interest recalls recent proposals by Richard Watson, Eors Szathmary, et al to appreciate life's evolution as quite akin to a neural net, connectionist learning process.

Schneider, Susan. Artificial You: AI and the Future of Your Mind. Princeton: Princeton University Press, 2019. The NASA/Baruch Blumberg Chair at the Library of Congress and cultural communicator provides an accessible, perceptive survey of these diverse algorithmic augmentations as they rush in to reinvent, empower and maybe imperil persons and societies. Of especial interest is the chapter A Universe of Singularities in a Postbiological Cosmos, wherein it is surmised that a transfer by degrees from human beings (cyborgian) to myriad technological devices (a Computocene phase) will have occurred billions of times across the galaxies. It is then contended that this prospect needs to be factored into exolife searches.