Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Palazzi, Maria, et al. Online Division of Labour: Emergent Structures in Open Source Software. arXiv:1903.03375. Internet Interdisciplinary Institute, Open University of Catalonia computer theorists report that even group-wide developments of computational codes can be seen to take on and follow the same common course as other organic assemblies. A further implication, we add, would be another perception that cosmic evolutionary nature draws upon and repeats this same complex adaptive systems generative program at each and every instance. See also the cited reference Multi-scale Structure and Geographic Drivers of Cross-infection within Marine Bacteria and Phages in the ISME Journal (7/520, 2013), which describes a similar pattern for microbes.

The development of Open Source Software fundamentally depends on the participation and commitment of volunteer developers to progress. Several works have presented strategies to increase the on-boarding and engagement of new contributors, but little is known about how these diverse groups of developers self-organise to work together. To understand this, one must consider that, on one hand, platforms like GitHub provide a virtually unlimited development framework: any number of actors can potentially join to contribute in a decentralised, distributed, remote, and asynchronous manner. On the other, however, it seems reasonable that some sort of hierarchy and division of labour must be in place to meet human biological and cognitive limits, and also to achieve some level of efficiency.

These latter features (hierarchy and division of labour) should translate into recognisable structural arrangements when projects are represented as developer-file bipartite networks. In this paper we analyse a set of popular open source projects from GitHub, placing the accent on three key properties: nestedness, modularity and in-block nestedness, which typify the emergence of heterogeneities among contributors, the emergence of subgroups of developers working on specific subgroups of files, and a mixture of the two previous, respectively. These analyses show that indeed projects evolve into internally organised blocks. (Abstract excerpts)

To answer these questions, we will look at three structural arrangements which have been identified as signatures of self-organisation in both natural and artificial systems: nestedness (i.e. do projects evolve in a way such that the emergence of generalists and specialists is favoured?); modularity (i.e. do OSS projects split into identifiable compartments, thus avoiding Brooks' law despite the addition of contributors? Are these compartments bounded?); and in-block nestedness (i.e. if bio-cognitive limits and division of labour are in place, do the resulting specialised modules self-organise internally?) (2)
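As an informal aside, the first of these signatures can be computed directly once a project is cast as a binary developer-file matrix. Below is a minimal Python sketch of a simplified NODF-style nestedness score; the function and the toy matrix are our own hypothetical illustration, not the authors' code or data.

```python
def nodf(matrix):
    """Simplified NODF-style nestedness score in [0, 1] for a binary matrix.
    Averages pairwise overlap across rows and across columns; pairs with
    equal fill contribute zero, as in the standard NODF definition."""
    def axis_score(rows):
        sets = [set(i for i, v in enumerate(r) if v) for r in rows]
        pairs = []
        for a in range(len(sets)):
            for b in range(a + 1, len(sets)):
                hi, lo = sets[a], sets[b]
                if len(hi) == len(lo):
                    pairs.append(0.0)          # equal fill: no contribution
                else:
                    if len(hi) < len(lo):
                        hi, lo = lo, hi        # hi = the fuller of the two
                    pairs.append(len(hi & lo) / len(lo))
        return sum(pairs) / len(pairs) if pairs else 0.0

    cols = list(zip(*matrix))
    return (axis_score(matrix) + axis_score(cols)) / 2

# Hypothetical developer-file matrix: rows = developers, columns = files.
M = [
    [1, 1, 1, 1],   # a generalist touching every file
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],   # a specialist on one core file
]
print(nodf(M))  # perfectly nested hierarchy -> 1.0
```

A score of 1.0 marks a perfectly nested hierarchy of generalists and specialists; scores near 0 indicate no such ordering.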

Pantcheva, Marina. How do LLMs and humans differ in the way they learn and use language? rws.com/blog/large-language-models-humans. A Senior Group Manager at RWS (see below) with a PhD in Theoretical Linguistics addresses this aspect with a list of several ways by which youngsters become talkative and informed. She then notes a general affinity between these personal learning methods and the algorithmic, iterative processes that form LLMs' content and capabilities.

The question of how children learn language is central to modern linguistics. Numerous contributions have sought to explain this process; here are a few:

Social interactionist theory suggests that feedback and corrections play a pivotal role in language acquisition, along with dialogue between the child and the linguistic adults.

Behaviorist theory posits that children learn language by mimicking those around them and receiving positive reinforcement for their endeavors.

Statistical learning theory proposes that children use the natural statistical properties of language to deduce its deep structure such as sound patterns, words, and grammar.
Universal grammar theory argues for the existence of constraints on what human language can look like. In essence, children possess an innate biological component that enables their rapid development of language. (MP)
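As a hedged aside, the statistical learning account above lends itself to a toy computation. In this hypothetical sketch (our own, not drawn from the blog post), transitional probabilities between syllables are higher inside invented "words" than across their boundaries, which is one cue infants are thought to exploit.

```python
from collections import Counter

# Hypothetical toy speech stream: within-word transitions (ba->bi, bi->bu)
# recur reliably, while cross-word transitions vary.
stream = "babibu golatu babibu tipolu babibu golatu".split()
syllables = [w[i:i + 2] for w in stream for i in range(0, len(w), 2)]

pairs = Counter(zip(syllables, syllables[1:]))   # adjacent syllable pairs
firsts = Counter(syllables[:-1])                 # how often each syllable leads

def transition_prob(a, b):
    """P(next syllable = b | current syllable = a)."""
    return pairs[(a, b)] / firsts[a]

print(transition_prob("ba", "bi"))  # within a word: 1.0
print(transition_prob("bu", "go"))  # across a word boundary: lower
```

The contrast between the two probabilities is exactly the statistical regularity the theory says children deduce from raw input.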

Genuine Intelligence (GI). Generative AI and Large Language Models are redefining the boundaries of language and content transformation. GI is not just AI and people working together; it is a symbiotic blend of AI's computational capacity with human insight and creativity. RWS is a UK-based global company that transforms content through translation, localization and AI technology blended with human expertise.

Park, Sang Eon, et al. Quasi Anomalous Knowledge: Searching for New Physics with Embedded Knowledge. arXiv:2011.03550. This entry by MIT nuclear physicists is an example of how neural net machine methods can advance sub-atomic particle research.

Discoveries of new phenomena often involve a dedicated search for a hypothetical physics signature. Recently, novel deep learning techniques have emerged to detect anomalies but need more precision. Here we present a new strategy dubbed Quasi Anomalous Knowledge (QUAK) which can capture some of the salient features of physics signatures, allowing for the recovery of sensitivity even when signals vary. In this paper, we apply QUAK to anomaly detection of new physics events at the CERN Large Hadron Collider.

Pedreschi, Dino, et al. Social AI and the Challenges of the Human-AI Ecosystem. arXiv:2306.13723. This is a significant contribution by sixteen senior scholars posted in Italy, the USA, Sweden, Austria, New Zealand, Greece, the UK and Germany, including Albert-Laszlo Barabasi, Alex Pentland, and Alessandro Vespignani, as an initial effort toward a working, practical integration of AI capabilities by way of the 21st century nonlinear sciences, with stronger human intervention and guidance and a dedicated program toward societal betterment.

The rise of large-scale socio-technical systems in which humans interact with artificial intelligences enables collective phenomena and tipping points, with unexpected, unintended consequences. In a positive way, we may foster the "wisdom of crowds" and beneficial effects to face social and environmental challenges. In order to understand the impact of AI on socio-technical systems and design better AIs that team with humans, we consider and scope out some early lineaments and case studies of Social AI at the intersection of Complex Systems, Network Science and AI. (Excerpt)

Social AI is emerging at the crossroads of Complex Systems, Network Science and AI, and poses an array of open scientific and technical challenges. Network phenomena provide us with tools to understand the complexity of social systems, while AI provides us with new technological abilities that, together with norms and policy, may help us steer our social systems towards agreed sustainable development goals. Social AI, as the combination and synthesis of these approaches, is a novel way to achieve a conceptual framework for a next-generation AI that transparently serves human facilitation so as to overcome problems rather than exacerbate them. (10)

Pedreschi, Dino, et al. Social AI and the Challenges of the Human-AI Ecosystem. arXiv:2306.13723. Sixteen authorities from Italy, Greece, Sweden, New Zealand, Germany, the UK and USA including Albert Barabasi, Sandy Pentland and Alessandro Vespignani post an urgent call for a thorough effort to rein in and get in front of this sudden computational prowess that is bursting upon us. But as Eric Schmidt said on TV in August, the problem is that we lack any philosophic basis as a guide to what is going on. As a main premise of this website, we could suggest the phenomenal semblance of an emergent global sapiensphere brain and its own accumulated knowledge.

Large-scale socio-technical systems in which humans interact with artificial intelligence (AI) often lead to social phenomena and tipping points with unexpected and unintended consequences. As a positive benefit, we may learn how to foster the "wisdom of crowds" and collective actions to face public and environmental challenges. In order to understand these effects and issues, and to design next-generation AIs that team with humans to help overcome problems rather than exacerbate them, we propose a Foundations of Social AI project that joins Complex Systems, Network Science and AI. In this perspective paper, we discuss relevant questions, outline technical and scientific challenges and suggest research agendas. (Abstract)

Ruggeri, Azzurra, et al. Preschoolers search longer when there is more information to be gained. Developmental Science. 27/1, 2024. Senior psychologists AR, MPI Human Development, Oana Stanciu, Central European University, Madeline Pelz, MIT, Alison Gopnik, UC Berkeley and Eric Schulz, MPI Biological Cybernetics provide new insights into how children proactively seek and acquire knowledge, and then recommend that the process would serve Large Language Models if it were written into their algorithms.

What drives children to explore and learn when external rewards are uncertain or absent? We tested whether information gain itself acts as an internal reward and suffices to motivate children's actions. We measured 24–56-month-olds' behavior in a game where they had to search for an object with uncertainty about which specific object was hidden. We found that children were more persistent in their search when there was higher ambiguity and more information to be gained. Our results highlight the importance of artificial intelligence research to invest in curiosity-driven algorithms. (Abstract)

All in all, these findings consolidate our understandings of children's motivation to learn and explore, and have strong implications for developmental psychology and artificial intelligence. The results are consistent with a theory of children's exploration and learning driven by uncertainty reduction. From an artificial intelligence view, they lend further support to the idea that to build computational machines that learn like children, one should build curiosity-based systems and design algorithms motivated by the underlying expected IG (information gain) of their actions. (6)
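As an illustrative aside, the expected information gain invoked above can be written out in a few lines. This is a hypothetical toy of our own, not the study's model: the gain of one peek is the prior entropy minus the expected posterior entropy over that peek's outcomes.

```python
import math

def entropy(probs):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(prior, likelihoods):
    """Expected entropy reduction from one observation.
    prior[h] = P(hypothesis h); likelihoods[h][o] = P(outcome o | h)."""
    h_prior = entropy(prior)
    eig = 0.0
    for o in range(len(likelihoods[0])):
        p_o = sum(prior[h] * likelihoods[h][o] for h in range(len(prior)))
        if p_o == 0:
            continue
        posterior = [prior[h] * likelihoods[h][o] / p_o
                     for h in range(len(prior))]
        eig += p_o * (h_prior - entropy(posterior))
    return eig

# Four equally likely hiding spots; peeking in spot 0 yields
# "found" vs "not found".
prior = [0.25] * 4
look_in_spot_0 = [[1, 0]] + [[0, 1]] * 3
print(expected_information_gain(prior, look_in_spot_0))
```

With four equally likely hiding spots, one peek yields about 0.81 bits against the full 2 bits of prior uncertainty, so further search remains rewarding, which is the gradient the curiosity-driven account says children follow.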

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin, 2020. A timely volume by the senior UC Berkeley computer scientist authority which serves as a basic guide for this disparate frontier field. See his latest article If We Succeed in Daedalus (April 2022), along with a 2019 book and current articles by Melanie Mitchell.

Superhuman artificial intelligence is a tidal wave that threatens not just jobs and human relationships, but civilization itself. AI researcher Stuart Russell argues that this scenario can be avoided, but only if we rethink AI from the ground up. Russell begins by exploring the idea of intelligence in humans and in machines. He describes the near-term benefits we can expect, from intelligent personal assistants to accelerated scientific research. He also spells out the ways humans are already finding to misuse AI, from lethal autonomous weapons to viral sabotage. Russell suggests that we can rebuild AI on a new foundation, according to which machines are designed to be inherently uncertain about the human preferences they are required to satisfy. (Publisher)

Since its inception, AI has operated within a standard model whereby systems are designed to optimize a fixed, known objective. This model has been increasingly successful. I briefly summarize the state of the art and its likely evolution over the next decade. At the same time, the standard model will become progressively untenable in real-world applications because of the difficulty of specifying objectives completely and correctly. I propose a new model for AI development in which the machine’s uncertainty about the true objective leads to qualitatively new modes of behavior that are more robust, controllable, and deferential. (Article)

Schmidhuber, Jurgen. Deep Learning in Neural Networks: An Overview. Neural Networks. 61/2, 2015. A technical tutorial by the University of Lugano, Switzerland expert upon advances in artificial or machine learning techniques, based on how our own brains think. Sophisticated algorithms, multiple processing layers with complex structures, assignment paths, non-linear transformations, and so on are at work as they refer new experiences to prior representations for comparison. See also, for example, Semantics, Representations and Grammars for Deep Learning by David Balduzzi at arXiv:1509.08627. Our interest recalls recent proposals by Richard Watson, Eors Szathmary, et al to appreciate life’s evolution as quite akin to a neural net, connectionist learning process.

Schneider, Susan. Artificial You: AI and the Future of Your Mind. Princeton: Princeton University Press, 2019. The NASA/Baruch Blumberg Chair at the Library of Congress and cultural communicator provides an accessible, perceptive survey of these diverse algorithmic augmentations as they rush in to reinvent, empower and maybe imperil persons and societies. Of especial interest is the chapter A Universe of Singularities in a Postbiological Cosmos, whence it is assumed that something like the possible transfer (take over) by degrees from human beings (cyborgian) to myriad technological devices (Computocene) phase will have occurred by the billions across the galaxies. It is then contended that this occasion need be factored into exolife searches.

Schuchardt, Jan, et al. Learning to Evolve. arXiv:1905.03389. Technical University of Munich informatics researchers advance ways to employ evolution-based algorithms, which in turn show how life’s long development can appear as a computational process. From our late vantage, it may seem that a cosmic genesis needs to pass on this genetic-like agency to our own continuance.

Evolution and learning are two of the fundamental mechanisms by which life adapts in order to survive and to transcend limitations. These biological phenomena inspired successful computational methods such as evolutionary algorithms and deep learning. Evolution relies on random mutations and on random genetic recombination. Here we show that learning to evolve, i.e. learning to mutate and recombine better than at random, improves the result of evolution in terms of fitness increase per generation and even in terms of attainable fitness. We use deep reinforcement learning to learn to dynamically adjust the strategy of evolutionary algorithms to varying circumstances. (Abstract)
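The paper's deep reinforcement learning controller is beyond a short excerpt, but the underlying idea of dynamically adjusting an evolutionary algorithm's strategy can be suggested by a toy (1+1) scheme whose mutation rate adapts online. All names, settings and the adaptation rule here are our own hypothetical illustration, not the authors' method.

```python
import random

def one_max(bits):
    """Toy fitness: count of 1s in the genome."""
    return sum(bits)

def adaptive_evolve(n_bits=32, generations=300, seed=0):
    """(1+1) evolutionary algorithm with an online-adapted mutation rate:
    nudge the rate up after a failed generation (explore more) and down
    after a success, a crude hand-coded stand-in for a learned controller."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    rate = 1.0 / n_bits
    for _ in range(generations):
        child = [b ^ (rng.random() < rate) for b in parent]
        if one_max(child) >= one_max(parent):   # elitist acceptance
            parent = child
            rate = max(rate / 1.1, 1.0 / (4 * n_bits))
        else:
            rate = min(rate * 1.05, 0.5)
    return parent

best = adaptive_evolve()
print(one_max(best))
```

Because a child replaces its parent only when fitness does not decrease, progress is monotone; the interest lies in how quickly an adjusted, rather than fixed, mutation strategy climbs.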

Schuman, Catherine, et al. A Survey of Neuromorphic Computing and Neural Networks in Hardware. arXiv:1705.06963. Oak Ridge Laboratory and University of Tennessee researchers provide a copious review of progress from years of machine computation to this novel advance that artificially avails the way our own iterative brains so adeptly recognize shapes and patterns. The moniker neuromorphic alludes to how readily we can recognize and say cat or car.

Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history.
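A basic building block of such brain-inspired hardware is the spiking neuron. Here is a minimal software sketch (our own hypothetical toy; real neuromorphic systems realize this in analog or digital circuits) of a leaky integrate-and-fire unit, whose membrane voltage leaks toward rest while integrating input, and fires when it crosses threshold.

```python
def lif_spike_times(current, steps=100, dt=1.0, tau=10.0,
                    v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron (Euler integration): voltage decays
    toward 0 with time constant tau while integrating a constant input
    current; a spike is recorded and the voltage reset at threshold."""
    v, spikes = 0.0, []
    for t in range(steps):
        v += dt * (-v / tau + current)   # leak term plus drive
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes

print(lif_spike_times(0.15)[:3])  # regular firing -> [10, 21, 32]
```

A constant drive of 0.15 produces regular spikes; a weaker input leaks away before reaching threshold and the unit stays silent, which is the event-driven sparseness that neuromorphic designs exploit.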

Seif, Alireza, et al. Machine Learning the Thermodynamic Arrow of Time. Nature Physics. 17/1, 2021. We cite this entry by University of Maryland physicists, including Chris Jarzynski, as an example of how these 2020s bio-based neural net techniques, which run iterative programmed computations, can serve as an advanced spiral stage of worldwise scientific studies. In this case, the old arrow of time problem gains a new depth of understanding which was heretofore inaccessible.

The asymmetry in the flow of events that is expressed as “time’s arrow” traces back to the second law of thermodynamics. In the microscopic regime, fluctuations prevent us from discerning the direction of time with certainty. Here, we find that a machine learning algorithm trained to infer the direction of time’s arrow identifies entropy production as the relevant physical quantity in its decision-making process. The algorithm rediscovers the fluctuation theorem as the prime thermodynamic principle. Our results indicate that machine learning methods can be used to study out of equilibrium systems and begin to uncover deep physical principles. (Abstract)
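The study trains a neural network on simulated trajectories; as a hand-made stand-in (our own hypothetical toy, not the authors' method), even a single dissipation-like statistic can tell a dragged particle's forward trajectory from its time reversal, which is the physical content the trained network rediscovers.

```python
import random

def trajectory(n=200, drift=0.05, noise=1.0, seed=None):
    """Toy overdamped particle dragged with constant drift plus
    small thermal noise; returns the sampled positions."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n):
        x += drift + rng.gauss(0, noise) * 0.1
        path.append(x)
    return path

def arrow_score(path):
    """Net displacement per step: positive for the dragged ('forward in
    time') direction, and proportional to the dissipation along the path."""
    return (path[-1] - path[0]) / (len(path) - 1)

fwd = trajectory(seed=1)
print(arrow_score(fwd) > 0)        # forward trajectory scores positive
print(arrow_score(fwd[::-1]) < 0)  # its time reversal scores negative
```

In the genuinely microscopic regime the fluctuations would sometimes flip the sign of such a statistic, which is exactly the uncertainty the abstract describes and the fluctuation theorem quantifies.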
