Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Twintelligent Gaiable Knowledge

1. Earthificial Cumulative Cognizance: AI Large Language Models Learn Much Like a Child

Stevenson, Claire, et al. Do large language models solve verbal analogies like children do?. arXiv:2310.20384. University of Amsterdam psychologists including Ekaterina Shutova report another present recognition of a basic correspondence, as this title says, between how youngsters draw on commonalities and associations between items or situations and what these AI chatbot procedures also appear to be doing.


Analogy-making lies at the heart of human cognition. Adults solve analogies such as horse to stable and chicken to coop. In contrast, children use association, and answer egg. This paper investigates whether large language models (LLMs) can solve verbal analogies in A:B::C form, similar to what children do. We use analogies from an online learning environment, where 14,002 7-12 year-olds from the Netherlands solved 622 analogies in Dutch. We conclude that the LLMs we tested indeed tend to solve verbal analogies by association like children do. (Excerpt)

An important take-away from our study is that LLMs may solve analogies as well as 11 year-olds, but to ascertain whether this reasoning is emerging in these systems we need to know the mechanisms by which they obtain these comparisons. Our findings point towards associative processes in play, perhaps similar to those in children. (11)
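The associative-versus-relational distinction the study draws can be illustrated with a minimal sketch (ours, not the authors' code), using hand-made toy word vectors. A "relational" solver applies the A-to-B offset to C, while an "associative" answerer simply picks the candidate closest to C itself, the child-like shortcut described above.

```python
# Illustrative sketch, not the paper's method: two toy strategies for an
# A:B::C:? verbal analogy. The 3-d embeddings are hypothetical; dimensions
# loosely encode [animal-ness, dwelling-ness, food-ness].
import numpy as np

vocab = {
    "horse":   np.array([1.0, 0.1, 0.0]),
    "stable":  np.array([0.1, 1.0, 0.0]),
    "chicken": np.array([0.9, 0.1, 0.3]),
    "coop":    np.array([0.0, 0.9, 0.2]),
    "egg":     np.array([0.6, 0.0, 1.0]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve(a, b, c, candidates, mode):
    if mode == "relational":
        target = vocab[c] + (vocab[b] - vocab[a])   # apply the A->B relation to C
    else:
        target = vocab[c]                           # associative: nearest to C alone
    return max(candidates, key=lambda w: cosine(vocab[w], target))

print(solve("horse", "stable", "chicken", ["coop", "egg"], "relational"))   # coop
print(solve("horse", "stable", "chicken", ["coop", "egg"], "associative"))  # egg
```

With these toy vectors the relational strategy recovers coop, while the associative shortcut answers egg, mirroring the child-like pattern the study reports in LLMs.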

Strachan, James, et al. Testing theory of mind in large language models and humans. Nature Human Behaviour. May, 2024. Into 2024, twelve computational neuroscientists posted in Germany, Italy, the UK and USA begin to notice basic affinities between our own cerebral cognition and the perceptive capabilities of these nascent cyberspace faculties. See also The Platonic Representation Hypothesis by Minyoung Huh, et al at arXiv:2405.07987 and Predicting the next sentence (not word) in large language models by Shaoyun Yu, et al in Science Advances for May 2024. Altogether, as these many articles contend, a viable sense of a global brain as it envelops the biosphere becomes evident, for better or worse depending on how well we might understand and moderate it.

At the core of what defines us as humans is the concept of theory of mind: the ability to be aware of other people’s mental states. The development of large language models (LLMs) such as ChatGPT has raised the possibility that they exhibit behaviour similar to ours on theory of mind tasks. Here we compare human and LLM performance on tasks ranging from understanding false beliefs to interpreting indirect requests and recognizing irony. We found that GPT-4 models performed at human levels for indirect requests, false beliefs and misdirection, but struggled with faux pas. These findings show that LLMs are consistent with mentalistic inference in humans and highlight the need for testing to ensure valid comparisons between human and artificial intelligences. (Abstract)

As artificial intelligence (AI) continues to evolve, it also becomes increasingly important to heed calls for open science to these models. Direct access to the parameters, data and documentation used to construct models can allow for targeted probing and experimentation into the key parameters affecting social reasoning, informed by and building on comparisons with human data. As such, open models can not only serve to accelerate the development of future AI technologies but also serve as models of human cognition. (7)

Suleyman, Mustafa. The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma. New York: Crown, 2023. As the quotes say, a “life” guard is sounding the alarm that a tsunami is building as a computational, algorithmic, multitudinous prowess based on brains is poised to take off on its own. Impressive technologies of (genetic) life and of (artificial) intelligence are described, which presage a synthetic revolutionary frontier whence our human innovations and interventions have the potential to commence a new intentional phase of evolutionary cocreation. So the issue is whether the wave front can pass to our aware Earthropic ethical benefit, or sweep over us.

We are approaching a critical threshold in the history of our species. Soon you will live surrounded by AIs which will organize your life, operate your business, and run government services. This will involve DNA printers, quantum computers, autonomous weapons, robot assistants and abundant energy. As a co-founder of DeepMind, now part of Google, Mustafa Suleyman has been at the center of this revolution. The coming decade, he argues, will be defined by this wave of powerful, proliferating technologies. As our fragile governments often sleepwalk into disaster, we face unprecedented harms on one side and the threat of overbearing surveillance on the other. Can we forge a narrow path between catastrophe and dystopia?

Mustafa Suleyman is the CEO of Microsoft AI. Previously he co-founded and was the CEO of Inflection AI, and he also co-founded DeepMind, one of the world's leading AI companies.

Sun, Haiyang, et al. Brain-like Functional Organization within Large Language Models. arXiv:2410.19542. In later 2024, ten Northwestern Polytechnical University, China, University of Georgia, USA and Augusta University, GA researchers contribute to the current AI movement to appreciate these vast informational contents as if drawn from a relative global repository, along with efforts to bring their design and occasion into more intentional accord with human cerebral architectures and functions. See also Roles of LLMs in the Overall Mental Architecture by Ron Sun (RPI) at arXiv:2410.20037 for a similar project.

The human brain has long inspired the pursuit of artificial intelligence (AI). Recently, neuroimaging studies provide evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli, suggesting that ANNs may employ brain-like information processing strategies. In this study, we directly couple sub-groups of artificial neurons with functional brain networks (FBNs) as their organizational structure. Our findings reveal that LLMs exhibit a brain-like functional architecture, with sub-groups of artificial neurons mirroring the organizational patterns of well-established FBNs. Notably, the brain-like functional organization of LLMs evolves with increased sophistication and capability. (Excerpt)

Taylor, P., et al. The Global Landscape of Cognition: Hierarchical Aggregation as an Organizational Principle of Human Cortical Networks and Functions. Nature Scientific Reports. 5/18112, 2019. As the deep neural network revolution began via theory and neuroimaging, UMass Amherst neuroscientists including Hava Siegelmann attest to a nested connectome architecture which then serves cognitive achievements. On page 15, a graphic pyramid rises from a somatosensory, prosodic base through five stages to reason, language and visual concepts. Might one now imagine this scale as a personal ontogeny recap of life’s evolutionary sapient awakening? See Deep Neural Networks Abstract like Humans by Alex Gain and Hava Siegelmann at arXiv:1905.11515 for a 2019 version.

Tibbetts, John. The Frontiers of Artificial Intelligence. BioScience. 68/1, 2018. A science writer provides a good survey of how deep learning AI capabilities are lately being availed to much benefit worldwide in agricultural crop surveys, medical diagnostic image analysis, flora and fauna conservation, and more. Of course we need to be wary and careful, but ought to appreciate its many advantages.

Tosato, Tommaso, et al. Lost in Translation: The Algorithmic Gap Between LMs and the Brain. University of Montreal and Strungmann Institute for Neuroscience, Frankfurt researchers propose a series of parsed programs to better cross-align computational text with our intricate vernaculars. See also Building Artificial Intelligence with Creative Agency by Liane Gabora and Joscha Bach at arXiv:2407.10978 for a similar endeavor by way of autocatalytic networks.

Language Models (LMs) have achieved impressive performance on linguistic tasks, but their relation to human language processing remains unclear. This paper examines pros and cons between LMs and the brain at different levels so as to compare their internal efficacy. We discuss how insights from neuroscience such as sparsity, modularity, internal states, and interactive learning can inform the development of more biologically plausible language models. The role of scaling laws is seen as an analogous way to bridge these loquacious systems. By developing LMs that more closely align with brain function, we aim to advance both artificial intelligence and our understanding of human cognition. (Abstract)

Tsvetkova, Milena, et al. A New Sociology of Humans and Machines. Nature Human Behaviour. 8/1864, 2024. London School of Economics and Political Science, University College Dublin, New Jersey Institute of Technology and MPI Human Development systems sociologists post a comprehensive tutorial along with pathways going forward so as to get in front of an inevitable hybrid reality. In regard, this currency well signifies the worldwise manifest ascent of our dual Earthuman futurity.

From fake social media accounts and generative artificial intelligence to trading algorithms, chatbots are permeating our communication channels, social interactions, economic business and transportation arteries. Networks of multiple interdependent humans and intelligent machines constitute complex media for which the collective outcomes cannot be deduced in advance. We review recent research and identify dynamic patterns in competition, coordination, cooperation, contagion and decision-making. Researchers need to apply complex systems methods; engineers need to design AI for human–machine and machine–machine symbiosis; and regulators should govern and guide their co-development.

Tuckute, Greta, et al. Language in Brains, Minds, and Machines. Annual Review of Neuroscience. Volume 47, 2024. MIT neurolinguists including Evelina Fedorenko provide a range of current insights, concerns and appreciations as Large Language versions come into our knowsphere content. See also Elements of World Knowledge (EWOK): A cognition-inspired framework for evaluating basic world knowledge in language models by this group at arXiv:2405.09605.

It has long been argued that only humans could produce and understand language. But now, for the first time, artificial language models (LMs) achieve this feat. Here we survey the new purchase LMs are providing on the question of how language is implemented in the brain. We summarize evidence that LMs represent linguistic information similarly enough to humans to enable relatively accurate brain encoding and decoding during language processing. Finally, we examine which LM properties—their architecture, task performance, or training—are critical for capturing human neural responses to language.

Tzachor, Asaf, et al. Artificial Intelligence in a Crisis Needs Ethics with Urgency. Nature Machine Intelligence. 2/365, 2020. Cambridge University, Centre for the Study of Existential Risk and Centre for the Future of Intelligence scholars weigh in by saying that while the COVID pandemic can be well studied by AI so as to better analyze its spread, track movements and so on, such use needs to be scrutinized and guided by respectful methods.

Artificial intelligence tools can help save lives in a pandemic. However, the need to implement technological solutions rapidly leads to challenging ethical issues. We need new approaches for ethics with urgency, to ensure AI can be safely and beneficially used in the COVID-19 response and beyond. (Abstract)

Vaidya, Satyarth, et al. Brief Review of Computational Intelligence Algorithms. arXiv:1901.00983. Birla Institute of Technology and Science, Pilani, Dubai Campus computer scientists survey a wide array of brain-based and nature-inspired algorithmic methods, along with showing how these are finding service in far-afield domains from geology to cerebral phenomena.

Computational Intelligence algorithms have been found to deliver near optimal solutions. In this paper we propose a new hierarchy which classifies algorithms based on their sources of inspiration. The algorithms have two broad domains, namely modeling of human mind and nature inspired intelligence. Algorithms of modeling of human mind take their motivation from the manner in which humans perceive and deal with information. Similarly, algorithms of nature inspired intelligence are based on ordinary phenomena occurring in nature. The latter has further been broken into swarm intelligence, geosciences and artificial immune system. (Abstract)
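The abstract's two-branch scheme can be written out as a small tree. The branch names follow the paper's wording; the leaf algorithms named are common examples of each class, our illustrative additions rather than a list from the paper.

```python
# Sketch of the proposed classification hierarchy. Top branches and the
# three nature-inspired sub-branches follow the abstract; leaf algorithms
# are common examples added here for illustration.
hierarchy = {
    "modeling of human mind": [
        "artificial neural networks",
        "fuzzy logic systems",
    ],
    "nature inspired intelligence": {
        "swarm intelligence": ["particle swarm optimization", "ant colony optimization"],
        "geosciences": ["water cycle algorithm"],
        "artificial immune system": ["clonal selection algorithm"],
    },
}

def leaves(node):
    """Depth-first list of all algorithms in the hierarchy."""
    if isinstance(node, list):
        return list(node)
    return [algo for child in node.values() for algo in leaves(child)]

print(leaves(hierarchy))  # the six example algorithms, depth-first
```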

VanRullen, Rufin and Ryota Kanai. Deep Learning and the Global Workspace Theory. Trends in Neurosciences. June, 2021. CNRS France and University of Toulouse neuroscholars propose to avail this vital cognitive feature of our cerebral facility so as to achieve a more brain-based AI. In a general sense, as below, it is a common workspace where an active array of data, thoughts, facts and so on are gathered for perusal and consideration.

Recent advances in deep learning have allowed artificial intelligence (AI) to reach near human-level performance in many sensory, perceptual, linguistic, and cognitive tasks. There is a growing need, however, for novel, brain-inspired cognitive architectures. The Global Workspace Theory (GWT) refers to a large-scale system which integrates and distributes information among networks of specialized modules to create higher-level forms of cognition and awareness. Accordingly, we propose that implementations of this theory ought to be availed for AI using deep-learning techniques. (Abstract excerpt)

Global workspace theory (GWT) is a cognitive architecture that is meant to account qualitatively for a large set of matched pairs of conscious and unconscious processes. GWT resembles the concept of working memory, and corresponds to the inner domain of inner speech and visual imagery in which we carry on the narrative of our lives. (Wikipedia)
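The core workspace dynamic described above, specialized modules competing for access and the winning content being broadcast back to all of them, can be caricatured in a few lines. This is a toy illustration of the general idea, not VanRullen and Kanai's model; the salience scoring is a hypothetical stand-in.

```python
# Toy Global Workspace sketch: each module proposes content with a salience
# score; the workspace admits the most salient proposal and broadcasts it
# back to every module.
from dataclasses import dataclass, field

@dataclass
class Module:
    name: str
    received: list = field(default_factory=list)

    def propose(self, stimulus):
        # Salience here is just how often the stimulus mentions this
        # module's name -- a placeholder for any real scoring function.
        score = sum(1 for word in stimulus.split() if word == self.name)
        return (f"{self.name}:{stimulus}", score)

    def receive(self, content):
        self.received.append(content)

def workspace_cycle(modules, stimulus):
    proposals = [m.propose(stimulus) for m in modules]
    winner, _ = max(proposals, key=lambda p: p[1])   # competition for access
    for m in modules:                                # global broadcast
        m.receive(winner)
    return winner

mods = [Module("vision"), Module("audio"), Module("language")]
print(workspace_cycle(mods, "vision vision audio"))  # the vision module wins
```

After one cycle every module, including the losers, holds the broadcast content, which is the "integrate and distribute" step the theory emphasizes.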
