Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

Recent Additions: New and Updated Entries in the Past 60 Days
Displaying entries 1 through 15 of 96 found.


Planatural Genesis: A Phenomenal, PhiloSophia, Propaedeutic, TwinKinder, PersonVerse Endeavor

The Genesis Vision > News

Bialek, William. Moving Boundaries: An Appreciation of John Hopfield. arXiv:2412.18030. As a junior Princeton colleague of the new physics laureate, honored for his initial 1982 conception of neural networks, Bialek first reminisces and then turns to 2025 physics frontiers as the field proceeds to realize an actual cognitive vitality.

The 2024 Nobel Prize in Physics was awarded to John Hopfield and Geoffrey Hinton, "for foundational discoveries and inventions that enable machine learning with artificial neural networks." As noted by the Nobel committee, their work moved the boundaries of physics. This is a brief reflection on Hopfield's work, its implications for the emergence of biological physics as a part of physics, the path from his early papers to the modern revolution in artificial intelligence, and prospects for the future. (Abstract)

What is physics? The central idea is that the world is understandable, that you should be able to take anything apart, understand the relationships between its constituents, do experiments, and on that basis be able to develop a quantitative understanding of its behavior. Physics was a point of view that the world around us is, with effort, ingenuity, and adequate resources, understandable in a predictive and quantitative fashion. (10)
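
To make the 1982 conception concrete, here is a minimal sketch of a Hopfield-style attractor network: binary neurons, Hebbian storage, and asynchronous updates that descend an energy function toward a stored memory. This is a generic textbook illustration in Python, not code from Bialek's commentary.

    import numpy as np

    def hebbian_weights(patterns):
        # Store binary (+1/-1) patterns in a symmetric weight matrix (Hebbian rule).
        n = patterns.shape[1]
        W = np.zeros((n, n))
        for p in patterns:
            W += np.outer(p, p)
        np.fill_diagonal(W, 0)              # no self-connections
        return W / len(patterns)

    def recall(W, state, steps=200, seed=0):
        # Asynchronous updates lower the energy E = -1/2 s.W.s until the
        # state settles into the nearest stored attractor (the "memory").
        rng = np.random.default_rng(seed)
        s = state.copy()
        for _ in range(steps):
            i = rng.integers(len(s))
            s[i] = 1 if W[i] @ s >= 0 else -1
        return s

    # Store one pattern, corrupt two bits, and watch the network repair it.
    pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
    W = hebbian_weights(pattern[None, :])
    cue = pattern.copy(); cue[:2] *= -1
    print(recall(W, cue))                   # typically recovers the stored pattern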

The Genesis Vision > News

Kelso, Scott and David Engstrøm. The Squiggle Sense: The Complementary Nature and the Metastable Brain~Mind. Switzerland: Springer, 2024. Some 18 years after their first The Complementary Nature edition, the Florida Atlantic University veteran scholars draw on many intervening advances in complex system studies to presently embellish and affirm this innate, whole-scale coincidence of opposites. Their theoretic and empirical basis is now identified as Coordination Dynamics (several references below). Typical chapters such as Coordination Dynamics and the Complementary Code, Pattern Dynamics and Dynamic Patterns, Individual and Collective, Synchronization and Syncopation, and Polarization and Reconciliation consider its scientific and philosophical aspects and attributes.

In actual regard, the authors have achieved a strong recognition of a universal self-organized segregate/integrate criticality which is desperately needed today. For example, see Democracy and Wisdom by Kelso in Portugali 2023 (search). For the record, I received an email from David E. saying that my Natural Genesis review of their 2006 book was the best appreciation of their work that they had seen. Here next are some supporting articles.

Hancock, Fran, et al. Metastability Demystified—The Foundational Past, the Pragmatic Present, and the Potential Future. Preprints.org: preprints202307.1445.v1, 2023.

Kelso, Scott. The Haken–Kelso–Bunz model: from matter to movement to mind. Biological Cybernetics. 115/305, 2021.

Zhang, Mengsen, et al. Topological portraits of multiscale coordination dynamics. Journal of Neuroscience Methods. 339/108672, 2020.

Tognoli, Emmanuelle, et al. Coordination dynamics: A foundation for understanding social behavior. Frontiers in Human Neuroscience. 14/317, 2020.

Fields, Chris and Michael Levin. On the complementarity between objects and processes. Physics of Life Reviews. January 2025.

Our public penchant for polar either/or thinking is a major block to human development and understanding. In this book Kelso & Engstrøm offer a whole new way of looking at the world which draws on nature’s many complementary contraries and the new science of Coordination Dynamics. In fifty brief, topical chapters, the human brain~mind is seen to give rise to a sentient faculty called the squiggle sense whereby opposites are perceived as coexisting, metastable, reciprocal tendencies. (Book)

Hopefully, these Squiggle frames will help you engage your squiggle sense and take the complementary nature to heart. Their gist and root are a Nature, including human nature, which is essentially complementary and grounded in a Coordination Dynamics which arises from and operates in this metastable mode. (Last chapter)
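
For readers who want the formal core of Coordination Dynamics, the extended Haken–Kelso–Bunz relative-phase equation (see Kelso 2021 above) is its standard statement; it is quoted here as background, not from the book itself:

    dφ/dt = δω − a·sin(φ) − 2b·sin(2φ)

Here φ is the relative phase between two coordinated elements, δω is their intrinsic frequency difference (the symmetry-breaking term), and a and b set the coupling strengths. When δω is small, stable states near φ = 0 (in-phase) and φ = π (anti-phase) coexist; as δω grows the fixed points disappear, yet their remnants still transiently attract the system. That regime is the metastability of the book's title, where integrative and segregative tendencies coexist rather than one winning outright.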

Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Twintelligent Knowledge

A Learning Planet > The Spiral of Science

Zheng, Yizhen, et al. Large language models for scientific discovery in molecular property prediction. Nature Machine Intelligence. February 23, 2025. We cite this entry by seven Monash University computer scientists led by Geoffrey Webb to convey how researchers are assimilating AI facilities so as to achieve an effective human foresight and machine learning symbiosis. In addition to biochemistry, further usages for physical chemistry, quantum mechanics, physiology, and biophysics are illustrated. See also Generative AI as a tool to accelerate the field of ecology by Kasim Rafiq, et al in Nature Ecology & Evolution (February 2025) for similar syntheses.

Large language models (LLMs) are AI systems which contain vast knowledge in the form of natural language. Although LLMs have seen wide usage, their potential for scientific discovery remains largely unexplored. In this work, we introduce LLM4SD, custom designed for molecular property prediction by synthesizing information from the literature and from data. By using these features with interpretable models, LLM4SD can derive sensible rules by which to transform molecules into informative feature vectors. Our results show improvements across a range of benchmarks for predicting molecular properties. (Excerpt)
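
As a rough illustration of the pipeline sketched in this excerpt, the toy Python below mimics the general pattern: rules of the kind an LLM might propose from the literature are turned into an interpretable feature vector for each molecule, and a transparent model is fit on those features. The rule set, helper functions, and Ridge regressor are illustrative assumptions, not the LLM4SD code.

    import numpy as np
    from sklearn.linear_model import Ridge

    # Hand-written stand-ins for rules an LLM might extract from the literature,
    # applied to molecules given as SMILES strings.
    llm_rules = [
        lambda s: float(s.count("N")),   # e.g. "nitrogen count tracks basicity"
        lambda s: float(s.count("O")),   # e.g. "oxygen count tracks polarity"
        lambda s: float(len(s)),         # crude proxy for molecular size
    ]

    def featurize(smiles_list):
        # Apply every rule to every molecule -> interpretable feature matrix.
        return np.array([[rule(s) for rule in llm_rules] for s in smiles_list])

    # Toy training set: SMILES strings with made-up property values.
    train_smiles = ["CCO", "CCN", "CCCC", "CC(=O)O"]
    train_y = np.array([0.2, 0.5, 0.1, 0.4])

    model = Ridge(alpha=1.0).fit(featurize(train_smiles), train_y)
    print(model.predict(featurize(["CCCO"])))    # predict an unseen molecule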

A Learning Planet > The Spiral of Science

Barman, Kristian, et al. Large Physics Models: Towards a collaborative approach with Large Language Models and Foundation Models. arXiv:2501.05382. We cite this entry by twenty-two investigators in the Netherlands, Spain, Germany, Switzerland and Austria because it describes a science spiral practice that blends the titular array of AI neural net procedures. In regard, into the 2020s, global research projects could then be seen as more and more taking off on their own course. See also Automating the Search for Artificial Life with Foundation Models by Akarsh Kumar, et al at arXiv:2412.17799.

This paper seeks to scope out the development and evaluation of physics-specific large-scale foundation AI models, which we call Large Physics Models (LPMs). LPMs can function independently or incorporate specialized tools, including symbolic reasoning modules, the analysis of specific experimental data, and the synthesis of theories and scientific literature. In regard, we identify three key pillars: Development, Evaluation, and Philosophical Reflection. Finally, Philosophical Reflection views the broader implications of LLMs in physics and what novel collaboration dynamics might arise in research. (Excerpt)

A Learning Planet > The Spiral of Science

Rong, Guoyang, et al. 40 Years of Interdisciplinary Research: Phases, Origins, and Key Turning Points. arXiv:2501.05001. Wuhan University, National University of Singapore and Technische Universität Berlin scholars including Thorsten Koch conduct a review of the past four decades of scientific studies by which to perceive discernible advances with stages and trends. In regard, this retrospective can illuminate how our composite Earthumanity is actually proceeding to explore, test and learn on her/his own sapiensphere accord.

This study examines the historical evolution of interdisciplinary research (IDR) over a 40-year span. We review three distinct phases based on these trends: Period I (1981-2002), marked by sporadic and limited interdisciplinary activity; Period II (2003-2016), an emergence of large-scale IDR with breakthroughs in medical technology; and Period III (2017-present), where IDR became a widely adopted research paradigm. (Excerpt)

A Learning Planet > Mindkind Knowledge > deep

Apidianaki, Marianna, et al. Language Learning, Representation, and Processing in Humans and Machines. Computational Linguistics. 50/4, 2025. University of Pennsylvania, Aix-Marseille University and University of Stuttgart AI scholars introduce a special issue on this topic whereby practitioners compare how human persons and large language models gain their knowledge content and pursue and express its productive usage. Into this year, the tacit theme is now to find ways to align the two modes for mutual benefit. Some typical detailed entries are: Usage-based Grammar Induction from Minimal Cognitive Principles, Can Language Models Handle Recursively Nested Grammatical Structures? and Humans Learn Language from Situated Communicative Interactions.

Large Language Models (LLMs) and human beings acquire knowledge about language without direct supervision. LLMs do so by specific training objectives, while humans rely on sensory experience and social interaction. Yet, the differences in the way that language is processed by machines and humans in terms of learning mechanisms, data used, and different modalities make a direct translation difficult. The aim of this edited volume is to be a forum of exchange and debate along this line of research with contributions that seek similarities and differences between humans and LLMs.

A Learning Planet > Mindkind Knowledge > deep

Eacersall, Douglas, et al. The ETHICAL Framework for Responsible Generative AI Research Use. arXiv:2501.09021. Fifteen cultural scholars mainly in Australia, along with Canada, Malaysia and the Philippines, post a thorough cast of behavioral standards and regulations so as to ensure at this early stage that trustworthy results are achieved.


The rapid adoption of generative artificial intelligence (GenAI) presents many opportunities as well as ethical issues that should be carefully navigated. This paper develops the ETHICAL guide as a practical guide for responsible GenAI use by way of seven key principles: Examine policies and guidelines, Think about social impacts, Harness understanding of the technology, Indicate use, Critically engage with outputs, Access secure versions, and Look at user agreements. (Excerpt)

The ETHICAL Framework presented in this article stands as a foundational resource for researchers navigating the ethical challenges associated with GenAI. While some guidelines exist, this framework progresses beyond awareness to practical action. The ETHICAL Framework explicitly equips researchers with actionable principles, providing clear guidance on ethical GenAI use in research, thereby supporting both integrity and impact. (17)

A Learning Planet > Mindkind Knowledge > deep

Gifford, Alessandro, et al. The Algonauts Project 2025 Challenge. arXiv:2501.00504. Freie Universität Berlin, Goethe Universität Frankfurt, Université de Montréal and MIT neuroscientists including Radoslaw Cichy describe an array of innovative AI adventures as a way to better understand how brains perform and may interface with computational media. An example would be Automating the Search for Artificial Life with Foundation Models at pub.sakana.ai/asal, second quote.


There is growing symbiosis between artificial and biological intelligence sciences: neural principles inspire new intelligent machines, which are in turn used to advance our theoretical understanding of the brain. Here we introduce the 2025 edition: How the Human Brain Makes Sense of Multimodal Movies. In collaboration with the Courtois Project on Neuronal Modelling, our aim is to bring forth a new generation of brain encoding models that generalize well by training them on large datasets of fMRI responses. (Excerpt)

Artificial Life (ALife) has not yet integrated FMs, which presents an opportunity to move beyond manual design and trial-and-error to the discovery of lifelike simulations. The proposed approach, called Automated Search for Artificial Life (ASAL), (1) finds simulations that produce target phenomena, (2) generates temporally open-ended novelty, and (3) illuminates an entire space of interestingly diverse versions. A major result is finding novel Lenia and Boids lifeforms, as well as open-ended cellular automata. (Sakana AI, MIT)

A foundation model is a deep learning model that is trained on vast datasets so it can be applied across a wide range of use cases. Generative AI applications like Large Language Models are examples. (Wikipedia)

A Learning Planet > Mindkind Knowledge > deep

Johnson, Samuel, et al. Imagining and building wise machines: The centrality of AI metacognition. arXiv:2411.02478. Eleven senior computer scientists at the University of Waterloo, University of Montreal, Stanford University, Allen Institute for Artificial Intelligence, Santa Fe Institute, MPI Human Development and MPI Intelligent Systems, including Yoshua Bengio, Nick Chater and Melanie Mitchell, join a current project to get ahead of and rein in this worldwide computational transition. As foundation and large language models, along with agentic behaviors, become understood and availed, it is vital to have a lead segment of informed human management through appropriate prompts, select data resources, proper algorithms and so on. See, for example, Role of the human-in-the-loop in emerging self-driving laboratories for heterogeneous catalysis by Christoph Scheurer and Karsten Reuter in Nature Catalysis (January 2025). As we work through this critical phase, a beneficial balance of people in ethical charge, along with allowing agents to run pattern-finding programs, could be a resolve.

While advances in artificial intelligence (AI) have produced systems capable of sophisticated performance on cognitive tasks, AI systems struggle in critical ways: unpredictable and novel environments (robustness), explaining their reasoning (explainability), communication and commitment (cooperation), and harmful risks (safety). We argue that these issues stem from one basic lapse: AI systems lack wisdom. Drawing from philosophic mores, we define wisdom as the ability to navigate ambiguous, novel, chaotic problems through metacognitive strategies. Prioritizing metacognition in AI research will lead to systems that act not only intelligently but also wisely in complex, real-world situations. (Excerpts)

Our goal is to understand the principles of Perception, Action and Learning in systems that interact with complex environments. The Institute studies these aspects in biological, computational, hybrid, and material systems from nano to macro scales. The Physics for Inference and Optimization Group focuses on relations between the microscopic and macroscopic levels of complex interactive networks by way of algorithms based on statistical physics. (MPI Intelligent Systems)

A Learning Planet > Mindkind Knowledge > deep

Kumar, Akarsh, et al. Automating the Search for Artificial Life with Foundation Models. arXiv:2412.17799. MIT, Sakana AI, OpenAI, and Swiss AI Lab IDSIA computational imagineers describe their frontier excursions as novel approaches to invigorate the ALife endeavor and see how it can respectfully and beneficially open frontier pathways. See also the companion web version of Automating the Search for Artificial Life with Foundation Models at pub.sakana.ai/asal.

With the recent Nobel Prize awarded for radical advances in protein discovery, foundation models (FMs) for exploring large combinatorial spaces promise to revolutionize many scientific fields. This paper presents a successful realization using vision-language FMs, called Automated Search for Artificial Life (ASAL), which finds generalities across a diverse range of ALife substrates including Boids, Particle Life, Game of Life, Lenia, and Neural Cellular Automata. This new paradigm promises to accelerate ALife research beyond what is possible through human ingenuity alone. (Excerpt)

A foundation model is a deep machine learning method trained on vast datasets so it can be applied across a wide range of use cases. Early examples are language models (LMs) like OpenAI's GPT. Foundation models are also being developed for fields like astronomy, radiology, genomics, mathematics, and chemistry.
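
To indicate what a search of this kind looks like in practice, here is a heavily simplified, runnable sketch: a toy cellular automaton stands in for the ALife substrate, and a fixed random-projection embedding stands in for the vision-language foundation model that scores rollouts against a text prompt. All names and the scoring model are placeholders for illustration, not the ASAL implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    proj = np.random.default_rng(42).normal(size=(32, 256))

    def toy_embed(x):
        # Placeholder for a CLIP-style encoder: fixed random projection of byte counts.
        counts = np.bincount(np.frombuffer(str(x).encode(), dtype=np.uint8), minlength=256)
        v = proj @ counts
        return v / (np.linalg.norm(v) + 1e-9)

    def run_simulation(rule, steps=50, width=64):
        # Toy 1-D cellular automaton standing in for an ALife substrate (Lenia, Boids, ...).
        state = (rng.random(width) < 0.5).astype(int)
        for _ in range(steps):
            idx = 4 * np.roll(state, 1) + 2 * state + np.roll(state, -1)
            state = (rule >> idx) & 1
        return state

    def asal_style_search(prompt, candidate_rules):
        # Keep the simulation parameters whose final state best matches the prompt embedding.
        target = toy_embed(prompt)
        return max(candidate_rules, key=lambda r: float(toy_embed(run_simulation(r)) @ target))

    print(asal_style_search("a self-replicating blob", range(256)))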

A Learning Planet > Mindkind Knowledge > deep

Masry, Ahmed, et al. AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding. arXiv:2502.01341. Sixteen AI experts at York University, McGill University, University of Waterloo and University of British Columbia, including Yoshua Bengio, propose and describe innovative computational ways to combine both words and pictures so as to achieve more effective, enlightened results. And we note that this would engage both brain hemispheres in meaningful unison.

Aligning visual features with language embeddings is a key challenge in vision-language models (VLMs). In this work, we propose a novel vision-text alignment method, AlignVLM, that maps visual features to a weighted average of LLM text embeddings. Our approach ensures that visual features are mapped to regions of the space that the LLM can effectively interpret. AlignVLM achieves state-of-the-art performance and improved vision-text feature integration. (Excerpt)
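
A small sketch may help picture the "weighted average of LLM text embeddings" described here. In the toy PyTorch module below, each visual feature is projected to logits over the LLM vocabulary, and the output is a softmax-weighted mix of the (frozen) text embeddings, so mapped vectors always lie in a region the language model already interprets. Dimensions and names are invented for illustration; this is not the authors' released code.

    import torch
    import torch.nn as nn

    class ToyAlignConnector(nn.Module):
        def __init__(self, vision_dim, llm_embeddings):
            super().__init__()
            vocab_size, _ = llm_embeddings.shape
            self.register_buffer("llm_emb", llm_embeddings)     # frozen LLM embedding table
            self.to_vocab = nn.Linear(vision_dim, vocab_size)   # visual feature -> vocab logits

        def forward(self, visual_feats):                        # (batch, patches, vision_dim)
            weights = torch.softmax(self.to_vocab(visual_feats), dim=-1)
            return weights @ self.llm_emb                       # convex mix of text embeddings

    # Toy usage with random stand-ins for a vision encoder output and an LLM vocabulary.
    llm_vocab = torch.randn(1000, 64)                           # 1000 "tokens", 64-dim embeddings
    connector = ToyAlignConnector(vision_dim=32, llm_embeddings=llm_vocab)
    patches = torch.randn(2, 16, 32)                            # 2 images, 16 patches each
    print(connector(patches).shape)                             # torch.Size([2, 16, 64])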

A Learning Planet > Mindkind Knowledge > deep

Pandey, Lalit, et al. Parallel development of object recognition in newborn chicks and deep neural networks. PLoS Computational Biology. December 2024. Indiana University informatics researchers including Justin and Samantha Wood describe a clear correspondence between these titular phases of cognitive performance by way of a novel usage of digital twins and AI learning methods. As a result, a continuity can be traced between these computational and personal occasions. In regard, here is one more instance where parallels can be drawn between AI procedures and young organisms (chicks and children). See also Parallel development of social behavior in biological and artificial fish in Nature Communications (15/1061, 2024) by this group. A further notice would then be how nature consistently uses the same pattern and process over and over everywhere.

How do newborns learn to see? We propose that visual systems are space-time fitters, meaning that visual development can be understood as a blind fitting process (akin to evolution) which gradually adapts to the spatiotemporal environment. To test whether space-time fitting is a viable theory, we performed parallel controlled-rearing experiments on newborn chicks and deep neural networks (DNNs), including CNNs and transformers. When DNNs received the same training data as chicks, the models developed the same object recognition skills as chicks. We argue that space-time fitters can serve as scientific models of newborn visual systems. (Excerpt)

We present evidence for parallel development of object recognition in newborn chicks and deep neural networks. Like chicks, the models learned invariant object features from visual experiences in impoverished environments, permitting recognition of familiar objects across large, novel, and complex changes in the object’s appearance. This digital twin approach extends the reverse-engineering framework pioneered in computational neuroscience to the study of newborn vision, supporting the broader goal of building unified models of the learning machinery in brains. (26)

One of the unsolved mysteries in science concerns the origins of intelligence. By linking psychology to artificial intelligence, we aim to reverse engineer the origins of intelligence and build machines that learn like newborn animals. I am interested in a wide range of questions about the origins and nature of intelligence. I have studied the psychological abilities of diverse human adults, toddlers, infants, chimpanzees, wild monkeys, and newborn chicks. (J. Wood website)

A Learning Planet > Mindkind Knowledge > deep

Scheurer, Christoph and Karsten Reuter. Role of the human-in-the-loop in emerging self-driving laboratories for heterogeneous catalysis. Nature Catalysis. January 29, 2025. We cite this entry by Max Planck Institute researchers as an example of new realizations that AI machinations ought not to be turned loose to run on their own. After some thirty months of ChatGPTs, a constant reciprocity of AI inputs and human management is seen to achieve a best balance in ethical co-generative applications.

Self-driving laboratories (SDLs) represent a convergence of machine learning with laboratory automation which operates in active learning situations, as algorithms plan experiments that are carried out by automated (robotic) modules. Here we argue against taking humans totally out of the loop. We instead conclude that crucial advances will come from fast proxy experiments and existing apparatus, with real persons engaged in continuous decision-making. (Excerpt)
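
As a cartoon of the human-in-the-loop arrangement argued for here, the Python loop below lets a simple acquisition rule propose the next experiment, routes each proposal through a human approval gate, and only then "runs" it on a stand-in for the robotic module. Everything (the acquisition rule, the approval check, the toy experiment) is an illustrative assumption, not the authors' laboratory system.

    import numpy as np

    rng = np.random.default_rng(1)
    candidates = np.linspace(0.0, 1.0, 21)          # e.g. catalyst compositions to screen
    tested_x, tested_y = [], []

    def robot_experiment(x):
        # Stand-in for the automated module actually running the experiment.
        return np.sin(3 * x) + 0.05 * rng.normal()

    def acquisition(x):
        # Simple exploration rule: prefer points far from anything already tested.
        return 1.0 if not tested_x else min(abs(x - t) for t in tested_x)

    def human_approves(x):
        # The human-in-the-loop gate: an expert could veto unsafe or pointless runs.
        # Here it simply auto-rejects the extreme boundary conditions.
        return 0.05 < x < 0.95

    for _ in range(5):                              # five active-learning rounds
        ranked = sorted(candidates, key=acquisition, reverse=True)
        choice = next((x for x in ranked if human_approves(x)), None)
        if choice is None:
            break
        tested_x.append(choice)
        tested_y.append(robot_experiment(choice))

    print([(round(float(x), 2), round(float(y), 2)) for x, y in zip(tested_x, tested_y)])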

A Learning Planet > Mindkind Knowledge > CI

Riedl, Christoph and David De Cremer, eds. The potential and challenges of AI for collective intelligence. Collective Intelligence. February 2025. Twenty-two practitioners in the UK and USA such as Gina Lucarelli, John Cartlidge and Joan Condell describe how their projects, such as PSi: A scalable approach to community-led public decision-making, The AI4CI loop, and Perspectives on the UNDP Accelerator Network, are integrating and taking advantage of these novel occasions.

Tackling large-scale problems like climate change and the Sustainable Development Goals requires taking a collective approach. Artificial Intelligence (AI) offers tremendous potential to enhance collective intelligence, both as an actor that contributes to the solution directly, and as a tool and mentor that helps coordinate human collective intelligence. Collective Intelligence invited experts and practitioners to highlight key challenges and explain how they employ AI to advance novel solutions.

Ecosmos: A Revolutionary Fertile, Habitable, Solar-Bioplanet, Incubator Lifescape

Animate Cosmos > Organic

Glavin, Daniel, et al. Abundant ammonia and nitrogen-rich soluble organic matter in samples from asteroid Bennu. Nature Astronomy. January 29, 2025. In an article that merited news coverage, NASA Goddard astroscientists describe a unique opportunity to study asteroid surface compositions which have not been contaminated by impacting our planet. These carbonaceous materials were retrieved by the Origins, Spectral Interpretation, Resource Identification, and Security–Regolith Explorer (OSIRIS-REx) mission (NASA website). See also An evaporite sequence from ancient brine recorded in Bennu samples by T. J. McCoy et al in Nature (January 29, 2025) for a companion article. Altogether this achievement is being viewed as strong evidence of a complex biochemical, evolutionary course which would well distinguish an innately fertile Ecosmos.

Organic matter in meteorites can reveal clues about early Solar System chemistry and the origin of molecules important to life, but terrestrial exposure complicates interpretation. However, samples returned from the asteroid Bennu by the OSIRIS-REx mission enabled us to study pristine carbonaceous astromaterial and detect amino acids, formaldehyde, carboxylic acids, polycyclic aromatic hydrocarbons and N-heterocycles (including all five nucleobases in DNA and RNA), along with ~10,000 N-bearing chemical species.

Additional analyses of Bennu samples, coupled with laboratory analogue experiments, helped us further understand the origin and evolution of prebiotic organic matter and chemical links between volatile-rich asteroids and primitive icy bodies. Similar asteroids could have been a source of compounds such as ammonia, amino acids, nucleobases, phosphates and other chemical precursors that contributed to the prebiotic inventory that led to the emergence of life on Earth. (8)
