Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Intelligence: A Deep Neural Network Local/Global Computational Phase

This new section, posted in 2017, surveys and reports the “artificial intelligence” or AI turn from a machine computation scheme to a new understanding that biological brains, as they fluidly form, think, perceive, and gain knowledge, can provide a much better model. This revolution parallels and is informed by neuroscience findings of cerebral node/link modularity, net communities, rich hubs, multiplex dynamics of neuron/synapse topologies, and emergent cognizance. A prime quality is their self-organized, critically poised, self-corrective, iterative education, and especially their achievement of pattern recognition, which we people do so well. “Deep” means that several interactive network layers or phases are in effect, rather than the single level of “shallow” AI.
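As a minimal illustration of this layered sense of “deep,” the toy sketch below (an assumption-laden example, not any cited system; all layer sizes and weights are arbitrary) passes the same input through one hidden layer versus several stacked layers.

```python
import numpy as np

# Illustrative only: a "shallow" network has one hidden layer, while a
# "deep" network stacks several, each transforming the previous output.
rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weights and zero biases for one fully connected layer."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

def forward(x, layers):
    """Pass input x through each layer with a tanh nonlinearity."""
    for W, b in layers:
        x = np.tanh(x @ W + b)
    return x

shallow = [layer(4, 8), layer(8, 2)]                            # one hidden layer
deep = [layer(4, 8), layer(8, 8), layer(8, 8), layer(8, 2)]     # three hidden layers

x = rng.standard_normal(4)
print(forward(x, shallow).shape)  # (2,)
print(forward(x, deep).shape)     # (2,)
```

The point is only structural: depth multiplies the interactive phases between input and output, which is what distinguishes deep from shallow architectures.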

Another consequence is an increasing use of “artificial” neural net (ANN) techniques to handle vast data inputs from worldwide astronomic, quantum, chemical, genetic and any other research realms. They also aid studies of life’s organismic physiologies and evolutionary course, along with social media, behavioral, traffic, populace, and economic activities. Bayesian methods of sequentially optimizing probabilities toward good enough answers are often used in concert. These citations survey this growing collaborative advance; see, e.g., Quantum Codes from Neural Networks (Bausch). They also bode well for another window on the discovery of a natural universality (section IV. B) as brains, genomes, quantum phenomena, creatures, societies, literary corpora, and all else become treatable by the one, same exemplary “-omics” code.
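As a hedged sketch of such sequential Bayesian optimizing (all hypotheses and data below are hypothetical numbers, not from any cited study), the toy below updates the probability of two coin hypotheses after each observation, converging on a good enough answer.

```python
from fractions import Fraction

# Two hypotheses about a coin: P(heads) = 4/5 ("biased") or 1/2 ("fair").
likelihood = {"biased": Fraction(4, 5), "fair": Fraction(1, 2)}
posterior = {"biased": Fraction(1, 2), "fair": Fraction(1, 2)}  # flat prior

observations = [1, 1, 0, 1, 1, 1]  # 1 = heads, 0 = tails

for obs in observations:
    # Bayes' rule: multiply each hypothesis by its likelihood for this datum.
    for h in posterior:
        p = likelihood[h] if obs == 1 else 1 - likelihood[h]
        posterior[h] *= p
    # Renormalize so the posterior remains a probability distribution.
    total = sum(posterior.values())
    for h in posterior:
        posterior[h] /= total

print(float(posterior["biased"]))  # well above 0.5 after mostly-heads data
```

Each new observation sequentially sharpens the probabilities, which is the sense in which such methods settle on good enough answers rather than exact ones.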

And in accord with our website premise that an emergent cumulative transition is underway to a sapient personsphere, we take license to dub this movement as an Earthificial Intelligence. If scientists and scholars are presently applying neural architectures and capabilities to advance their local and global projects, these endeavors could appear as the evidential guise of a bicameral noosphere. Rather than an invasive technical artifice and/or singularity which might take off by itself, this prodigious progeny could be appreciated as learning and educating on her/his own.

2020: As the intro describes, since 2015 a total revision from ineffective machine methods to a biological, human brain-based approach has occurred. The advance was facilitated by concurrent neuroscience findings about the multiplex capacities of our cerebral faculties. It is then notable that “artificial” neural nets have found much analytic utility from quantum to galactic studies, which is another indication of nature’s common recurrence. As this AI frontier expands, it is under scrutiny over misuses and abuses, along with scientific, medical and social benefits. One might even broach an “Earthificial” intelligence going forward within a nascent planetary cognizance.

Bahri, Yasaman, et al. Statistical Mechanics of Deep Learning. Annual Review of Condensed Matter Physics. 11/501, 2020.

Bausch, Johannes and Felix Leditsky. Quantum Codes from Neural Networks. New Journal of Physics. 22/023005, 2020.

Botvinick, Matthew. Realizing the Promise of AI: A New Challenge for Cognitive Science. Trends in Cognitive Sciences. 26/12, 2022.

Chantada, Augusto, et al. Cosmological Informed Neural Networks to Solve the Background Dynamics of the Universe. arXiv:2205.02945.

Hayasaki, Erika. Women vs. the Machine. Foreign Policy. Jan/Feb, 2017.

Krenn, Mario, et al. On Scientific Understanding with Artificial Intelligence. arXiv:2204.01467.

Manyika, James, ed. AI & Society. Daedalus. Spring 2022.

Mitchell, Melanie. What Does It Mean to Align AI with Human Values? Quanta. December 13, 2022.

Ohler, Simon, et al. Towards Learning Self-Organized Criticality of Rydberg Atoms using Graph Neural Networks. arXiv:2207.08927.

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. New York: Penguin, 2020.

Sejnowski, Terrence. The Deep Learning Revolution. Cambridge: MIT Press, 2018.

Sprague, Kyle, et al. Watch and Learn – A Generalized Approach for Transferrable Learning in Deep Neural Networks via Physical Principles. Machine Learning: Science and Technology. 2/2, 2021.

Tzachor, Asaf, et al. Artificial Intelligence in a Crisis Needs Ethics with Urgency. Nature Machine Intelligence. 2/365, 2020.

Weng, Kangyu, et al. Statistical Physics of Deep Neural Networks. arXiv:2212.01744.

Wood, Charlie. How to Make the Universe Think for Us. Quanta. June 1, 2022.

2023:

Can We Build a Brain? www.pbs.org/video/nova-wonders-can-we-build-a-brain-j53aqg. This April 2018 NOVA Wonders program provides a brilliant introductory survey of these active frontiers of Artificial Intelligence and Deep Neural Network Learning. An extraordinary array of contributors such as Fei-Fei Li, Christof Koch, Rodney Brooks, DeepMind experts, to cite a few, and especially Rana El Kaliouby, reveal a grand project with immense promise for peoples and planet if it can be respectfully guided and carried forth.

Information-Theoretic Approaches in Deep Learning. www.mdpi.com/journal/entropy/special_issues/deep_learning. This page is an announcement about a special issue planned for the popular online MDPI Entropy site, which is open for manuscripts until December 2018. It is conceived and edited by Deniz Gencaga, an Antalya Bilim University, Turkey, professor of electrical engineering.

Deep Learning (DL) has revolutionized machine learning especially in the last decade. As a benefit of this unprecedented development, we are capable of working with very large Neural Networks (NNs), composed of multiple layers (Deep Neural Networks), in many applications, such as object recognition-detection, speech recognition and natural language processing. Although many Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based algorithms have been proposed, a comprehensive theoretical understanding of DNNs remains a major research area. Recently, we have seen an increase in the number of approaches that are based on information-theoretic concepts, such as Mutual Information. In this Special Issue, we would like to collect papers focusing on both the theory and applications of information-theoretic approaches for Deep Learning. The application areas are diverse and some of them include object tracking/detection, speech recognition, natural language processing, neuroscience, bioinformatics, engineering, finance, astronomy, and Earth and space sciences.
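The Mutual Information quantity named in this announcement can be illustrated with a small, self-contained computation; the joint distributions below are arbitrary examples chosen for the sketch, not drawn from any cited work.

```python
import numpy as np

def mutual_information(joint):
    """I(X;Y) in bits, computed directly from a joint distribution p(x, y)."""
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = joint > 0                        # skip zero-probability cells
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

# Perfectly correlated binary variables share exactly one bit.
correlated = np.array([[0.5, 0.0],
                       [0.0, 0.5]])
# Independent variables share none.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

print(mutual_information(correlated))   # 1.0
print(mutual_information(independent))  # 0.0
```

Information-theoretic analyses of deep networks track how such shared information between layer activations and inputs or labels changes during training.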

Is AI Extending the Mind?. www.crosslabs.org/workshop-2022. A virtual workshop held on April 11 – 15, 2022 with video presentations such as On AI & Ecosystems by Alan Dorin, On Enactive AI by Tom Froese & Dobromir Dotov, and On Autonomous Agents and Semantic Information by Artemy Kolchinsky.

Power and Limits of Artificial Intelligence. www.pas.va/content/accademia/en/publications/scriptavaria/artificial_intelligence. A site for the Proceedings of a Pontifical Academy of Sciences workshop held in late 2016 on this advance and concern. A premier array of neuroscience and computer scientists such as Stanislas Dehaene, Wolf Singer, Yann LeCun, Patricia Churchland, Demis Hassabis, and Elizabeth Spelke spoke, whose presentations both in video and text are available on this site. Search also Dehaene 2017 for a major paper in Science (358/486) as a follow up on his talk and this event.

Aggarwal, Charu. Neural Networks and Deep Learning. International: Springer, 2018. The IBM Watson Center senior research member provides a copious, up-to-date textbook for this active revolution. Ten chapters go from the AI machine advance to brain and behavior based methods, onto features such as training, regularization, linear/logistic regression, matrix factorization, along with neural Turing machines, Kohonen self-organizing maps, recurrent and convolutional nets.

Alexander, Victoria, et al. Living Systems are Smarter Bots: Slime Mold Semiosis versus AI Symbol Manipulation. Biosystems. August, 2021. Within generous biosemiotic literacies that perceive living nature as graced with an incarnate intelligence and narrative scriptome, ITMO University, Russia, Dactyl Foundation, NYC and Autonomous University of Barcelona scholars describe how simple microbial realms (in body, not mind) can express cognitive abilities far beyond lumpen machines.

Alser, Mohammed, et al. Going from Molecules to Genomic Variations to Scientific Discovery. arXiv:2205.07957. We cite this entry by an eight-person ETH Zurich team to record a dedicated project to access the latest deep learning techniques so as to achieve a realm of Intelligent Algorithms and Architectures (hardware) for next generation sequencing needs.

A great need now exists to intelligently read, analyze, and interpret our genomes not only more quickly, but also accurately and efficiently enough to scale to population levels. Here we describe much improved genome studies by way of novel AI algorithms and architectures. Algorithms can access genomic structures as well as the underlying hardware. We move onto future challenges, benefits, and research directions opened by new sequencing technologies and specialized hardware chips. (Excerpt)

Anshu, Anurag, et al. Sample-efficient Learning of Interacting Quantum Systems. Nature Physics. 17/8, 2021. We cite this entry by UC Berkeley, IBM Watson Research, RIKEN Center, Tokyo, and MIT researchers as an example of how AI studies are becoming amenable even to this deepest, foundational realm. Once again a grand ecosmic endeavor seems to be its own internal self-description, so that perhaps whichever sapiensphere is able to do this can begin a new intentional creation from here.

Learning the Hamiltonian that describes interactions in both condensed-matter physics and the verification of quantum technologies is an important task. Previously, the best methods for quantum Hamiltonian learning with able performance required measurements that scaled exponentially with the number of particles. Here we prove that only a polynomial number of local measurements on the thermal state of a quantum system are necessary for accurately learning its Hamiltonian. The framework we introduce provides a theoretical foundation for applying machine learning techniques to achieve a long-sought goal in quantum statistical learning. (Abstract excerpt)

The Hamiltonian function, also called the Hamiltonian, is a mathematical definition introduced in 1835 by Sir William Rowan Hamilton to express the rate of change in the condition of a dynamic physical system, such as a set of moving particles.
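As an illustrative aside on this definition, the toy below (a unit-mass harmonic oscillator; every choice is arbitrary and for demonstration only) evolves a system by Hamilton's equations and checks that the Hamiltonian, the total energy, stays nearly constant.

```python
# For H(q, p) = p**2/2 + q**2/2, Hamilton's equations read
#   dq/dt =  dH/dp = p
#   dp/dt = -dH/dq = -q
# A symplectic Euler step (update p first, then q with the new p)
# approximately conserves H over long runs.

def hamiltonian(q, p):
    return p * p / 2 + q * q / 2

def step(q, p, dt=0.01):
    p = p - dt * q   # dp/dt = -dH/dq
    q = q + dt * p   # dq/dt =  dH/dp, using the updated p (symplectic)
    return q, p

q, p = 1.0, 0.0
e0 = hamiltonian(q, p)
for _ in range(10_000):
    q, p = step(q, p)
drift = abs(hamiltonian(q, p) - e0)
print(drift)  # small: symplectic steps nearly conserve the Hamiltonian
```

Learning a Hamiltonian, as in the paper above, means inferring this energy function itself from measurements of the system it governs.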

Aragon-Calvo, Miguel. Classifying the Large Scale Structure of the Universe with Deep Neural Networks. arXiv:1804.00816. We cite this posting by a National Autonomous University of Mexico astronomer as an example of how such novel brain-based methods are being applied to even quantify these celestial reaches. By this work and many similar entries, might our Earthwise sapiensphere be perceived as collectively beginning to quantify the whole multiverse? Could it also allude to a sense of an affine nature as a cerebral, connectome cosmos? See also, e.g., An Algorithm for the Rotation Count of Pulsars at 1802.0721.

Bahri, Yasaman, et al. Statistical Mechanics of Deep Learning. Annual Review of Condensed Matter Physics. 11/501, 2020. Google Brain and Stanford University researchers scope out ways to root neural-like networks, as they come to pervade and apply everywhere, in increasingly conducive physical phenomena. We add that an implication might be a nascent sense of a cerebral cosmos trying to achieve its self-witness and re-presentation via our globally capacious intellect.

The recent success of deep neural networks in machine learning raises deep questions about underlying theoretical principles. We review methods of physical analysis rooted in statistical mechanics which have begun to yield conceptual connections between deep learning and diverse physical and mathematical topics, including random landscapes, spin glasses, jamming, dynamical phase transitions, chaos, Riemannian geometry, random matrix theory, free probability, and nonequilibrium phases. (Abstract excerpt)

Baldi, Pierre. Deep Learning in Biomedical Data Science. Annual Review of Biomedical Data Science. Vol. 1, 2018. A UC Irvine, School of Information and Computer Sciences, Institute for Genomics and Bioinformatics, researcher introduces ways that artificial neural network advances can serve pattern finding and diagnostic needs across many realms of big biological and medical data analysis and synthesis.

Since the 1980s, deep learning and biomedical data have been coevolving and feeding each other. The breadth, complexity, and rapidly expanding size of biomedical data have stimulated the development of novel deep learning methods, and application of these methods to biomedical data have led to scientific discoveries and practical solutions. This overview provides technical and historical pointers to the field, and surveys current applications of deep learning to biomedical data organized around five subareas, roughly of increasing spatial scale: chemoinformatics, proteomics, genomics and transcriptomics, biomedical imaging, and health care. (Abstract)

Bausch, Johannes and Felix Leditsky. Quantum Codes from Neural Networks. New Journal of Physics. 22/023005, 2020. We cite this paper by Cambridge University and University of Colorado computational physicists as a good instance of how readily cerebral architectures can be effectively applied across far-removed domains. These common transfers open another window upon a universal, iconic bipartite (node/link) and triune (whole brain, genome, etc.) nature.

We examine the usefulness of applying neural networks as a variational state ansatz (approach) for many-body quantum systems for quantum information-processing tasks. In the neural network state, the complex amplitude function of a quantum state is computed. The resulting multipartite entanglement structure can describe the unitary dynamics of physical systems of interest. Here we show that neural networks can efficiently represent quantum codes for information transmission. Our main points are: a) Neural networks yield quantum codes with high coherent information for two important quantum channels, b) For the depolarizing channel, they find the best repetition codes and, c) Neural networks can represent a special type of quantum error-correcting codes. (Abstract excerpt)
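The repetition codes named in point b) can be glimpsed in miniature through a classical analogue; the sketch below (a hedged toy, not the paper's quantum construction; the flip probability is an arbitrary choice) copies each bit three times and decodes by majority vote, suppressing the error rate of a bit-flip channel.

```python
import random

random.seed(1)

def encode(bit):
    """Three-bit repetition code: send three copies of the logical bit."""
    return [bit, bit, bit]

def channel(bits, flip_prob=0.1):
    """Flip each transmitted bit independently with probability flip_prob."""
    return [b ^ (random.random() < flip_prob) for b in bits]

def decode(bits):
    """Majority vote recovers the logical bit unless two or more copies flipped."""
    return int(sum(bits) >= 2)

trials = 10_000
raw_errors = sum(channel([0])[0] for _ in range(trials))
coded_errors = sum(decode(channel(encode(0))) for _ in range(trials))

print(raw_errors / trials)    # near the raw flip rate of 0.1
print(coded_errors / trials)  # suppressed toward 3p^2 - 2p^3, about 0.028
```

Quantum repetition codes protect against analogous flip errors on qubits, which is the family of codes the paper's neural networks rediscover for the depolarizing channel.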
