
II. Pedia Sapiens: A Planetary Progeny Comes to Her/His Own Actual Factual Knowledge

1. Earthificial Intelligence: A Deep Neural Network Planetary Computational Science Begins
This new section, posted in 2017, surveys and reports an "artificial intelligence" or AI turn from machine computation schemes to a new understanding that biological brains, as they fluidly form, think, perceive, and gain knowledge, can provide a much better model. This revolution parallels and is informed by neuroscience findings of cerebral node/link modularity, network communities, rich hubs, multiplex dynamics of neuron/synapse topologies, and emergent cognizance. Prime qualities are their self-organized, critically poised, self-corrective, iterative education, and especially their achievement of pattern recognition, which we people do so well. "Deep" means that several interactive network layers or phases are in effect, rather than the single level of "shallow" AI.

Can We Build a Brain? www.pbs.org/video/novawonderscanwebuildabrainj53aqg. This April 2018 NOVA Wonders program provides a brilliant introductory survey of these active frontiers of Artificial Intelligence and Deep Neural Network Learning. An extraordinary array of contributors such as Fei-Fei Li, Christof Koch, Rodney Brooks, and DeepMind experts, and especially Rana El Kaliouby, reveal a grand project with immense promise for peoples and planet, if it can be respectfully guided and carried forth.

Information-Theoretic Approaches in Deep Learning. www.mdpi.com/journal/entropy/special_issues/deep_learning. This page announces a special issue of the popular online MDPI journal Entropy, open for manuscripts until December 2018, conceived and edited by Deniz Gencaga, an Antalya Bilim University, Turkey, professor of electrical engineering. Deep Learning (DL) has revolutionized machine learning, especially in the last decade.
As a benefit of this unprecedented development, we are capable of working with very large Neural Networks (NNs), composed of multiple layers (Deep Neural Networks), in many applications, such as object recognition/detection, speech recognition, and natural language processing. Although many Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based algorithms have been proposed, a comprehensive theoretical understanding of DNNs remains a major research area. Recently, we have seen an increase in the number of approaches based on information-theoretic concepts, such as Mutual Information. In this Special Issue, we would like to collect papers focusing on both the theory and applications of information-theoretic approaches for Deep Learning. The application areas are diverse and include object tracking/detection, speech recognition, natural language processing, neuroscience, bioinformatics, engineering, finance, astronomy, and Earth and space sciences.

Is AI Extending the Mind? www.crosslabs.org/workshop2022. A virtual workshop held on April 11-15, 2022, with video presentations such as On AI & Ecosystems by Alan Dorin, On Enactive AI by Tom Froese & Dobromir Dotov, and On Autonomous Agents and Semantic Information by Artemy Kolchinsky.

Power and Limits of Artificial Intelligence. www.pas.va/content/accademia/en/publications/scriptavaria/artificial_intelligence. A site for the Proceedings of a Pontifical Academy of Sciences workshop held in late 2016 on this advance and concern. A premier array of neuroscientists and computer scientists such as Stanislas Dehaene, Wolf Singer, Yann LeCun, Patricia Churchland, Demis Hassabis, and Elizabeth Spelke spoke; their presentations are available on this site in both video and text. Search also Dehaene 2017 for a major paper in Science (358/486) as a follow-up on his talk and this event.

Aggarwal, Charu. Neural Networks and Deep Learning. International: Springer, 2018.
The IBM Watson Center senior research member provides a copious, up-to-date textbook for this active revolution. Ten chapters move from the AI machine advance to brain- and behavior-based methods, on to features such as training, regularization, linear/logistic regression, and matrix factorization, along with neural Turing machines, Kohonen self-organizing maps, and recurrent and convolutional nets.

Alexander, Victoria, et al. Living Systems are Smarter Bots: Slime Mold Semiosis versus AI Symbol Manipulation. Biosystems. August, 2021. Within generous biosemiotic literacies that perceive living nature as graced with an incarnate intelligence and narrative scriptome, ITMO University, Russia, Dactyl Foundation, NYC, and Autonomous University of Barcelona scholars describe how simple microbial realms (in body, not mind) can express cognitive abilities far beyond lumpen machines.

Alser, Mohammed, et al. Going from Molecules to Genomic Variations to Scientific Discovery. arXiv:2205.07957. We cite this entry by an eight-person ETH Zurich team to record a dedicated project to access the latest deep learning techniques so as to achieve a realm of intelligent algorithms and architectures (hardware) for next-generation sequencing needs. A great need now exists to intelligently read, analyze, and interpret our genomes not only more quickly, but accurately and efficiently enough to scale to population levels. Here we describe much improved genome studies by way of novel AI algorithms and architectures. Algorithms can access genomic structures as well as the underlying hardware. We move on to future challenges, benefits, and research directions opened by new sequencing technologies and specialized hardware chips. (Excerpt)

Anshu, Anurag, et al. Sample-efficient Learning of Interacting Quantum Systems. Nature Physics. 17/8, 2021.
We cite this entry by UC Berkeley, IBM Watson Research, RIKEN Center, Tokyo, and MIT researchers as an example of how AI studies are becoming amenable even to this deepest, foundational realm. Once again a grand ecosmic endeavor seems to be its own internal self-description, so that maybe whichever sapiensphere is able to do this can begin a new intentional creation from here. Learning the Hamiltonian that describes interactions is an important task in both condensed-matter physics and the verification of quantum technologies. Previously, the best methods for quantum Hamiltonian learning with able performance required measurements that scaled exponentially with the number of particles. Here we prove that only a polynomial number of local measurements on the thermal state of a quantum system are necessary for accurately learning its Hamiltonian. The framework we introduce provides a theoretical foundation for applying machine learning techniques to achieve a long-sought goal in quantum statistical learning. (Abstract excerpt)

Aragon-Calvo, Miguel. Classifying the Large Scale Structure of the Universe with Deep Neural Networks. arXiv:1804.00816. We cite this posting by a National Autonomous University of Mexico astronomer as an example of how such novel brain-based methods are being applied even to quantify these celestial reaches. By this work and many similar entries, might our Earthwise sapiensphere be perceived as collectively beginning to quantify the whole multiverse? Could it also allude to a sense of an affine nature as a cerebral, connectome cosmos? See also, e.g., An Algorithm for the Rotation Count of Pulsars at 1802.0721.

Bahri, Yasaman, et al. Statistical Mechanics of Deep Learning. Annual Review of Condensed Matter Physics. 11/501, 2020. Google Brain and Stanford University researchers scope out ways to root neural-like networks, as they come to pervade and apply everywhere, in increasingly conducive physical phenomena.
We add that an implication might be a nascent sense of a cerebral cosmos trying to achieve its self-witness and representation via our globally capacious intellect. The recent success of deep neural networks in machine learning raises deep questions about underlying theoretical principles. We review methods of analysis rooted in statistical mechanics which have begun to yield conceptual connections between deep learning and diverse physical and mathematical topics, including random landscapes, spin glasses, jamming, dynamical phase transitions, chaos, Riemannian geometry, random matrix theory, free probability, and nonequilibrium phases. (Abstract excerpt)

Baldi, Pierre. Deep Learning in Biomedical Data Science. Annual Review of Biomedical Data Science. Vol. 1, 2018. A UC Irvine, School of Information and Computer Sciences, Institute for Genomics and Bioinformatics researcher introduces ways that artificial neural network advances can serve pattern-finding and diagnostic needs across many realms of big biological and medical data analysis and synthesis. Since the 1980s, deep learning and biomedical data have been coevolving and feeding each other. The breadth, complexity, and rapidly expanding size of biomedical data have stimulated the development of novel deep learning methods, and application of these methods to biomedical data has led to scientific discoveries and practical solutions. This overview provides technical and historical pointers to the field, and surveys current applications of deep learning to biomedical data organized around five subareas, roughly of increasing spatial scale: chemoinformatics, proteomics, genomics and transcriptomics, biomedical imaging, and health care. (Abstract)

Bausch, Johannes and Felix Leditzky. Quantum Codes from Neural Networks. New Journal of Physics. 22/023005, 2020.
We cite this paper by Cambridge University and University of Colorado computational physicists as a good instance of how readily cerebral architectures can be effectively applied across far-removed domains. These common transfers open another window upon a universal, iconic bipartite (node/link) and triune (whole brain, genome, etc.) nature. We examine the usefulness of applying neural networks as a variational state ansatz (approach) for many-body quantum systems in quantum information-processing tasks. In the neural network state, the complex amplitude function of a quantum state is computed. The resulting multipartite entanglement structure can describe the unitary dynamics of physical systems of interest. Here we show that neural networks can efficiently represent quantum codes for information transmission. Our main points are: a) Neural networks yield quantum codes with high coherent information for two important quantum channels, b) For the depolarizing channel, they find the best repetition codes, and c) Neural networks can represent a special type of quantum error-correcting codes. (Abstract excerpt)
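To make concrete the section's opening note that "deep" means several interactive network layers rather than one "shallow" level, here is a minimal sketch of an input passing through successive layers. The layer sizes and random weights are arbitrary illustrative assumptions, not drawn from any work cited above:

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity applied between layers.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Pass input x through each (weights, bias) layer in turn;
    # several such stacked layers are what make a network "deep".
    for w, b in layers:
        x = relu(w @ x + b)
    return x

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 8, 2]  # three hidden layers of width 8
layers = [(0.5 * rng.standard_normal((m, n)), np.zeros(m))
          for n, m in zip(sizes[:-1], sizes[1:])]
out = forward(rng.standard_normal(4), layers)
print(out.shape)  # (2,)
```

A "shallow" network would keep only one entry in `layers`; training (omitted here) would adjust the weights by gradient descent.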
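As a small illustration of the Mutual Information concept central to the Entropy special issue described earlier in this section, a naive plug-in estimate for discrete samples can be sketched as follows (an illustrative sketch only, not any contributor's actual method):

```python
import numpy as np

def mutual_information(x, y):
    # Plug-in estimate of I(X;Y) in nats from paired discrete samples,
    # summing p(x,y) * log(p(x,y) / (p(x) p(y))) over observed values.
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

# Identical variables share log(2) nats (one fair bit); independent ones share none.
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # ≈ 0.693
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0
```

Information-theoretic analyses of deep networks typically estimate such quantities between layer activations and inputs or labels, where continuous values make the estimation far harder than this discrete case.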
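As a classical touchstone for the repetition codes that the Bausch and Leditzky abstract says their neural networks rediscover, here is a minimal sketch of a bit-flip repetition code with majority-vote decoding. This is only a classical analogy, not the paper's quantum construction; the noise rate and trial count are arbitrary assumptions:

```python
import random

def encode(bit, n=3):
    # Repetition code: copy the logical bit n times.
    return [bit] * n

def flip_noise(codeword, p, rng):
    # Flip each bit independently with probability p (bit-flip channel).
    return [b ^ (rng.random() < p) for b in codeword]

def decode(codeword):
    # Majority vote recovers the logical bit if fewer than half flipped.
    return int(sum(codeword) > len(codeword) / 2)

rng = random.Random(0)
trials = 10000
errors = sum(decode(flip_noise(encode(1), 0.1, rng)) != 1
             for _ in range(trials))
print(errors / trials)  # near 3p^2(1-p) + p^3 ≈ 0.028, well below the raw 0.1
```

The quantum case replaces bit copies with entangled qubits and majority vote with syndrome measurement, but the protection principle, redundancy against independent errors, is the same.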

