II. Planetary Prodigy: A Global Sapiensphere Learns by Her/His Own Self
1. Earthificial Intelligence: Deep Neural Network Learning
A new section, added in Spring 2017, to survey and report on this “artificial intelligence” or AI turn from longtime machine computation schemes to the realization that biological brains, as they fluidly form, think, perceive, and gain viable knowledge, provide a much better model. This revolution parallels and is informed by new neuroscience understandings of cerebral node/link modular, community, rich-hub, multiplex dynamics of neuron/synapse topologies and emergent cognizance. A prime quality is a self-organized, critically poised, self-corrective education, especially reproducing our human acumen for recognizing patterns. “Deep” means several interactive network layers or phases, as opposed to the single prior level of “shallow” AI.
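The “deep means several layers” distinction can be sketched in a few lines. The following is a minimal, hypothetical illustration (not from any of the works cited here): a forward pass through a stack of weight layers, where a “shallow” model would keep only one layer and a “deep” one stacks several with nonlinearities between them.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a stack of (weight, bias) layers."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)          # hidden layers apply a nonlinearity
    w, b = layers[-1]
    return x @ w + b                 # final layer is left linear

# "Deep": several stacked layers; a "shallow" model would use sizes = [8, 4].
sizes = [8, 16, 16, 4]               # input -> two hidden layers -> output
deep = [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
        for m, n in zip(sizes[:-1], sizes[1:])]

y = forward(rng.standard_normal((3, 8)), deep)
print(y.shape)                       # 3 samples in, 4 outputs each
```

The layer sizes and initialization scale here are arbitrary choices for illustration only.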
Can We Build a Brain? www.pbs.org/video/nova-wonders-can-we-build-a-brain-j53aqg. This April 2018 NOVA Wonders program provides a brilliant introductory survey of these active frontiers of Artificial Intelligence and Deep Neural Network Learning. An extraordinary array of contributors, such as Fei-Fei Li, Christof Koch, Rodney Brooks, and DeepMind experts, to cite a few, and especially Rana el Kaliouby, reveal a grand project with immense promise for peoples and planet if it can be respectfully guided and carried forth.
Information-Theoretic Approaches in Deep Learning. www.mdpi.com/journal/entropy/special_issues/deep_learning. This page announces a special issue planned for the popular online MDPI journal Entropy, open for manuscripts until December 2018. It is conceived and edited by Deniz Gencaga, an Antalya Bilim University, Turkey, professor of electrical engineering.
Deep Learning (DL) has revolutionized machine learning, especially in the last decade. As a benefit of this unprecedented development, we are capable of working with very large Neural Networks (NNs), composed of multiple layers (Deep Neural Networks, DNNs), in many applications, such as object recognition-detection, speech recognition and natural language processing. Although many Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based algorithms have been proposed, a comprehensive theoretical understanding of DNNs remains a major research area. Recently, we have seen an increase in the number of approaches that are based on information-theoretic concepts, such as Mutual Information. In this Special Issue, we would like to collect papers focusing on both the theory and applications of information-theoretic approaches for Deep Learning. The application areas are diverse and some of them include object tracking/detection, speech recognition, natural language processing, neuroscience, bioinformatics, engineering, finance, astronomy, and Earth and space sciences.
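The mutual information quantity the announcement invokes has a simple discrete form, I(X;Y) = H(X) + H(Y) - H(X,Y). As a hedged, self-contained sketch (not code from the special issue itself), it can be estimated from paired samples by counting:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Empirical Shannon entropy, in bits, of a list of discrete samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for paired discrete samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Y is a deterministic function of X, so I(X;Y) equals H(Y) = 1 bit here.
xs = [0, 1, 2, 3] * 100
ys = [x % 2 for x in xs]
print(round(mutual_information(xs, ys), 6))   # 1.0
```

Estimating mutual information for the continuous, high-dimensional activations of real deep networks is far harder and is precisely what much of this literature addresses; the toy data above only fixes the definition.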
Power and Limits of Artificial Intelligence. www.pas.va/content/accademia/en/publications/scriptavaria/artificial_intelligence. A site for the Proceedings of a Pontifical Academy of Sciences workshop held in late 2016 on this advance and concern. A premier array of neuroscientists and computer scientists spoke, such as Stanislas Dehaene, Wolf Singer, Yann LeCun, Patricia Churchland, Demis Hassabis, and Elizabeth Spelke, whose presentations are available on this site in both video and text. Search also Dehaene 2017 for a major paper in Science (358/486) as a follow-up on his talk and this event.
Aggarwal, Charu. Neural Networks and Deep Learning. International: Springer, 2018. The IBM Watson Center senior research member provides a copious, up-to-date textbook on this active revolution. Ten chapters proceed from the AI machine advance to brain- and behavior-based methods, on to features such as training, regularization, linear/logistic regression, and matrix factorization, along with neural Turing machines, Kohonen self-organizing maps, and recurrent and convolutional nets.
Aragon-Calvo, Miguel. Classifying the Large Scale Structure of the Universe with Deep Neural Networks. arXiv:1804.00816. We cite this posting by a National Autonomous University of Mexico astronomer as an example of how such novel brain-based methods are being applied to quantify even these celestial reaches. By this work and many similar entries, might our Earthwise sapiensphere be perceived as collectively beginning to quantify the whole multiverse? Could it also allude to a sense of an affine nature as a cerebral, connectome cosmos? See also, e.g., An Algorithm for the Rotation Count of Pulsars at 1802.0721.
Baldi, Pierre. Deep Learning in Biomedical Data Science. Annual Review of Biomedical Data Science. Vol. 1, 2018. A UC Irvine, School of Information and Computer Sciences, Institute for Genomics and Bioinformatics, researcher introduces ways that artificial neural network advances can serve pattern finding and diagnostic needs across many realms of big biological and medical data analysis and synthesis.
Since the 1980s, deep learning and biomedical data have been coevolving and feeding each other. The breadth, complexity, and rapidly expanding size of biomedical data have stimulated the development of novel deep learning methods, and applications of these methods to biomedical data have led to scientific discoveries and practical solutions. This overview provides technical and historical pointers to the field, and surveys current applications of deep learning to biomedical data organized around five subareas, roughly of increasing spatial scale: chemoinformatics, proteomics, genomics and transcriptomics, biomedical imaging, and health care. (Abstract)
Beny, Cedric. Deep Learning and the Renormalization Group. arXiv:1301.3124. We cite this instance by a physicist at Leibniz University, Hanover, in 2013, presently at Hanyang University, Seoul, to record how quantum information, the title theory, complex neural networks, computational methods, and more are converging in a fertile process of cross-correlation.
Renormalization group (RG) methods, which model the way in which the effective behavior of a system depends on the scale at which it is observed, are key to modern condensed-matter theory and particle physics. We compare the ideas behind the RG on the one hand and deep machine learning on the other, where depth and scale play a similar role. In order to illustrate this connection, we review a recent numerical method based on the RG - the multiscale entanglement renormalization ansatz (MERA) - and show how it can be converted into a learning algorithm based on a generative hierarchical Bayesian network model. Under the assumption - common in physics - that the distribution to be learned is fully characterized by local correlations, this algorithm involves only explicit evaluation of probabilities, hence doing away with sampling. (Abstract)
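The depth-and-scale analogy at the heart of this paper can be made concrete with a toy coarse-graining. The following is an illustrative sketch only (not the MERA algorithm): a block-average step, as in a block-spin RG transformation, plays the same role as a pooling layer, and repeating it walks up through scales the way depth walks up through network layers.

```python
import numpy as np

def coarse_grain(signal):
    """One RG-like step: average neighboring pairs (cf. an average-pooling layer)."""
    return signal.reshape(-1, 2).mean(axis=1)

# Each step halves the resolution, like ascending one layer of a deep net.
x = np.arange(16, dtype=float)
scales = [x]
while len(scales[-1]) > 1:
    scales.append(coarse_grain(scales[-1]))

for s in scales:
    print(len(s), s.mean())   # length halves each step; the mean is preserved
```

The preserved mean is the simplest example of an "effective" quantity surviving coarse-graining; MERA additionally inserts disentanglers between such steps, which this sketch omits entirely.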
Biamonte, Jacob, et al. Quantum Machine Learning. arXiv:1611.09347. A six member team with postings in Malta, Canada, Spain, Sweden, Germany, and the USA, including Seth Lloyd, advance a novel synthesis of recurrent neural net machine processes with quantum phenomena seen to possess algorithmic, complex dynamic system, information processing affinities. (Quantum Complex Systems)
Carleo, Giuseppe and Mathias Troyer. Solving the Quantum Many-Body Problem with Artificial Neural Networks. Science. 355/602, 2017. As the Abstract notes, ETH Zurich physicists find this generic iterative approach, as it gains utility in many areas, to be an apt method for dealing with and solving such seemingly intractable phenomena. The work merited a report in the same issue as Machine Learning for Quantum Physics (355/580).
The challenge posed by the many-body problem in quantum physics originates from the difficulty of describing the nontrivial correlations encoded in the exponential complexity of the many-body wave function. Here we demonstrate that systematic machine learning of the wave function can reduce this complexity to a tractable computational form for some notable cases of physical interest. We introduce a variational representation of quantum states based on artificial neural networks with a variable number of hidden neurons. A reinforcement-learning scheme we demonstrate is capable of both finding the ground state and describing the unitary time evolution of complex interacting quantum systems. (Abstract)
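The "variational representation of quantum states based on artificial neural networks" in this abstract is a restricted-Boltzmann-machine-style ansatz. As a hedged sketch of the general form (with arbitrary toy parameters, not the paper's trained values), the unnormalized amplitude of a spin configuration s can be written psi(s) = exp(a·s) · prod_j 2 cosh(b_j + (Wᵀs)_j):

```python
import numpy as np

rng = np.random.default_rng(1)

def rbm_amplitude(spins, a, b, W):
    """Unnormalized psi(s) = exp(a.s) * prod_j 2*cosh(b_j + (W^T s)_j)."""
    theta = b + W.T @ spins           # effective field felt by each hidden unit
    return np.exp(a @ spins) * np.prod(2.0 * np.cosh(theta))

n_visible, n_hidden = 4, 8            # 4 spins, 8 hidden neurons
a = rng.standard_normal(n_visible) * 0.1
b = rng.standard_normal(n_hidden) * 0.1
W = rng.standard_normal((n_visible, n_hidden)) * 0.1

s = np.array([1, -1, 1, -1])          # one spin configuration
amp = rbm_amplitude(s, a, b, W)
print(amp > 0)                        # real and positive for real parameters
```

In the paper the parameters are complex and are tuned by reinforcement learning so that these amplitudes approximate the ground-state wave function; the sketch above only shows how one amplitude is evaluated.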
Carleo, Giuseppe, et al. Machine Learning and the Physical Sciences. arXiv:1903.10563. An eight member international team with postings such as Flatiron Institute Center for Computational Quantum Physics (GC), MPI Quantum Optics (Ignacio Cirac) and Maria Schuld (University of KwaZulu-Natal) consider applications of novel deep neural net methods, broadly conceived, across statistical, particle, cosmic, many-body quantum matter, and onto chemical phases. See also NetKet: A Machine Learning Toolkit for Many-Body Quantum Systems at 1904.00031, and Neural Networks take on Open Quantum Systems in Physical Review Letters (122/25, 2019) by this extended group. As the project flourishes, by ready cross-transfers, one gets an inkling of a naturally cerebral ecosmos, just now trying to achieve via reinforcement learnings its own self-description, literacy, realization, and affirmative action going forward.
Machine learning encompasses a broad range of algorithms and modeling tools used for a vast array of data processing tasks, which has entered most scientific disciplines in recent years. We review in a selective way the recent research on the interface between machine learning and physical sciences. This includes conceptual developments in machine learning (ML) motivated by physical insights, applications of machine learning techniques to several domains in physics, and cross-fertilization between the two fields. After giving basic notion of machine learning methods and principles, we describe examples of how statistical physics is used to understand methods in ML. We then move to describe applications of ML methods in particle physics and cosmology, quantum many body physics, quantum computing, and chemical and material physics. We also highlight research and development into novel computing architectures aimed at accelerating ML. In each of the sections we describe recent successes as well as domain-specific methodology and challenges. (Abstract)
Ching, Travers, et al. Opportunities and Obstacles for Deep Learning in Biology and Medicine. Journal of the Royal Society Interface. Vol. 15/Iss. 141, 2018. Some 40 researchers from institutes, laboratories and hospitals in the USA, Canada and the UK well survey current applications, potentials, and problems for this cerebral-based AI revolution. Or as Siddhartha Mukherjee, MD wrote in the New Yorker last year: “the algorithm will see you now.”
Chung, Audrey, et al. The Mating Rituals of Deep Neural Networks: Learning Compact Feature Representations through Sexual Evolutionary Synthesis. arXiv:1709.02043. University of Waterloo, Canada computer scientists in coauthor Alexander Wong’s medical image laboratory continue their unique efforts to join and root these analytical methods in life’s innate developmental phenomena. Prior entries are Evolution in Groups (search Shafiee) and Deep Learning with Darwin: Evolutionary Synthesis of Deep Neural Networks at 1606.04393.
Evolutionary deep intelligence was recently proposed as a method for achieving highly efficient deep neural network architectures over successive generations. Drawing inspiration from nature, we propose the incorporation of sexual evolutionary synthesis. Rather than the current asexual synthesis of networks, we aim to produce more compact feature representations by synthesizing more diverse and generalizable offspring networks in subsequent generations via the combination of two parent networks. (1709.02043)
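The "combination of two parent networks" the abstract describes can be illustrated with a simple weight-level crossover. This is a hypothetical sketch of the general idea, not the authors' actual synthesis procedure: each offspring weight is inherited from one of the two parents.

```python
import numpy as np

rng = np.random.default_rng(2)

def mate(parent1, parent2, p=0.5):
    """Offspring weights: each entry drawn from one parent or the other."""
    child = []
    for w1, w2 in zip(parent1, parent2):
        mask = rng.random(w1.shape) < p   # which parent supplies each weight
        child.append(np.where(mask, w1, w2))
    return child

# Two parent networks with identical layer shapes (two weight layers each).
shape_list = [(8, 16), (16, 4)]
parent1 = [rng.standard_normal(s) for s in shape_list]
parent2 = [rng.standard_normal(s) for s in shape_list]
child = mate(parent1, parent2)

# Every offspring entry comes from one parent or the other.
for c, w1, w2 in zip(child, parent1, parent2):
    assert np.all((c == w1) | (c == w2))
```

The paper's method additionally biases synthesis toward compact, generalizable offspring architectures over generations; the sketch above shows only the two-parent recombination step in its simplest form.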