Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Planetary Prodigy: A Global Sapiensphere Learns by Her/His Self

1. Earthificial Intelligence: Deep Neural Network Learning

Mathuriya, Amrita, et al. CosmoFlow: Using Deep Learning to Learn the Universe at Scale. arXiv:1808.04728. As the brain-based AI revolution proceeds, seventeen authors from Intel, LBNL, Cray and UC Berkeley scope out their neural network applications, as is being done everywhere else, across the celestial raiment. Indeed, as this realm becomes similarly amenable, one might get the impression that the whole cosmos is somehow cerebral, or genomic, in nature.

Deep learning is a promising tool to determine the physical model that describes our universe. To handle the considerable computational cost of this problem, we present CosmoFlow: a highly scalable deep learning application built on top of the TensorFlow framework. CosmoFlow uses efficient implementations of 3D convolution and pooling primitives, together with improvements in threading for many element-wise operations, to improve training performance on Intel® Xeon Phi™ processors. We also utilize the Cray PE Machine Learning Plugin for efficient scaling to multiple nodes. To our knowledge, this is the first large-scale science application of the TensorFlow framework at supercomputer scale with fully-synchronous training. (Abstract)

Deep Learning for Cosmology: The nature of dark energy is one of the most exciting and fundamental questions facing scientists today. Dark energy is the unknown force that is driving the accelerated expansion of the universe, and is the subject of several current and future experiments that will survey the sky in multiple wavelengths. We cannot measure dark energy directly; we can only observe the effect it has on the observable universe. The interplay of gravity (pulling matter together) and dark energy (expanding space itself) is encoded in the distribution of matter in the universe today. Cosmologists typically characterize this distribution using statistical measures of the structure of matter, its “clumpiness,” in the form of two- or three-point correlation functions or other reduced statistics. Methods that capture all features in the distribution of matter (such as deep learning networks) could give greater insight into the nature of dark energy. (1)
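To give a sense of the technique, a minimal sketch follows, our own and not the authors' CosmoFlow code, of the kind of 3D convolution-and-pooling network the abstract describes, here regressing a few cosmological parameters from a simulated dark matter density cube. The cube size, layer widths, and three-parameter output are illustrative assumptions.

```python
# Minimal sketch (not the authors' CosmoFlow code): a small 3D ConvNet that
# regresses cosmological parameters from a dark-matter density cube.
# Input shape and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cosmo_net(cube_size=64, n_params=3):
    model = models.Sequential([
        layers.Input(shape=(cube_size, cube_size, cube_size, 1)),
        layers.Conv3D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),                  # 3D pooling primitive
        layers.Conv3D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling3D(2),
        layers.Conv3D(64, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling3D(),
        layers.Dense(128, activation="relu"),
        layers.Dense(n_params),                  # e.g. Omega_m, sigma_8, n_s
    ])
    model.compile(optimizer="adam", loss="mse")  # regression on parameters
    return model

model = build_cosmo_net()
```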

Mehta, Pankaj and David Schwab. An Exact Mapping between the Variational Renormalization Group and Deep Learning. arXiv:1410.3831. We cite this entry because Boston University and Northwestern University physicists show a common affinity between this intelligent neural net method and dynamic physical qualities. So we muse, could it be imagined that cosmic nature is in some actual way cerebral in kind, engaged in a grand educative experience? One is reminded of the British physicist James Jeans’ 1930 remark: “The universe begins to look more like a great thought than like a great machine.” See also Machine Learning meets Quantum State Preparation at arXiv:1705.00565.

Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data. (Abstract)
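For those who wish to see the mapping itself, its core can be stated in a few lines. This is our condensed reading of the paper's construction, with signs and normalizations simplified, so treat it as a sketch rather than the full derivation.

```latex
% Variational RG (Kadanoff): a coupling operator T(v,h) between visible
% spins v and coarse-grained spins h defines a renormalized Hamiltonian:
\begin{equation}
  e^{-H^{\mathrm{RG}}[h]} \;=\; \operatorname{Tr}_v\, e^{\,T(v,h) - H[v]}
\end{equation}
% RBM: the joint energy E(v,h) of a restricted Boltzmann machine induces a
% marginal over its hidden units, and hence a hidden-unit Hamiltonian:
\begin{equation}
  p(h) \;=\; \frac{\operatorname{Tr}_v\, e^{-E(v,h)}}{Z}
       \;\equiv\; \frac{e^{-H^{\mathrm{RBM}}[h]}}{Z}
\end{equation}
% The exact mapping: choosing the coupling operator as
\begin{equation}
  T(v,h) \;=\; -E(v,h) + H[v]
\end{equation}
% gives H^{RG}[h] = H^{RBM}[h], so each stacked RBM layer enacts one
% variational coarse-graining step on the data distribution.
```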

Min, Seonwoo, et al. Deep Learning in Bioinformatics. Briefings in Bioinformatics. 18/5, 2017. Seoul National University biologists present a tutorial survey of this novel union of profuse big data and deep neural net capabilities as they may serve studies of life’s informational essence. See also, e.g., Deep Learning for Computational Biology in Molecular Systems Biology (12/878, 2016).

Here, we review deep learning in bioinformatics. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. (Abstract excerpts)
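As one concrete instance of the omics-plus-convolutional-network pairing the review categorizes, a toy sketch of our own follows: a 1D CNN that scans one-hot encoded DNA for sequence motifs and predicts a binary label. The sequence length, layer sizes, and random stand-in data are all assumptions.

```python
# Toy sketch of the "omics + convolutional network" pairing: a 1D CNN that
# scans one-hot encoded DNA (A,C,G,T -> 4 channels) for motifs and predicts
# a binary label (e.g. protein binding). Sizes are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 200  # fixed-length input sequences

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN, 4)),
    layers.Conv1D(32, 12, activation="relu"),  # 12-bp motif detectors
    layers.GlobalMaxPooling1D(),               # strongest motif match per filter
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Random placeholder data in lieu of a real sequence dataset.
x = np.eye(4)[np.random.randint(0, 4, size=(128, SEQ_LEN))].astype("float32")
y = np.random.randint(0, 2, size=(128, 1))
model.fit(x, y, epochs=1, verbose=0)
```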

Mudhsh, Mohammed and Rolla Almodfer. Arabic Handwritten Alphanumeric Character Recognition Using Very Deep Neural Network. Information. Online August, 2017. We record this entry by Wuhan University of Technology computer scientists to convey a nascent worldwide knowledge which is now able to parse ancient scripts by way of 21st century computational methods. A further notice is that the use of these dynamic cerebral architectures could well imply a global brain coming to its own retrospect and prospect.

The traditional algorithms for recognizing handwritten alphanumeric characters are dependent on hand-designed features. In recent days, deep learning techniques have brought about new breakthrough technology for pattern recognition applications, especially for handwritten recognition. However, deeper networks are needed to deliver state-of-the-art results in this area. In this paper, inspired by the success of the very deep state-of-the-art VGGNet, we propose Alphanumeric VGG net for Arabic handwritten alphanumeric character recognition. Alphanumeric VGG net is constructed by thirteen convolutional layers, two max-pooling layers, and three fully-connected layers. We have achieved very promising results, with a validation accuracy of 99.66% for the ADBase database and 97.32% for the HACDB database. (Abstract)
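The abstract's layer counts invite a sketch. The following Keras-style stack, our own reconstruction rather than the authors' published configuration, matches the stated tally of thirteen convolutional layers, two max-pooling layers, and three fully-connected layers; the filter counts, pooling positions, and input size are guesses.

```python
# Sketch of a VGG-style stack matching the abstract's description: thirteen
# 3x3 convolutional layers, two max-pooling layers, three fully-connected
# layers. Filter counts, pooling positions, and input size are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def alphanumeric_vgg(input_shape=(32, 32, 1), n_classes=10):
    model = models.Sequential([layers.Input(shape=input_shape)])
    # Thirteen conv layers in two blocks, each block closed by max-pooling.
    for filters, n_convs in [(64, 6), (128, 7)]:
        for _ in range(n_convs):
            model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.MaxPooling2D(2))
    model.add(layers.Flatten())
    # Three fully-connected layers.
    model.add(layers.Dense(1024, activation="relu"))
    model.add(layers.Dense(1024, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = alphanumeric_vgg(n_classes=10)  # e.g. the ten ADBase digit classes
```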

Palazzi, Maria, et al. Online Division of Labour: Emergent Structures in Open Source Software. arXiv:1903.03375. Internet Interdisciplinary Institute, Open University of Catalonia computer theorists report that even group-wide developments of computational codes can be seen to take on and follow the same common course as all other organic assemblies. A further implication, we add, would be another perception that cosmic evolutionary nature draws upon and repeats this same complex adaptive systems generative program at each and every instance. See also the cited reference Multi-scale Structure and Geographic Drivers of Cross-infection within Marine Bacteria and Phages in The ISME Journal (7/520, 2013), which describes a similar pattern for microbes.

The development of Open Source Software fundamentally depends on the participation and commitment of volunteer developers to progress. Several works have presented strategies to increase the on-boarding and engagement of new contributors, but little is known about how these diverse groups of developers self-organise to work together. To understand this, one must consider that, on one hand, platforms like GitHub provide a virtually unlimited development framework: any number of actors can potentially join to contribute in a decentralised, distributed, remote, and asynchronous manner. On the other, however, it seems reasonable that some sort of hierarchy and division of labour must be in place to meet human biological and cognitive limits, and also to achieve some level of efficiency.

These latter features (hierarchy and division of labour) should translate into recognisable structural arrangements when projects are represented as developer-file bipartite networks. In this paper we analyse a set of popular open source projects from GitHub, placing the accent on three key properties: nestedness, modularity and in-block nestedness, which typify the emergence of heterogeneities among contributors, the emergence of subgroups of developers working on specific subgroups of files, and a mixture of the two previous, respectively. These analyses show that indeed projects evolve into internally organised blocks. (Abstract excerpts)

To answer these questions, we will look at three structural arrangements which have been identified as signatures of self-organisation in both natural and artificial systems: nestedness (i.e. do projects evolve in a way such that the emergence of generalists and specialists is favoured?); modularity (i.e. do OSS projects split in identifiable compartments, thus avoiding Brooks’ law despite the addition of contributors? Are these compartments bounded?); and in-block nestedness (i.e. if bio-cognitive limits and division of labour are in place, do the resulting specialised modules self-organise internally?) (2)
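To illustrate the kind of measurement involved, here is a small Python sketch, with an invented toy edge list, that builds a developer-file bipartite graph and computes a modularity score plus a simplified NODF-style nestedness. The measures are standard, but the implementation is ours, not the authors' pipeline.

```python
# Toy sketch (our own, not the authors' pipeline): structural measures on a
# developer-file bipartite network. Modularity uses networkx community
# detection; nestedness is a simplified NODF-style score on the
# biadjacency matrix. The edge list is invented.
import numpy as np
import networkx as nx
from networkx.algorithms import community

# Developers d* contribute to files f*.
edges = [("d1", "f1"), ("d1", "f2"), ("d1", "f3"),
         ("d2", "f1"), ("d2", "f2"),
         ("d3", "f3"), ("d3", "f4"),
         ("d4", "f4")]
G = nx.Graph(edges)

# Modularity: do contributors split into blocks working on file subsets?
blocks = community.greedy_modularity_communities(G)
print("modularity:", community.modularity(G, blocks))

def nodf_rows(M):
    """Mean paired overlap over row pairs (one half of a NODF-style score)."""
    deg, scores = M.sum(axis=1), []
    for i in range(len(M)):
        for j in range(i + 1, len(M)):
            hi, lo = (i, j) if deg[i] > deg[j] else (j, i)
            if deg[hi] > deg[lo] and deg[lo] > 0:  # decreasing fill required
                scores.append((M[hi] * M[lo]).sum() / deg[lo])
    return float(np.mean(scores)) if scores else 0.0

devs = sorted(n for n in G if n.startswith("d"))
files = sorted(n for n in G if n.startswith("f"))
M = np.array([[int(G.has_edge(d, f)) for f in files] for d in devs])
print("nestedness:", (nodf_rows(M) + nodf_rows(M.T)) / 2)
```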

Rupp, Matthias. Machine Learning for Quantum Mechanics in a Nutshell. International Journal of Quantum Chemistry. 115/16, 2015. In a special issue on quantum mechanics and machine learning, a Fritz Haber Institute, Berlin, bioinformatician provides an introductory overview.
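The tutorial's central tool is kernel ridge regression; a minimal numpy sketch follows, with random placeholder descriptors standing in for real molecular representations such as Coulomb matrices.

```python
# Minimal numpy sketch of kernel ridge regression: learn a property (e.g.
# atomization energy) as a weighted sum of Gaussian kernels over molecular
# descriptors. Data here are random placeholders; choosing the descriptor
# (e.g. Coulomb matrices) is the real work.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))          # 100 molecules, 30-dim descriptors
y = rng.normal(size=100)                # training property values
sigma, lam = 5.0, 1e-6                  # kernel width, regularization

def gaussian_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

K = gaussian_kernel(X, X, sigma)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # (K + lam*I) a = y

X_new = rng.normal(size=(5, 30))
y_pred = gaussian_kernel(X_new, X, sigma) @ alpha     # predictions
```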

Schmidhuber, Jurgen. Deep Learning in Neural Networks: An Overview. Neural Networks. 61/2, 2015. A technical tutorial by the University of Lugano, Switzerland expert on advances in artificial or machine learning techniques, based on how our own brains think. Sophisticated algorithms, multiple processing layers with complex structures, assignment paths, non-linear transformations, and so on are at work as they refer new experiences to prior representations for comparison. See also, for example, Semantics, Representations and Grammars for Deep Learning by David Balduzzi at arXiv:1509.08627. Our interest recalls recent proposals by Richard Watson, Eors Szathmary, et al to appreciate life’s evolution as quite akin to a neural net, connectionist learning process.

Schuman, Catherine, et al. A Survey of Neuromorphic Computing and Neural Networks in Hardware. arXiv:1705.06963. Oak Ridge National Laboratory and University of Tennessee researchers provide a copious review of the progress from years of machine computation to this novel attempt to artificially avail the way our own iterative brains so adeptly recognize shapes and patterns. The moniker neuromorphic alludes to that brain-like facility, as quick as our ability to see and say cat or car.

Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. (Abstract)
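As a taste of what such hardware models, here is a minimal simulation of a leaky integrate-and-fire neuron, the basic spiking unit most neuromorphic platforms implement in some form; all constants are illustrative.

```python
# Sketch of the simplest spiking unit neuromorphic hardware typically
# models: a leaky integrate-and-fire neuron. Constants are illustrative.
import numpy as np

dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0  # ms, ms, arb. units
v, spikes = 0.0, []
current = np.random.default_rng(1).uniform(0.0, 0.12, size=200)  # input drive

for t, i_in in enumerate(current):
    v += dt / tau * (-v) + i_in   # leak toward rest, integrate input
    if v >= v_thresh:             # threshold crossing emits a spike
        spikes.append(t)
        v = v_reset               # membrane resets after firing
print(f"{len(spikes)} spikes at steps {spikes}")
```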

Sejnowski, Terrence. The Deep Learning Revolution. Cambridge: MIT Press, 2018. The renowned neuroscientist author has been at the innovative center of the AI advance from computational machines to brain-based and behavioral neural networks since the 1980s. He recounts his national and worldwide experience with many collaborators in this volume, which makes it the best general introduction to the field. A great gift for any student, as the author has also been involved with learning-how-to-learn methods for schools. The book is filled with vignettes of Francis Crick, Geoffrey Hinton, Stephen Wolfram, Barbara Oakley, John Hopfield, Sydney Brenner, Christof Koch and others across the years. An example of his interests and reach is as a speaker at the 2016 Grand Challenges in 21st Century Science (Google) conference in Singapore.

Terrence J. Sejnowski holds the Francis Crick Chair at the Salk Institute for Biological Studies and is a Distinguished Professor at the University of California, San Diego. He was a member of the advisory committee for the Obama administration's BRAIN Initiative and is founding President of the Neural Information Processing Systems (NIPS) Foundation. He has published twelve books, including (with Patricia Churchland) The Computational Brain (25th Anniversary Edition, MIT Press).

Shafiee, Mohammad, et al. Evolution in Groups: A Deeper Look at Synaptic Cluster Driven Evolution of Deep Neural Networks. arXiv:1704.02081. Shafiee and Elnaz Barshan, Iranian-Canadian systems engineers at the University of Waterloo, with Alexander Wong, DarwinAI (Waterloo), advance this frontier of a universe to human interpretation via multiplex computational cognitive dynamics. Their novel insight is a biological evolutionary setting by way of a deeper genetic and cerebral intelligence. Nature’s emergent profusion of cerebral nets is then alluded to as a generative offspring. See also the authors’ prior award-winning papers Evolutionary Synthesis of Deep Neural Networks via Synaptic Cluster-driven Genetic Encoding at arXiv:1609.01360 and Deep Learning with Darwin at arXiv:1606.04393.

A promising paradigm for achieving highly efficient deep neural networks is the idea of evolutionary deep intelligence, which mimics biological evolution processes to progressively synthesize more efficient networks. A crucial design factor in evolutionary deep intelligence is the genetic encoding scheme used to simulate heredity and determine the architectures of offspring networks. In this study, we take a deeper look at the notion of synaptic cluster-driven evolution of deep neural networks which guides the evolution process towards the formation of a highly sparse set of synaptic clusters in offspring networks. Utilizing a synaptic cluster-driven genetic encoding, the probabilistic encoding of synaptic traits considers not only individual synaptic properties but also inter-synaptic relationships within a deep neural network. (Abstract)
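A hedged toy rendering of the two-level idea, as we read it: offspring networks are synthesized by sampling synapses with a probability that couples a cluster-level term with a per-synapse term, so whole clusters tend to survive or vanish together. Every number and the exact update below are our own illustrative choices, not the authors' formulation.

```python
# Hedged sketch of a two-level, cluster-driven encoding: offspring synapses
# are sampled so that a cluster survives first, then its member synapses.
# All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
weights = rng.normal(size=(8, 8))            # parent layer weights
clusters = rng.integers(0, 4, size=(8, 8))   # each synapse's cluster id (0..3)

# Stronger synapses / clusters are more likely to be inherited.
syn_p = np.abs(weights) / np.abs(weights).max()
cluster_strength = np.array(
    [np.abs(weights[clusters == c]).mean() if (clusters == c).any() else 0.0
     for c in range(4)])
cluster_p = cluster_strength / cluster_strength.max()

# Sample the offspring: cluster survival gates per-synapse survival.
cluster_alive = rng.random(4) < cluster_p
mask = cluster_alive[clusters] & (rng.random((8, 8)) < syn_p)
offspring = np.where(mask, weights, 0.0)     # sparse offspring layer
print("surviving synapses:", mask.sum(), "of 64")
```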

Shallue, Christopher and Andrew Vanderburg. Identifying Exoplanets with Deep Learning. arXiv:1712.05044. With A Five Planet Resonant Chain Around Kepler-80 and an Eighth Planet Around Kepler-90 as its subtitle, a Google Brain software engineer and a UT Austin astronomer report a novel application of artificial machine intelligence to successfully analyze huge data inputs from this planet-finding satellite. The achievement received wide press notice, along with a December 14 media briefing: NASA and Google to Announce AI Breakthrough.

NASA's Kepler Space Telescope was designed to determine the frequency of Earth-sized planets orbiting Sun-like stars, but these planets are on the very edge of the mission's detection sensitivity. Accurately determining the occurrence rate of these planets will require automatically and accurately assessing the likelihood that individual candidates are indeed planets, even at low signal-to-noise ratios. We present a method for classifying potential planet signals using deep learning, a class of machine learning algorithms that have recently become state-of-the-art in a wide variety of tasks. We train a deep convolutional neural network to predict whether a given signal is a transiting exoplanet or a false positive caused by astrophysical or instrumental phenomena.

We apply our model to a new set of candidate signals that we identified in a search of known Kepler multi-planet systems. We statistically validate two new planets that are identified with high confidence by our model. One of these planets is part of a five-planet resonant chain around Kepler-80, with an orbital period closely matching the prediction by three-body Laplace relations. The other planet orbits Kepler-90, a star which was previously known to host seven transiting planets. Our discovery of an eighth planet brings Kepler-90 into a tie with our Sun as the star known to host the most planets. (Abstract)
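In outline, the classifier is a 1D convolutional network over phase-folded, binned light curves. The sketch below is a much-reduced stand-in, since the real model uses two input views and more layers, trained here on random placeholder data; all shapes are assumptions.

```python
# Hedged sketch of the classification setup: a 1D convolutional network
# that labels a phase-folded, binned Kepler light curve as "planet" or
# "false positive". Shapes and placeholder data are illustrative.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_BINS = 201  # assumed number of bins in the phase-folded light curve

model = models.Sequential([
    layers.Input(shape=(N_BINS, 1)),
    layers.Conv1D(16, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(32, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # P(signal is a transiting planet)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Placeholder light curves standing in for real Kepler data.
x = np.random.normal(size=(64, N_BINS, 1)).astype("float32")
y = np.random.randint(0, 2, size=(64, 1))
model.fit(x, y, epochs=1, verbose=0)
```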

Sheneman, Leigh and Arend Hintze. Evolving Autonomous Learning in Cognitive Networks. Nature Scientific Reports. 7/16712, 2017. Michigan State University computer scientists post an example of the ongoing revision of artificial intelligence, broadly conceived, from decades of dead mechanisms into vital accord with evolutionary cerebral architectures and activities. See also The Role of Conditional Independence in the Evolution of Intelligent Systems from this group, including Larissa Albantakis, at arXiv:1801.05462.

There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been previously combined, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal, but instead can generate internal feedback that is then used to learn. This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning. (Abstract)
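To make the feedback-gate idea tangible, a toy version of our own devising follows: a probabilistic two-input logic gate whose probability table is nudged by a reward signal generated inside the loop, standing in for the Markov Brain's internally generated feedback. The update rule and all constants are illustrative.

```python
# Toy feedback gate: a stochastic two-input logic gate whose probability
# table adapts from a feedback signal rather than an external objective.
# Here the loop's own correctness check on XOR plays the feedback role.
import numpy as np

rng = np.random.default_rng(3)
table = np.full((4, 2), 0.5)   # P(output | input pair): 4 states x 2 outputs

def gate(a, b):
    """Sample the gate's stochastic output for input pair (a, b)."""
    state = a * 2 + b
    return state, int(rng.random() < table[state, 1])

def feedback_update(state, out, reward, lr=0.1):
    """Nudge the probability table toward outputs the feedback rewards."""
    if reward:
        table[state, out] += lr * (1.0 - table[state, out])
        table[state] /= table[state].sum()   # keep a valid distribution

for _ in range(5000):
    a, b = rng.integers(0, 2, size=2)
    state, out = gate(a, b)
    feedback_update(state, out, reward=(out == (a ^ b)))

print(np.round(table, 2))  # rows for 00,01,10,11 come to favor XOR outputs
```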
