II. Planetary Prodigy: A Sapiensphere Comes to Her/His Own Knowledge
1. AI: A Survey of Deep Neural Network Learning
Lin, Henry and Max Tegmark. Why Does Deep and Cheap Learning Work So Well? arXiv:1608.08225. The Harvard and MIT polymaths review the recent successes of these multiscale neural network algorithms (definitions vary) from a statistical physics context, drawing on renormalization groups and symmetric topologies. (Intelligent Evolution)
Liu, Weibo, et al. A Survey of Deep Neural Network Architectures and their Applications. Neurocomputing. 234/11, 2017. As the Abstract cites, Brunel University, London, Xiamen University, Yangzhou University, and King Abdulaziz University, Jeddah, computer engineers provide a wide-ranging tutorial on these increasingly useful cognitive methods.
Since the proposal of a fast learning algorithm for deep belief networks in 2006, the deep learning techniques have drawn ever-increasing research interests because of their inherent capability of overcoming the drawback of traditional algorithms dependent on hand-designed features. Deep learning approaches have also been found to be suitable for big data analysis with successful applications to computer vision, pattern recognition, speech recognition, natural language processing, and recommendation systems. In this paper, we discuss some widely-used deep learning architectures and their practical applications. An up-to-date overview is provided on four deep learning architectures, namely, autoencoder, convolutional neural network, deep belief network, and restricted Boltzmann machine. Different types of deep neural networks are surveyed and recent progresses are summarized. (Abstract)
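Of the four architectures the survey covers, the autoencoder is the simplest to sketch: a network trained to compress its input through a narrow hidden layer and reconstruct it. The following minimal linear autoencoder in plain Python (an illustrative toy, not any implementation from the survey) learns by gradient descent to reconstruct 2D points that lie on a line through a single hidden unit.

```python
import random

def train_autoencoder(data, hidden=1, epochs=2000, lr=0.05, seed=0):
    """Train a tiny linear autoencoder: x -> h = W1 x -> x_hat = W2 h.
    Weights are learned by per-sample gradient descent on squared error."""
    rng = random.Random(seed)
    d = len(data[0])
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(d)] for _ in range(hidden)]
    W2 = [[rng.uniform(-0.5, 0.5) for _ in range(hidden)] for _ in range(d)]
    for _ in range(epochs):
        for x in data:
            # forward pass: encode then decode
            h = [sum(W1[j][k] * x[k] for k in range(d)) for j in range(hidden)]
            xh = [sum(W2[i][j] * h[j] for j in range(hidden)) for i in range(d)]
            err = [xh[i] - x[i] for i in range(d)]
            # backprop: gradient reaching each hidden unit (uses pre-update W2)
            dh = [sum(err[i] * W2[i][j] for i in range(d)) for j in range(hidden)]
            for i in range(d):
                for j in range(hidden):
                    W2[i][j] -= lr * err[i] * h[j]
            for j in range(hidden):
                for k in range(d):
                    W1[j][k] -= lr * dh[j] * x[k]
    return W1, W2

def reconstruct(x, W1, W2):
    """Encode x to the hidden layer and decode it back."""
    h = [sum(W1[j][k] * x[k] for k in range(len(x))) for j in range(len(W1))]
    return [sum(W2[i][j] * h[j] for j in range(len(h))) for i in range(len(W2))]
```

Trained on points of the form (t, 2t), the single hidden unit learns the one dimension that actually varies, which is the compression-through-a-bottleneck idea the survey describes.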
Lucie-Smith, Luisa, et al. Machine Learning Cosmological Structure Formation. arXiv:1802.04271. We cite this entry by University College London astrophysicists, including Hiranya Peiris, as an example of the wide range of fields to which these new cerebral-based artificial intelligence methods can be applied. On reflection, who is this person/sapiensphere prodigy that so proceeds as the universe’s way of achieving its own self-quantified description?
Maheswaranathan, Niru, et al. Universality and Individuality in Neural Dynamics across Large Populations of Recurrent Networks. arXiv:1907.08549. By virtue of the latest sophistications, Google Brain and Stanford University AI researchers are able to discern and report “representational similarities” between “biological and artificial networks.” These qualities are then seen in effect across an array of personal and communal affinities.
Manzalini, Antonio. Complex Deep Learning with Quantum Optics. Quantum Reports. 1/1, 2019. In this new MDPI online journal, a senior manager in the Innovation Dept. of Telecom Italia Mobile (TIM), bio below, advances the frontiers of this current assimilation of a lively quantum cosmos with human neural net cognizance. See also, e.g., a cited reference, Quantum Fields as Deep Learning, by Jae-Weon Lee at arXiv:1708.07408. While a prior physics mindset worried over an opaque strangeness, in these later 2010s, via instant global collaborations, a profound new understanding and treatment becomes possible.
The rapid push towards telecommunications infrastructures such as 5G capacity and the Internet drives a strong interest for artificial intelligence (AI) methods, systems, and networks. Processing big data to infer patterns at high speeds with low power consumption is a central technological challenge. Today, an emerging research field rooted in quantum optics along with deep neural networks (DNNs) and nanophotonics are cross-informing each other. This paper elaborates on these topics and proposes a theoretical architecture for a Complex DNN made from programmable metasurfaces. An example is provided which shows a correspondence between the equivariance of convolutional neural networks and the invariance principle of gauge transformations. (Abstract)
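The correspondence the abstract draws between convolutional networks and gauge transformations rests on a concrete property: convolution is equivariant under translations, meaning shifting the input and then convolving gives the same result as convolving and then shifting. A minimal check of this in plain Python, using a circular 1D convolution (purely illustrative, not from the paper):

```python
def conv1d_circular(x, kernel):
    """Circular 1D convolution: out[i] = sum_j kernel[j] * x[(i + j) % n]."""
    n, m = len(x), len(kernel)
    return [sum(kernel[j] * x[(i + j) % n] for j in range(m)) for i in range(n)]

def roll(x, s):
    """Cyclic shift of a sequence by s positions."""
    n = len(x)
    return [x[(i - s) % n] for i in range(n)]

# Equivariance: convolving a shifted signal equals shifting the convolved signal.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
k = [0.5, 0.25, 0.25]
lhs = conv1d_circular(roll(x, 2), k)
rhs = roll(conv1d_circular(x, k), 2)
```

The same structural statement, generalized from translations to other symmetry groups, is what links CNN weight sharing to the gauge-invariance principle the paper discusses.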
Marchetti, Tommaso, et al. An Artificial Neural Network to Discover Hypervelocity Stars. arXiv:1704.07990. An eight-member European astrophysicist team finds this cerebral procedure to be a fruitful way to distill the myriad data findings of the Gaia space telescope mission. Once again, we note how such a collaboration may appear as a worldwide sapiensphere proceeding to learn on her/his own.
The paucity of hypervelocity stars (HVSs) known to date has severely hampered their potential to investigate the stellar population of the Galactic Centre and the Galactic Potential. The first Gaia data release gives an opportunity to increase the current sample. The challenge is of course the disparity between the expected number of hypervelocity stars and that of bound background stars (around 1 in 10^6). We have applied a novel data mining algorithm based on machine learning techniques, an artificial neural network, to the Tycho-Gaia astrometric solution (TGAS) catalogue. (Abstract excerpt)
Mathuriya, Amrita, et al. CosmoFlow: Using Deep Learning to Learn the Universe at Scale. arXiv:1808.04728. As the brain-based AI revolution proceeds, seventeen authors from Intel, LBNL, Cray, and UC Berkeley scope out their neural network applications, as is being done everywhere else, across the celestial raiment. Indeed, as this realm becomes similarly amenable, one might get the impression that the whole cosmos is somehow cerebral, or genomic, in nature.
Deep learning is a promising tool to determine the physical model that describes our universe. To handle the considerable computational cost of this problem, we present CosmoFlow: a highly scalable deep learning application built on top of the TensorFlow framework. CosmoFlow uses efficient implementations of 3D convolution and pooling primitives, together with improvements in threading for many element-wise operations, to improve training performance on Intel(C) Xeon Phi(TM) processors. We also utilize the Cray PE Machine Learning Plugin for efficient scaling to multiple nodes. To our knowledge, this is the first large-scale science application of the TensorFlow framework at supercomputer scale with fully-synchronous training. (Abstract)
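The 3D convolution primitive the abstract names can be sketched naively in plain Python: a small cubic kernel slides through a density volume, producing a weighted sum at each valid position. This is an illustrative toy only; CosmoFlow itself relies on optimized TensorFlow kernels for Xeon Phi processors.

```python
def conv3d(volume, kernel):
    """Naive 'valid' 3D convolution of a cubic volume with a cubic kernel.
    volume: n x n x n nested lists; kernel: m x m x m nested lists (m <= n)."""
    n, m = len(volume), len(kernel)
    out = n - m + 1
    result = [[[0.0] * out for _ in range(out)] for _ in range(out)]
    for x in range(out):
        for y in range(out):
            for z in range(out):
                s = 0.0
                # accumulate the kernel-weighted sum over one m^3 window
                for i in range(m):
                    for j in range(m):
                        for k in range(m):
                            s += kernel[i][j][k] * volume[x + i][y + j][z + k]
                result[x][y][z] = s
    return result
```

For a 3x3x3 volume of ones and a 2x2x2 kernel of ones, each of the 2x2x2 output cells sums an eight-element window, which is the shape arithmetic (n - m + 1 per axis) that any 3D convolutional layer must track.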
Mehta, Pankaj and David Schwab. An Exact Mapping between the Variational Renormalization Group and Deep Learning. arXiv:1410.3831. We cite this entry because Boston University and Northwestern University physicists show a common affinity between this intelligent neural net method and dynamic physical qualities. So we muse, could it be imagined that cosmic nature may be in some actual way cerebral in kind, engaged in a grand educative experience? One is reminded of the British physicist James Jeans’ 1930 quote: “The universe begins to look more like a great thought than like a great machine.” See also Machine Learning Meets Quantum State Preparation at arXiv:1705.00565.
Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data. (Abstract)
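The iterative coarse-graining the abstract describes has a classic concrete form: block-spin renormalization of an Ising configuration, where each block of spins is replaced by its majority sign. Structurally this is the same move as a pooling layer in a deep network, which is the heart of the claimed mapping. A minimal illustrative sketch (not the paper's variational RG construction):

```python
def block_spin(spins, b=2):
    """One block-spin renormalization step on a 2D Ising configuration:
    each b x b block of +/-1 spins is replaced by its majority sign
    (ties resolved to +1), halving the lattice resolution per step."""
    n = len(spins)
    out = []
    for bi in range(0, n, b):
        row = []
        for bj in range(0, n, b):
            total = sum(spins[bi + i][bj + j] for i in range(b) for j in range(b))
            row.append(1 if total >= 0 else -1)
        out.append(row)
    return out
```

Each application discards fine-grained detail while keeping the large-scale order, just as successive pooling layers keep progressively coarser features of their input.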
Min, Seonwoo, et al. Deep Learning in Bioinformatics. Briefings in Bioinformatics. 18/5, 2017. Seoul National University biologists present a tutorial survey of this novel union of profuse big data and deep neural net capabilities as they may serve studies of life’s informational essence. See also, e.g., Deep Learning for Computational Biology in Molecular Systems Biology (12/878, 2016).
Here, we review deep learning in bioinformatics. To provide a useful and comprehensive perspective, we categorize research both by the bioinformatics domain (i.e. omics, biomedical imaging, biomedical signal processing) and deep learning architecture (i.e. deep neural networks, convolutional neural networks, recurrent neural networks, emergent architectures) and present brief descriptions of each study. Additionally, we discuss theoretical and practical issues and suggest future research directions. We believe that this review will provide valuable insights and serve as a starting point for researchers to apply deep learning approaches in their bioinformatics studies. (Abstract excerpts)
Mudhsh, Mohammed and Rolla Almodfer. Arabic Handwritten Alphanumeric Character Recognition Using Very Deep Neural Network. Information. Online August, 2017. We record this entry by Wuhan University of Technology computer scientists to convey a nascent worldwide knowledge which is lately able to parse ancient scripts by way of 21st century computational methods. A further notice is that the use of these dynamic cerebral architectures could well imply a global brain coming to its own retrospect and prospect.
The traditional algorithms for recognizing handwritten alphanumeric characters are dependent on hand-designed features. In recent days, deep learning techniques have brought about new breakthrough technology for pattern recognition applications, especially for handwritten recognition. However, deeper networks are needed to deliver state-of-the-art results in this area. In this paper, inspired by the success of the very deep state-of-the-art VGGNet, we propose Alphanumeric VGG net for Arabic handwritten alphanumeric character recognition. Alphanumeric VGG net is constructed by thirteen convolutional layers, two max-pooling layers, and three fully-connected layers. We have achieved very promising results, with a validation accuracy of 99.66% for the ADBase database and 97.32% for the HACDB database. (Abstract)
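A VGG-style network of the kind the abstract describes is defined by simple layer bookkeeping: 3x3 padded convolutions preserve spatial size, 2x2 max-pooling halves it, and fully-connected layers consume the flattened result. The sketch below traces shapes and parameter counts through a small hypothetical stack (the layer list is illustrative, not the paper's exact thirteen-convolution Alphanumeric VGG net):

```python
def vgg_shapes(input_hw, channels_in, layers):
    """Trace tensor shape and total parameter count through a VGG-style stack.
    layers: list of ('conv', out_channels), ('pool',), or ('fc', units).
    Convs are 3x3 with padding 1 (shape-preserving); pools are 2x2, stride 2."""
    h, w = input_hw
    c = channels_in
    params = 0
    flat = None  # becomes the flattened feature count at the first fc layer
    for layer in layers:
        if layer[0] == 'conv':
            out_c = layer[1]
            params += (3 * 3 * c + 1) * out_c  # weights plus one bias per filter
            c = out_c
        elif layer[0] == 'pool':
            h, w = h // 2, w // 2
        elif layer[0] == 'fc':
            units = layer[1]
            if flat is None:
                flat = h * w * c  # flatten the spatial feature map once
            params += (flat + 1) * units
            flat = units
    final = (h, w, c) if flat is None else (flat,)
    return final, params
```

Running this over a 32x32 grayscale input with two conv/pool stages and a 10-way classifier shows why the fully-connected layers dominate the parameter budget, a standard observation about VGG-style designs.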
Palazzi, Maria, et al. Online Division of Labour: Emergent Structures in Open Source Software. arXiv:1903.03375. Internet Interdisciplinary Institute, Open University of Catalonia computer theorists report that even group-wide developments of computational codes can be seen to take on and follow the same common course as all other organic assemblies. A further implication, we add, would be another perception that cosmic evolutionary nature draws upon and repeats this same complex adaptive systems generative program at each and every instance. See also a cited reference, Multi-scale Structure and Geographic Drivers of Cross-infection within Marine Bacteria and Phages, in the ISME Journal (7/520, 2013), which describes a similar pattern for microbes.
The development of Open Source Software fundamentally depends on the participation and commitment of volunteer developers to progress. Several works have presented strategies to increase the on-boarding and engagement of new contributors, but little is known on how these diverse groups of developers self-organise to work together. To understand this, one must consider that, on one hand, platforms like GitHub provide a virtually unlimited development framework: any number of actors can potentially join to contribute in a decentralised, distributed, remote, and asynchronous manner. On the other, however, it seems reasonable that some sort of hierarchy and division of labour must be in place to meet human biological and cognitive limits, and also to achieve some level of efficiency. (Abstract excerpt)
Rupp, Matthias. Machine Learning for Quantum Mechanics in a Nutshell. International Journal of Quantum Chemistry. 115/16, 2015. In a special issue on quantum mechanics and machine learning, a Fritz Haber Institute, Berlin, bioinformatician provides an introductory overview. (2015 Universality)