II. Planetary Prodigy: A Global Sapiensphere Learns by Her/His Self
1. Earthificial Intelligence: Deep Neural Network Learning
Sheneman, Leigh and Arend Hintze. Evolving Autonomous Learning in Cognitive Networks. Nature Scientific Reports. 7/16712, 2017. Michigan State University computer scientists post an example of the ongoing revision of artificial intelligence, broadly conceived, from decades of dead mechanisms to a vital accord with evolutionary cerebral architectures and activities. See also The Role of Conditional Independence in the Evolution of Intelligent Systems from this group including Larissa Albantakis at arXiv:1801.05462.
There are two common approaches for optimizing the performance of a machine: genetic algorithms and machine learning. A genetic algorithm is applied over many generations whereas machine learning works by applying feedback until the system meets a performance threshold. These methods have been previously combined, particularly in artificial neural networks using an external objective feedback mechanism. We adapt this approach to Markov Brains, which are evolvable networks of probabilistic and deterministic logic gates. We show that Markov Brains can incorporate these feedback gates in such a way that they do not rely on an external objective feedback signal, but instead can generate internal feedback that is then used to learn. This results in a more biologically accurate model of the evolution of learning, which will enable us to study the interplay between evolution and learning. (Abstract)
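The abstract's pairing of evolution with lifetime learning can be illustrated with a toy sketch. This is not the paper's Markov Brain implementation; it is a minimal stand-in in which a genome encodes an initial weight and a learning rate, and the agent's feedback signal is computed internally from its own sensory prediction error rather than from an external objective score. All names and values here are illustrative assumptions.

```python
import random

random.seed(0)

SENSOR = 0.75  # value the agent's sensor reports each step of its lifetime

def lifetime_learning(weight, rate, steps=50):
    """Within-lifetime learning: the agent predicts its sensor reading and
    nudges its weight by the internally computed prediction error -- no
    external teacher or objective score is consulted."""
    for _ in range(steps):
        feedback = SENSOR - weight       # internal feedback: prediction error
        weight += rate * feedback
    return weight

def fitness(genome):
    """Fitness is measured only after lifetime learning has run."""
    init_w, rate = genome
    learned = lifetime_learning(init_w, rate)
    return -abs(SENSOR - learned)        # closer final prediction = fitter

def evolve(pop_size=30, generations=40):
    """Genetic algorithm over genomes (initial weight, learning rate)."""
    pop = [(random.random(), random.random() * 0.2) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [(w + random.gauss(0, 0.05),
                     max(1e-3, r + random.gauss(0, 0.01)))
                    for (w, r) in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because selection acts on post-learning performance, evolution discovers learning rates that let each agent adapt within its own lifetime, the interplay the paper sets out to study.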
Silver, David, et al. Mastering the Game of Go without Human Knowledge. Nature. 550/354, 2017. An 18-member team (all male) from Google’s DeepMind London artificial intelligence group, including founder Demis Hassabis and European Go champion Fan Hui, enhance the capabilities of their neural network learning programs. With regard to the second quote for the gist of the paper, these algorithmic, reinforcement methods appear as a microcosm of an ascendant, self-reinforcing evolutionary education as it may at last reach a consummate worldwise sapience. While we are wary of game metaphors, a vital truth could be gleaned. What are we trying to say? To wit, that a universe-to-human quickening procreation seems like a game that plays itself. In regard, it may be the case that only one sentient ovoplanet is needed to achieve its self-observation, and realization, so as in this venue, “to log on to itself.” While life’s course is a long slog of stochastic chance, rife with injustice and tragedy, it is a game that yet can be won. As Great Earth, Natural Algorithms, Cosmo Opus and elsewhere try to evoke, our Geonate moment may give us an opportunity to be the fittest people and planet by virtue of a Cosmonate act of self-selection and continuance.
A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo. (Abstract)
Soltoggio, Andrea, et al. Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks. Neural Networks. 108/48, 2018. Loughborough University, University of Central Florida and University of Copenhagen computer scientists draw upon the evolutionary and biological origins of this ubiquitous multicomplex learning system to achieve further understandings and usages. Their theme is that life’s temporal development seems to be a learning, neuromodulation, plasticity, and discovery progression. The approach is seen as akin to the Evolutionary Neurodynamics school of Richard Watson, et al, see section V.C. See also herein Evolution in Groups: A Deeper Look at Synaptic Cluster Driven Evolution of Deep Neural Networks (M. Shafiee) and other similar entries.
Biological neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifetime learning. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) use simulated evolution in-silico to breed plastic neural networks, artificial systems composed of sensors, outputs, and plastic components that change in response to sensory-output experiences in an environment. These systems may reveal key algorithmic ingredients of adaptation, autonomously discover novel adaptive algorithms, and lead to hypotheses on the emergence of biological adaptation. In particular, the limitations of hand-designed structures and algorithms currently used in most deep neural networks could be overcome by more flexible and innovative solutions. This paper brings together a variety of inspiring ideas that define the field of EPANNs. (Abstract)
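A minimal sketch of the EPANN idea, evolving the plasticity of a network rather than its fixed weights, can be given with a single Hebbian synapse. The genome here is just one plasticity coefficient, the saturating unit and task are invented for illustration, and nothing below is the authors' actual system.

```python
import random

random.seed(2)

def lifetime(eta, trials=30):
    """A single plastic synapse: Hebbian update w += eta * pre * post.
    A good eta lets the synapse strengthen itself during life until the
    response to input saturates at 1.0."""
    w = 0.1
    for _ in range(trials):
        pre = 1.0
        post = min(1.0, w * pre)     # simple saturating output unit
        w += eta * pre * post        # Hebbian plasticity; eta is evolved
    return min(1.0, w)               # final response strength

def fitness(eta):
    return lifetime(eta)             # stronger learned response = fitter

def evolve(pop=20, gens=25):
    """Simulated evolution breeds the plasticity rule, not the weights."""
    etas = [random.uniform(0.0, 0.05) for _ in range(pop)]
    for _ in range(gens):
        etas.sort(key=fitness, reverse=True)
        parents = etas[: pop // 2]
        etas = parents + [max(0.0, e + random.gauss(0, 0.01))
                          for e in parents]
    return max(etas, key=fitness)

best_eta = evolve()
print(best_eta, lifetime(best_eta))
```

Evolution never touches the weight directly; it only tunes how readily the weight changes in response to experience, which is the paper's core distinction from hand-designed deep networks.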
Stanley, Kenneth, et al. Designing Neural Networks through Neuroevolution. Nature Machine Intelligence. January, 2019. Uber AI Labs, San Francisco researchers including Jeff Clune provide a tutorial to date for this active field, which intends to intentionally but respectfully facilitate external cognitive facilities. See also in this new journal and issue Evolving Embodied Intelligence from Materials to Machines by David Howard, et al.
Much of recent machine learning has focused on deep learning, in which neural network weights are trained through variants of stochastic gradient descent. An alternative approach comes from the field of neuroevolution, which harnesses evolutionary algorithms to optimize neural networks, inspired by the fact that natural brains themselves are the products of an evolutionary process. Neuroevolution enables important capabilities that are typically unavailable to gradient-based approaches, including learning neural network building blocks, hyperparameters, architectures and algorithms for learning itself. Neuroevolution differs from deep reinforcement learning by maintaining a population of solutions during search, enabling exploration and parallelization. This Review looks at several key aspects of modern neuroevolution, including large-scale computing, the benefits of novelty and diversity, the power of indirect encoding, and the field’s contributions to meta-learning and architecture search. (Abstract excerpt)
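The gradient-free alternative the review describes can be shown with a tiny (1 + λ) evolution strategy that fits a one-neuron network to the logical AND. This is a schematic sketch, not any method from the review; the architecture, mutation scale, and task are illustrative assumptions.

```python
import math, random

random.seed(3)

# Training set: logical AND, with targets in tanh's (-1, 1) range.
DATA = [((0, 0), -1), ((0, 1), -1), ((1, 0), -1), ((1, 1), 1)]

def forward(params, x):
    """One tanh neuron: the entire 'network' being evolved."""
    w1, w2, b = params
    return math.tanh(w1 * x[0] + w2 * x[1] + b)

def loss(params):
    return sum((forward(params, x) - y) ** 2 for x, y in DATA) / len(DATA)

def evolve(generations=300, offspring=8, sigma=0.3):
    """(1 + lambda) evolution strategy: mutate the parent's weights and keep
    the best child only if it beats the parent -- no gradients anywhere."""
    parent = (0.0, 0.0, 0.0)
    for _ in range(generations):
        children = [tuple(p + random.gauss(0, sigma) for p in parent)
                    for _ in range(offspring)]
        best = min(children, key=loss)
        if loss(best) < loss(parent):
            parent = best
    return parent

weights = evolve()
print(weights, loss(weights))
```

A full neuroevolution system would hold a diverse population and could also mutate the architecture itself; this single-parent strategy shows only the selection-over-mutation core that distinguishes the approach from stochastic gradient descent.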
Taylor, P., et al. The Global Landscape of Cognition: Hierarchical Aggregation as an Organizational Principle of Human Cortical Networks and Functions. Nature Scientific Reports. 5/18112, 2015. As the deep neural network revolution began via theory and neuroimaging, UMass Amherst neuroscientists including Hava Siegelmann attest to a nested connectome architecture which then serves cognitive achievements. On page 15, a graphic pyramid rises from a somatosensory, prosodic base through five stages to reason, language, visual concepts. Might one now imagine this scale as a personal ontogeny recap of life’s evolutionary sapient awakening? See Deep Neural Networks Abstract like Humans by Alex Gain and Hava Siegelmann at arXiv:1905.11515 for a 2019 version.
Tibbetts, John. The Frontiers of Artificial Intelligence. BioScience. 68/1, 2018. A science writer provides a good survey of how deep learning AI capabilities are lately being availed to much benefit worldwide in agricultural crop surveys, medical diagnostic image analysis, flora and fauna conservation, and more. Of course we need be wary and careful, but ought to appreciate its many advantages.
Vaidya, Satyarth, et al. Brief Review of Computational Intelligence Algorithms. arXiv:1901.00983. Birla Institute of Technology and Science, Pilani Campus, Dubai computer scientists survey a wide array of brain-based and nature-inspired algorithmic methods, along with showing how they are finding service in far-afield domains from geology to cerebral phenomena.
Computational Intelligence algorithms have been found to deliver near-optimal solutions. In this paper we propose a new hierarchy which classifies algorithms based on their sources of inspiration. The algorithms have two broad domains, namely modeling of the human mind and nature-inspired intelligence. Algorithms modeling the human mind take their motivation from the manner in which humans perceive and deal with information. Similarly, algorithms of nature-inspired intelligence are based on ordinary phenomena occurring in nature. The latter has further been broken into swarm intelligence, geosciences and artificial immune systems. (Abstract)
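Of the nature-inspired branch named above, swarm intelligence is the easiest to sketch. The following is a canonical particle swarm optimization loop minimizing a simple quadratic bowl; the parameter values are common textbook defaults, not anything prescribed by this review.

```python
import random

random.seed(4)

def sphere(x):
    """Objective to minimize: quadratic bowl with its optimum at the origin."""
    return sum(v * v for v in x)

def pso(dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Canonical particle swarm: each particle is pulled toward its own best
    position (pbest) and the swarm's best (gbest), with inertia w."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
                if sphere(pos[i]) < sphere(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
print(best, sphere(best))
```

No particle knows the global landscape; the swarm converges through the shared memory of personal and collective bests, the hallmark of this algorithm family.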
Wason, Ritika. Deep Learning: Evolution and Expansion. Cognitive Systems Research. 52/701, 2018. A Bharati Vidyapeeth’s Institute of Computer Applications and Management, New Delhi professor of computer science provides a wide-ranging survey of this neural net based method since the 1980s by way of citing over 50 worldwide approaches to this day.
Young, Tom, et al. Recent Trends in Deep Learning Based Natural Language Processing. IEEE Computational Intelligence Magazine. 13/3, 2018. Beijing Institute of Technology and Nanyang Technological University, Singapore computer scientists present a review tutorial about the state of the fruitful avail of neural net methods to parse linguistic textual writings. An affinity between recurrent networks and recursive script and also speech appears to be innately evident. Another commonality is that both cerebral and corpora modes are involved with prior memories of information and knowledge. See also Identifying DNA Methylation Modules Associated with Cancer by Probabilistic Evolutionary Learning in this issue, and earlier A Primer on Neural Network Models for Natural Language Processing by Yoav Goldberg in the Journal of Artificial Intelligence Research (57/345, 2016).
Deep learning methods employ multiple processing layers to learn hierarchical representations of data, and have produced state-of-the-art results in many domains. Recently, a variety of model designs and methods have blossomed in the context of natural language processing (NLP). In this paper, we review significant deep learning related models and methods that have been employed for numerous NLP tasks and provide a walk-through of their evolution. We also summarize, compare and contrast the various models and put forward a detailed understanding of the past, present and future of deep learning in NLP. (Abstract)
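The recurrent-network affinity for language noted above rests on a hidden state that carries memory across tokens. A toy fixed-weight recurrent cell makes this concrete: the same bag of tokens read in a different order yields a different state, which a bag-of-words model cannot distinguish. The weights, vocabulary, and sizes below are arbitrary illustrative choices, not from any cited model.

```python
import math

# Tiny fixed-weight recurrent cell: vocabulary of 3 one-hot tokens, hidden size 2.
Wxh = [[0.6, -0.4, 0.2], [-0.3, 0.5, 0.1]]   # input -> hidden
Whh = [[0.4, -0.2], [0.1, 0.3]]              # hidden -> hidden (the memory)
bh = [0.05, -0.05]

def rnn_step(h, token):
    """One recurrent step: blend the current one-hot token with the running
    hidden state, so the state encodes the whole prefix read so far."""
    return [math.tanh(Wxh[i][token]
                      + sum(Whh[i][j] * h[j] for j in range(2))
                      + bh[i])
            for i in range(2)]

def encode(tokens):
    """Fold a token sequence into a single fixed-size hidden state."""
    h = [0.0, 0.0]
    for t in tokens:
        h = rnn_step(h, t)
    return h

# Same tokens, different order: the recurrent encoder tells them apart.
h_ab = encode([0, 1])
h_ba = encode([1, 0])
print(h_ab, h_ba)
```

Trained NLP systems add learned weights, gating, and attention on top, but the order-sensitive state shown here is the mechanism that lets recurrent models parse sequential text at all.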
Yue, Tianwei and Haohan Wang. Deep Learning for Genomics: A Concise Overview. arXiv:1802.00810. Xi’an Jiaotong University and Carnegie Mellon University scientists post an invited chapter for the 2018 Springer edition Handbook of Deep Learning Applications. We record this entry as an example of how so many natural and social realms are now being treated by way of an organic neural network-like cognitive process.
Advancements in genomic research such as high-throughput sequencing techniques have driven modern genomic studies into "big data" disciplines. This data explosion is constantly challenging conventional methods used in genomics. In parallel with the urgent demand for robust algorithms, deep learning has succeeded in a variety of fields such as vision, speech, and text processing. Yet genomics entails unique challenges since we are expecting from deep learning a superhuman intelligence that explores beyond our knowledge to interpret the genome. In this paper, we briefly discuss the strengths of different models from a genomic perspective so as to fit each particular task with a proper deep architecture. (Abstract excerpts)
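A common first step when deep networks meet the genome is one-hot encoding, after which a 1-D convolutional filter can scan for a sequence motif. The sketch below hand-codes that scan with a filter matching the TATA motif; the sequence and motif are illustrative, and real genomic models learn their filters rather than fixing them.

```python
BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a list of 4-channel one-hot vectors."""
    return [[1.0 if b == base else 0.0 for base in BASES] for b in seq]

def conv_scan(seq, motif):
    """Slide a position-weight 'filter' (the one-hot motif) along the
    sequence; the score at each offset counts matching bases, exactly as a
    1-D convolution over the one-hot channels would."""
    enc, filt = one_hot(seq), one_hot(motif)
    k = len(motif)
    return [sum(enc[i + j][c] * filt[j][c]
                for j in range(k) for c in range(4))
            for i in range(len(seq) - k + 1)]

seq = "GGCGTATAAGC"
scores = conv_scan(seq, "TATA")
print(scores, scores.index(max(scores)))  # peak marks the motif's offset
```

Stacking many such learned filters, with nonlinearities between layers, is how convolutional genomic models build up from base-level patterns to regulatory predictions.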
Zhu, Jun, et al. Big Learning with Bayesian Methods. National Science Review. Online May, 2017. In this Oxford Academic journal of technical advances from China, as its AI programs intensify, State Key Lab for Intelligent Technology and Systems, Tsinghua University computer scientists consider this iterative method of winnowing uncertainties and probabilities, often with massive data input, so as to reach sufficiently credible answers unto knowledge.
The explosive growth in data volume and the availability of cheap computing resources have sparked increasing interest in Big learning, an emerging subfield that studies scalable machine learning algorithms, systems and applications with Big Data. Bayesian methods represent one important class of statistical methods for machine learning, with substantial recent developments on adaptive, flexible and scalable Bayesian learning. This article provides a survey of the recent advances in Big learning with Bayesian methods, termed Big Bayesian Learning, including non-parametric Bayesian methods for adaptively inferring model complexity, regularized Bayesian inference for improving the flexibility via posterior regularization, and scalable algorithms and systems based on stochastic subsampling and distributed computing for dealing with large-scale applications. We also provide various new perspectives on the large-scale Bayesian modeling and inference. (Abstract)
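The scalability theme above, absorbing big data in subsamples, can be illustrated with the simplest Bayesian model there is: a conjugate Beta-Bernoulli coin, updated one minibatch at a time. This toy is far below the paper's nonparametric and distributed machinery, but it shows why streaming updates lose nothing when the model is conjugate.

```python
def update(alpha, beta, batch):
    """Conjugate Beta-Bernoulli update: a minibatch of 0/1 outcomes is
    absorbed exactly by adding its counts to the Beta parameters, so the
    data can be streamed in subsamples with no approximation."""
    heads = sum(batch)
    return alpha + heads, beta + len(batch) - heads

data = [1] * 70 + [0] * 30          # 70 successes, 30 failures overall
alpha, beta = 1.0, 1.0              # uniform Beta(1, 1) prior

for i in range(0, len(data), 10):   # stream the data in minibatches of 10
    alpha, beta = update(alpha, beta, data[i:i + 10])

posterior_mean = alpha / (alpha + beta)
print(alpha, beta, posterior_mean)  # posterior is Beta(71, 31)
```

For non-conjugate models the same streaming shape survives, but exact updates give way to the stochastic subsampling and variational approximations the survey covers.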