Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

II. Planetary Prodigy: A Global Sapiensphere Learns by Her/His Self

1. Earthificial Intelligence: Deep Neural Network Learning

Hassabis, Demis, et al. Neuroscience-Inspired Artificial Intelligence. Neuron. 95/2, 2017. We note this entry because the lead author founded DeepMind, a premier AI enterprise based in London, in 2010, which Google purchased in 2014 for over $500 million. It is a broad survey of the past, present, and future of this brain-based endeavor, guided by advances in understanding how cerebral network dynamics compose, think, and actively learn.

The successful transfer of insights gained from neuroscience to the development of AI algorithms is critically dependent on the interaction between researchers working in both these fields, with insights often developing through a continual handing back and forth of ideas between fields. In the future, we hope that greater collaboration between researchers in neuroscience and AI, and the identification of a common language between the two fields, will permit a virtuous circle whereby research is accelerated through shared theoretical insights and common empirical advances. We believe that the quest to develop AI will ultimately also lead to a better understanding of our own minds and thought processes. (Conclusion, 255)

Hayasaki, Erika. Women vs. the Machine. Foreign Policy. Jan/Feb, 2017. The UC Irvine professor of literary journalism and author insightfully reveals a pervasive male bias throughout the text and symbolism of websites, which then seems to be carried over into the rushed AI takeover.

In the not-so-distant future, artificial intelligence will be smarter than humans. But as the technology develops, absorbing cultural norms from its creators and the internet, it will also become more intolerant, racist, and sexist.

Jones, Nicola. The Learning Machines. Nature. 505/146, 2014. An excellent report about how the artificial intelligence endeavor, after many fitful years, is lately aided by big data and cloud prowess so as to attain mature capabilities. (Mindkind)

Kim, Edward and Robert Brunner. Star-Galaxy Classification Using Deep Convolutional Neural Networks. arXiv:1608.04369. We cite this entry by a University of Illinois physicist and an astronomer to show how an international collaborative community uses cerebral methods to analyze vast amounts of data. In regard, one might assume a nascent global brain learning on its own. See also Deep Recurrent Neural Networks for Supernovae Classification by Tom Charnock and Adam Moss at arXiv:1606.07442 for another example. (Spiral Science)
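
For readers who wish to see the method in miniature, here is a hedged sketch of a convolutional classifier of the general kind such studies employ. The cutout size, layer widths, and class labels are illustrative assumptions of ours, not the architecture reported in the paper.

```python
# Minimal convolutional classifier sketch for star/galaxy image cutouts.
# Illustrative only: the 1x28x28 cutout shape, layer sizes, and names are
# assumptions, not the network used by Kim & Brunner.
import torch
import torch.nn as nn

class StarGalaxyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-band cutout
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, 2)       # star vs. galaxy

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = StarGalaxyCNN()
cutouts = torch.randn(8, 1, 28, 28)   # a batch of stand-in survey cutouts
logits = model(cutouts)               # shape (8, 2)
```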

Knight, Will. The Dark Secret at the Heart of AI. MIT Technology Review. 120/3, 2017. A senior editor worries that no one knows how the most advanced algorithms do what they do. In this section, we also want to record such reality checks so that as this computational prowess bursts upon us, it remains within human control and service, rather than taking over. See also herein Women vs. the Machine by Erika Hayasaki for another dilemma.

Kozma, Robert, et al. Artificial Intelligence in the Age of Neural Networks and Brain Computing. Cambridge, MA: Academic Press, 2018. A large international edited volume with 15 authoritative chapters, ranging from The New AI: Basic Concepts and Urgent Risks to Evolving Deep Neural Networks. See especially A Half Century of Progress toward a Unified Neural Theory of Mind and Brain by Stephen Grossberg (search).

Kriegeskorte, Nikolaus and Tai Golan. Neural Network Models and Deep Learning: A Primer for Biologists. arXiv:1902.04704. Columbia University neuroscientists provide a 14-page primer which would be a good entry point for any field. Some sections are Neural nets are universal approximators, Deep networks can capture complex functions, and Deep learning by backpropagation.

Originally inspired by neurobiology, deep neural network models have become a powerful tool of machine learning and artificial intelligence, where they are used to approximate functions and dynamics by learning from examples. Here we give a brief introduction to neural network models and deep learning for biologists. We introduce feedforward and recurrent networks and explain the expressive power of this modeling framework and the backpropagation algorithm for setting the parameters. Finally, we consider how deep neural networks might help us understand the brain's computations. (Abstract)
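
As a concrete illustration of the primer's final topic, the following minimal sketch trains a two-layer feedforward network by backpropagation. The toy data, layer sizes, and learning rate are our own assumptions for demonstration, not material from the paper.

```python
# A minimal sketch of a feedforward network trained by backpropagation.
# All shapes, data, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                   # 100 examples, 3 features
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0    # toy target function

W1 = rng.normal(scale=0.5, size=(3, 8))         # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1))         # hidden -> output weights
lr = 0.1

for step in range(500):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2)))

    # Backward pass: gradients of squared error via the chain rule.
    dp = (p - y) * p * (1 - p)                  # error signal at the output
    dW2 = h.T @ dp / len(X)
    dh = (dp @ W2.T) * (1 - h**2)               # propagate error through tanh
    dW1 = X.T @ dh / len(X)

    W1 -= lr * dW1                              # gradient descent updates
    W2 -= lr * dW2
```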

Levine, Yoav, et al. Deep Learning and Quantum Entanglement: A Fundamental Bridge. arXiv:1704.01552. Along with other entries (Beny, Golkov), Hebrew University of Jerusalem computer scientists, including Amnon Shashua, plumb the depths of this natural, informative synthesis, from the physical cosmos to its cerebral emergence. (Over this stretch, might we imagine ourselves as a genesis universe’s way of attaining its own self-cognizance, and continuance?) See also Quantum Entanglement in Neural Network States at arXiv:1701.04844.

Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as well-defined quantifiers of a deep network's expressive ability to model intricate correlation structures of its inputs. (Abstract excerpt)

Lin, Henry and Max Tegmark. Why does Deep and Cheap Learning Work so Well? arXiv:1608.08225. The Harvard and MIT polymaths review the recent successes of these multiscale neural net algorithms (definitions vary) from a statistical physics context, drawing on renormalization groups and symmetric topologies. (Intelligent Evolution)

Liu, Weibo, et al. A Survey of Deep Neural Network Architectures and their Applications. Neurocomputing. 234/11, 2017. As the abstract notes, computer engineers from Brunel University London, Xiamen University, Yangzhou University, and King Abdulaziz University, Jeddah provide a wide-ranging tutorial on these increasingly useful cognitive methods.

Since the proposal of a fast learning algorithm for deep belief networks in 2006, the deep learning techniques have drawn ever-increasing research interests because of their inherent capability of overcoming the drawback of traditional algorithms dependent on hand-designed features. Deep learning approaches have also been found to be suitable for big data analysis with successful applications to computer vision, pattern recognition, speech recognition, natural language processing, and recommendation systems. In this paper, we discuss some widely-used deep learning architectures and their practical applications. An up-to-date overview is provided on four deep learning architectures, namely, autoencoder, convolutional neural network, deep belief network, and restricted Boltzmann machine. Different types of deep neural networks are surveyed and recent progresses are summarized. (Abstract)
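
To make one of the four surveyed architectures tangible, here is a hedged sketch of a small autoencoder, which learns to reconstruct its own unlabeled inputs. The dimensions, optimizer settings, and random stand-in data are illustrative assumptions rather than details from the survey.

```python
# A minimal autoencoder sketch: compress inputs to a narrow code, then
# reconstruct them. Sizes and training data are illustrative assumptions.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_in=64, n_code=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_code), nn.ReLU())
        self.decoder = nn.Linear(n_code, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optim = torch.optim.Adam(model.parameters(), lr=1e-3)
data = torch.randn(256, 64)                   # stand-in, unlabeled inputs

for _ in range(100):
    optim.zero_grad()
    loss = nn.functional.mse_loss(model(data), data)  # reconstruction error
    loss.backward()
    optim.step()
```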

Lucie-Smith, Luisa, et al. Machine Learning Cosmological Structure Formation. arXiv:1802.04271. We cite this entry by University College London astrophysicists including Hiranya Peiris as an example of the widest range across which new cerebral-based artificial intelligence methods can be applied. If to reflect, who is this person/sapiensphere prodigy who so proceeds as the universe’s way of achieving its own self-quantified description?

Marchetti, Tommaso, et al. An Artificial Neural Network to Discover Hypervelocity Stars. arXiv:1704.07990. An eight-member European astrophysicist team finds this cerebral procedure to be a fruitful way to distill the myriad data findings of the Gaia space telescope mission. Once again, we note how such a collaboration may appear as a worldwide sapiensphere proceeding to learn on her/his own.

The paucity of hypervelocity stars (HVSs) known to date has severely hampered their potential to investigate the stellar population of the Galactic Centre and the Galactic Potential. The first Gaia data release gives an opportunity to increase the current sample. The challenge is of course the disparity between the expected number of hypervelocity stars and that of bound background stars (around 1 in 10^6). We have applied a novel data mining algorithm based on machine learning techniques, an artificial neural network, to the Tycho-Gaia astrometric solution (TGAS) catalogue. (Abstract excerpt)
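
The abstract's central difficulty, roughly one candidate per 10^6 background stars, is the kind of extreme class imbalance a network must be trained around. The sketch below shows one common remedy, up-weighting the rare positive class in the loss; the feature count, weight value, and stand-in data are our assumptions, not the authors' pipeline.

```python
# A hedged sketch of neural-network candidate selection under extreme
# class imbalance. Features, sizes, and weights are assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(5, 16),   # e.g. positions, parallax, proper motions per star
    nn.ReLU(),
    nn.Linear(16, 1),   # logit: hypervelocity-star candidate or not
)

# Up-weight the rare positive class so it is not swamped by background.
loss_fn = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([1e4]))
optim = torch.optim.Adam(net.parameters(), lr=1e-3)

features = torch.randn(1024, 5)                 # stand-in catalogue rows
labels = (torch.rand(1024, 1) < 0.001).float()  # very few positives

for _ in range(200):
    optim.zero_grad()
    loss = loss_fn(net(features), labels)
    loss.backward()
    optim.step()
```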
