Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

VII. Our Earthuman Ascent: A Major Evolutionary Transition in Individuality

2. Systems Neuroscience: Multiplex Networks and Critical Function

Rizzolatti, Giacomo, et al. Mirrors in the Mind. Scientific American. November, 2006. With co-authors Leonardo Fogassi and Vittorio Gallese, all from the Department of Neuroscience, University of Parma, a popular introduction to the discovery of mirror neurons in the brain, which are activated either when a person performs an action or observes another doing the same. Their importance is just beginning to be appreciated for the evolution and enhancement of primate and human sociality, along with language development and other psychological advances. By this attribute, human persons are inherently wired for and linked in social behavior. A deficit or absence of this capability may then be a cause of autism.

Further publications and resources can be accessed by searching Google for the author’s name, which leads to his website, or for the phrase ‘mirror neuron.’ A technical source is Rizzolatti, G. and L. Craighero. The Mirror-Neuron System. Annual Review of Neuroscience. 27/169, 2004. Such an attribute is also noted in psychologist Daniel Goleman’s book Social Intelligence: The New Science of Human Relationships. (New York: Bantam Books, 2006).

Rockwell, W. Teed. Neither Brain nor Ghost. Cambridge: MIT Press, 2005. An attempt to move beyond the Cartesian duality of matter and mind via connectionism and dynamic systems theory.

Rubinov, Mikail, et al. Symbiotic Relationship between Brain Structure and Dynamics. BMC Neuroscience. 10/55, 2009. In this British online journal, an international team from Australia, Japan and the United States, including Olaf Sporns, provides a summary to date of the worldwide nonlinear revolution, as collaborative humankinder retrospectively quantifies the personal human brain anatomy, physiology, and function from which it arose.

Brain structure and dynamics are interdependent through processes such as activity-dependent neuroplasticity. In this study, we aim to theoretically examine this interdependence in a model of spontaneous cortical activity. To this end, we simulate spontaneous brain dynamics on structural connectivity networks, using coupled nonlinear maps. On slow time scales structural connectivity is gradually adjusted towards the resulting functional patterns via an unsupervised, activity-dependent rewiring rule. The present model has been previously shown to generate cortical-like, modular small-world structural topology from initially random connectivity. (Background) Our results outline a theoretical mechanism by which brain dynamics may facilitate neuroanatomical self-organization. We find time scale dependent differences between structural and functional networks. These differences are likely to arise from the distinct dynamics of central structural nodes. (Conclusion)

Modular small-world network topology may represent a basic organizational principle of neuroanatomical connectivity across multiple spatial scales [1-6]. Small-world networks are clustered (like ordered networks), and efficiently interconnected (like random networks) [1]. Modular networks are characterized by the presence of highly interconnected groups of nodes (modules) [7]. Hence a modular small-world connectivity reconciles the opposing demands of segregation and integration of functionally specialized brain areas [8] in the face of spatial wiring constraints [9]. (2)
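The modular small-world topology the quote describes can be illustrated with a minimal sketch, not taken from the paper: build a toy graph of densely wired modules joined by sparse inter-module links, then measure the two signature quantities, high clustering (like ordered networks) and short paths (like random networks). All parameters here are illustrative assumptions.

```python
import random
from itertools import combinations

random.seed(0)

def clustering(adj):
    """Mean local clustering coefficient: how interlinked each node's neighbors are."""
    total = 0.0
    for node, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length, via breadth-first search from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = [src]
        while queue:
            u = queue.pop(0)
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(d for n, d in dist.items() if n != src)
        pairs += len(dist) - 1
    return total / pairs

def make_modular(n_modules=4, size=10, p_in=0.8, p_out=0.05):
    """Dense links within modules, sparse links between them (toy parameters)."""
    n = n_modules * size
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        p = p_in if i // size == j // size else p_out
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)
    return adj

modular = make_modular()
print("clustering:", round(clustering(modular), 3))
print("avg path length:", round(avg_path_length(modular), 3))
```

The few inter-module links act like the "shortcuts" of a small world: clustering stays high inside modules while overall path lengths remain short, reconciling segregation with integration as the quote notes.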

Sanborn, Adam and Nick Chater. Bayesian Brains without Probabilities. Trends in Cognitive Sciences. Online March, 2017. University of Warwick and Warwick Business School behavioral neuroscientists finesse this popular turn to explain cognitive behavior as an iterative process of likely probabilities. Rather than just better guesses, our cerebrations are seen to repeatedly survey an array of candidate or sample options, from which choices are made.

Bayesian explanations have swept through cognitive science over the past two decades, from intuitive physics and causal learning, to perception, motor control and language. Yet people flounder with even the simplest probability questions. What explains this apparent paradox? How can a supposedly Bayesian brain reason so poorly with probabilities? In this paper, we propose a direct and perhaps unexpected answer: that Bayesian brains need not represent or calculate probabilities at all and are, indeed, poorly adapted to do so. Instead, the brain is a Bayesian sampler. Only with infinite samples does a Bayesian sampler conform to the laws of probability; with finite samples it systematically generates classic probabilistic reasoning errors, including the unpacking effect, base-rate neglect, and the conjunction fallacy. (Abstract)

Bayesian sampler: an approximation to a Bayesian model that uses a sampling algorithm such as MCMC to avoid intractable integrals. While the model is used to perform Bayesian inference, the sampling algorithm itself is simply a mechanism for producing samples. Deep belief network: a hierarchical artificial neural network of binary variables. Each layer of the network can be composed on simpler networks such as Boltzmann machines. Markov chain Monte Carlo: a family of algorithms for drawing samples from probability distributions. These algorithms transition from state to state with probabilities that depend only on the current state. The transition probabilities are carefully chosen so that the states are (dependent) samples of a target probability distribution. (Glossary)
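The glossary's MCMC entry can be sketched in a few lines. This is an illustrative Metropolis sampler on a made-up four-state target, not the authors' model: transitions depend only on the current state, and the resulting dependent samples approach the target proportions only as the sample count grows, echoing the paper's point that finite samples systematically deviate.

```python
import random

random.seed(1)

# Unnormalized target distribution over states 0..3; Metropolis
# needs only ratios, so the normalizer is never computed.
target = {0: 1.0, 1: 2.0, 2: 4.0, 3: 1.0}

def metropolis(n_samples, start=0):
    """Metropolis sampler on a 4-state ring: propose a neighboring state,
    accept with probability min(1, target(new)/target(old)). The transition
    depends only on the current state (the Markov property)."""
    state = start
    samples = []
    for _ in range(n_samples):
        proposal = (state + random.choice([-1, 1])) % 4
        if random.random() < min(1.0, target[proposal] / target[state]):
            state = proposal
        samples.append(state)  # rejected proposals repeat the current state
    return samples

samples = metropolis(50000)
freq = {s: samples.count(s) / len(samples) for s in target}
print(freq)  # approaches the target proportions 1/8, 2/8, 4/8, 1/8
```

Running the same sampler with only a handful of samples gives badly skewed frequencies, which is the intuition behind the paper's finite-sample reasoning errors.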

Saxe, Andrew, et al. A Mathematical Theory of Semantic Development in Deep Neural Networks. Proceedings of the National Academy of Sciences. 116/11537, 2019. In a highly technical article, Andrew Saxe, Oxford University, James McClelland, Stanford University (original developer with David Rumelhart of Parallel Distributed Processing in the 1980s), and Surya Ganguli, Google Brain, CA advance this machine-to-brain revolution so as to better organize and encode knowledge by means of typicality and category coherence, optimal learning, invariant similarities and more. See also Evolution of Scientific Networks in Biomedical Texts at arXiv:1810.10534 and Human Information Processing in Complex Networks at arXiv:1906.00926.

An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge. These results raise a fundamental question: what are the principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge? We address this by analyzing the nonlinear dynamics of learning in deep linear networks. We find solutions to these learning dynamics that explain disparate phenomena in semantic cognition such as the hierarchical differentiation of concepts through developmental transitions, the ubiquity of semantic illusions between transitions, the emergence of category coherence which controls the speed of semantic processing, and the conservation of semantic similarity in neural representations across species. Our simple neural model can thus recapitulate diverse regularities underlying semantic development, while providing insight into how the statistical structure of an environment can interact with nonlinear deep learning dynamics to produce these regularities. (Abstract edits)
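The "deep linear networks" the abstract names have a hallmark dynamic that a scalar caricature can show. In this sketch, which is an illustration under made-up constants and not the paper's analysis, a two-layer product w2*w1 learns a target strength from a small initialization and traces the slow-then-rapid sigmoidal trajectory behind the stage-like developmental transitions the authors derive.

```python
# Scalar caricature of deep linear learning dynamics: the two-layer
# "network" y = w2 * w1 * x learns a target strength s by gradient
# descent. From small balanced weights, the product w2*w1 stays near
# zero for many steps, then rises rapidly and saturates -- a sigmoid.
s = 3.0          # target input-output strength (plays the role of a singular value)
w1 = w2 = 0.01   # small balanced initialization
lr = 0.01        # illustrative learning rate

trajectory = []
for step in range(400):
    err = w2 * w1 - s
    # Gradients of the squared error 0.5*err**2 with respect to each layer.
    g1, g2 = err * w2, err * w1
    w1 -= lr * g1
    w2 -= lr * g2
    trajectory.append(w2 * w1)

print("product after 100 steps:", round(trajectory[99], 4))   # still near zero
print("final product:", round(trajectory[-1], 4))             # near the target s
```

The long plateau followed by abrupt learning is the scalar analogue of the hierarchical differentiation of concepts through developmental transitions described above.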

Schmidhuber, Jurgen. Deep Learning in Neural Networks: An Overview. Neural Networks. 61/2, 2015. A technical tutorial by the University of Lugano, Switzerland, expert on advances in artificial or machine learning techniques, based on how our own brains think. Sophisticated algorithms, multiple processing layers with complex structures, assignment paths, non-linear transformations, and so on are at work as they refer new experiences to prior representations for comparison. See also, for example, Semantics, Representations and Grammars for Deep Learning by David Balduzzi at arXiv:1509.08627. Our interest recalls recent proposals by Richard Watson, Eörs Szathmáry, et al to appreciate life’s evolution as quite akin to a neural net, connectionist learning process.

Sendhoff, Bernhard, et al, eds. Creating Brain-Like Intelligence: From Basic Principles to Complex Intelligent Systems. Berlin: Springer, 2009. (Lecture Notes in Artificial Intelligence LNAI 5436) Sendhoff and co-editors Olaf Sporns and Edgar Korner lead off with a chapter on “From Complex Networks to Intelligent Systems.” The work is a mature example of how cerebral and cognitive studies have morphed to this dynamical approach, just as systems biology/genetics has done. From the quotes, please note the same ubuntu, creative union of semi-autonomy and integration as everywhere else, whose ubiquity quite implies, and springs from, a common mathematical source.

The accumulation of ever more detailed biological, cognitive and psychological data cannot substitute for general principles that underlie the emergence of intelligence. It is our belief that we have to more intensively pursue research approaches that aim at a holistic and embedded view of intelligence from many different disciplines and viewpoints. (4) The aim of theoretical neuroscience is to understand the general principles behind the organization and operation of nervous systems. (4)

The brain is a complex system because it consists of numerous elements that are organized into structural and functional networks which in turn are embedded in a behaving and adapting organism. Brain anatomy has long attempted to chart the connection patterns of complex nervous systems, but only recently, with the arrival of modern network analysis tools, have we been able to discern principles of organization within structural brain networks. One of the overarching structural motifs points to the existence of segregated communities (modules) of brain regions that are functionally similar within each module and less similar between modules. (5)

Seung, Sebastian. Connectome: How the Brain’s Wiring Makes us Who We Are. Boston: Houghton Mifflin Harcourt, 2012. An MIT computational neuroscientist provides an accessible entry to imaginations and expansions of everything neural and cognitive in a similar genre to genome networks. Main sections of Connectionism, Nature and Nurture, Connectomics, and Beyond Humanity, well cover these frontiers. In closing, a "transhumanism" is proposed that would implement these advances as a way to recover meaningful lives now impoverished by Steven Weinberg’s “pointless” science.

In the same way, a connectome is the totality of connections between the neurons in a nervous system. The term, like genome, implies completeness. A connectome is not one connection, or even many. It is all of them. (xiii)

The Bible said that God made man in his own image. The German philosopher Ludwig Feuerbach said that man made God in his own image. The transhumanists say that humanity will make itself into God. (273)

Shanahan, Murray. Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. Oxford: Oxford University Press, 2010. As neuroscience advances and refines its understandings of a non-linear, dynamic brain architecture and function, anatomy and action, by way of self-organizing, complex systems, an Imperial College London professor of cognitive robotics here provides, with some density, one of the most thorough, intriguing summaries so far. A prime source availed, which has gained currency, is the global workspace theory of Bernard Baars. And of special interest is still another recognition, e.g. with Merlin Donald and Paul Expert, as everywhere else in nature, of a reciprocal balance of local autonomous segregation and whole brain integration.

To complement these ideas from the theory of networks, dynamical systems theorists have furnished the mathematicians, physicists, and computer scientists of the early 21st century with a splendid collection of conceptual exotica – metastability, chaos, chaotic itinerancy, self-organized criticality, complexity, and the balance of integration and segregation. (4) One major thesis of the book, then, is that the connectivity and dynamics of the global neuronal workspace underwrite cognitive prowess. (5) A further major thesis of the book is that the inner life of a human being arises from the combination of a global neuronal workspace with such an internal sensorimotor loop. (5)

A further refinement of modular structure allows for hierarchical organization. In a hierarchically modular network, hub nodes within a module – that is to say nodes that join sub-modules to each other – are known as provincial hubs. (122) The story is the same if additional levels feature. (122) The balance of integrated and segregated activity should be apparent at every level of organization.

Processes at the lowest level of organization are the nodes of our network, so they behave independently by definition. (124) A system is dynamically integrated when the activity of its parts is influenced by the activity of the whole, and it is dynamically complex when this influence is not too great, when the activity of its parts is not dictated by the activity of the whole. (146)

Shin, Chang-Woo and Seunghwan Kim. Self-organized Criticality and Scale-free Properties in Emergent Functional Neural Networks. arXiv:cond-mat/0408700. Posted online November 9, 2004, the paper argues that rather than learning by Hebbian rules alone, the brain acts as a dynamic nonlinear system.

We show that the functional structures in the brain are self-organized to both the small-world and the scale-free networks by synaptic re-organization by the spike timing dependent synaptic plasticity.
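The spike-timing-dependent plasticity invoked in this abstract has a standard form that a minimal sketch can convey. This illustration assumes the common exponential STDP window with made-up constants, not the authors' specific model: a synapse strengthens when the presynaptic spike precedes the postsynaptic one, and weakens otherwise.

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
    Constants a_plus, a_minus, tau are illustrative, not from the paper."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)
    return 0.0

# A synapse whose presynaptic neuron reliably fires 5 ms before the
# postsynaptic one is repeatedly strengthened toward its ceiling.
w = 0.5
for _ in range(20):
    w += stdp_dw(5.0)
w = min(w, 1.0)  # clip to a plausible maximum strength

print("potentiated weight:", round(w, 3))
print("single depression step:", round(stdp_dw(-5.0), 4))
```

Iterating such local, activity-dependent updates across a whole network is the rewiring mechanism by which, the authors argue, small-world and scale-free functional structure self-organizes.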

Singh, Soibam, et al. Scaling in Topological Properties of Brain Networks. Nature Scientific Reports. 6/24926, 2016. This work at the frontiers of systems neuroscience by an eight-person team based at Jawaharlal Nehru University and McGill University can exemplify, as the Abstract conveys, how much nature’s universal dynamic lineaments are present in our cerebral crown.

The organization in brain networks shows highly modular features with weak inter-modular interaction. The topology of the networks involves emergence of modules and sub-modules at different levels of constitution governed by fractal laws that are signatures of self-organization in complex networks. The modular organization, in terms of modular mass, inter-modular, and intra-modular interaction, also obeys fractal nature. The parameters which characterize topological properties of brain networks follow one parameter scaling theory in all levels of network structure, which reveals the self-similar rules governing the network structure. Further, the calculated fractal dimensions of brain networks of different species are found to decrease when one goes from lower to higher level species which implicates the more ordered and self-organized topography at higher level species. The sparsely distributed hubs in brain networks may be most influencing nodes but their absence may not cause network breakdown, and centrality parameters characterizing them also follow one parameter scaling law indicating self-similar roles of these hubs at different levels of organization in brain networks. The local-community-paradigm decomposition plot and calculated local-community-paradigm-correlation co-efficient of brain networks also shows the evidence for self-organization in these networks. (Abstract)

Sizemore, Ann, et al. The Importance of the Whole: Topological Data Analysis for the Network Neuroscientist. Network Neuroscience. 3/3, 2019. In this special geometry issue, University of Pennsylvania researchers including Danielle Bassett provide a tutorial review of understandings about how our bicameral brains are graced by a dynamic array of multiplex webworks. The presence of an algebraic topology and a persistent homology, aka homological algebra, is seen to provide mathematical explanations. Simplicial complexes are also identified as they serve to organize and inform. Some eight decades after C. S. Sherrington’s famous enchanted loom metaphor, the field of brain studies has finally reached a full quantification. See also Topological Gene Expression Networks Recapitulate Brain Anatomy and Function by Alice Patania, et al, and Columnar Connectome by Anna Wang Roe in the same issue.

Data analysis techniques have fundamentally improved our understanding of neural systems and the complex behaviors they support. Yet the restriction of network techniques to pairwise interactions does not take into account intrinsic topological features that are crucial for system function. To detect and quantify these topological features, we turn to algebro-topological methods that encode data as a simplicial complex built from sets of interacting nodes called simplices. We also provide an introduction to persistent homology that builds a global descriptor of system structure. We detail the mathematics and perform demonstrative calculations on the mouse structural connectome, synapses in C. elegans, and genomic interaction data. (Abstract excerpt)
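The abstract's "simplicial complex built from sets of interacting nodes" can be made concrete with a small sketch. This is a toy illustration, not the authors' toolchain: build the clique complex of a tiny graph, where every all-to-all connected set of nodes forms a simplex, and count simplices by dimension, the first step toward the homology computations the review describes.

```python
from itertools import combinations

# Toy undirected network: a filled triangle (0,1,2) attached to a
# square (2,3,4,5) that encloses a hole. Edges stored as sorted pairs.
edges = {(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (2, 5)}
nodes = {n for e in edges for n in e}

def is_simplex(subset):
    """A node set is a simplex of the clique complex iff every pair is linked."""
    return all(tuple(sorted(p)) in edges for p in combinations(subset, 2))

# Enumerate simplices dimension by dimension (k nodes -> dimension k-1).
complex_by_dim = {}
for k in range(1, len(nodes) + 1):
    simplices = [s for s in combinations(sorted(nodes), k) if is_simplex(s)]
    if not simplices:
        break
    complex_by_dim[k - 1] = simplices

for dim, simplices in complex_by_dim.items():
    print(f"{len(simplices)} simplices of dimension {dim}")

# Euler characteristic (vertices - edges + triangles - ...): the filled
# triangle contributes a 2-simplex, while the square's open loop does not.
euler = sum((-1) ** d * len(s) for d, s in complex_by_dim.items())
print("Euler characteristic:", euler)
```

Here the clique complex distinguishes the filled triangle from the square's empty loop, which pairwise network measures alone cannot see; persistent homology then tracks such holes across thresholds of connection strength.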
