VII. Our Earthuman Ascent: A Major Evolutionary Transition in Individuality
2. Systems Neuroscience: Multiplex Networks and Critical Function
Schmidhuber, Jürgen. Deep Learning in Neural Networks: An Overview. Neural Networks. 61/2, 2015. A technical tutorial by the University of Lugano, Switzerland expert on advances in artificial or machine learning techniques, inspired by how our own brains think. Sophisticated algorithms, multiple processing layers with complex structures, credit assignment paths, non-linear transformations, and so on are at work as they refer new experiences to prior representations for comparison. See also, for example, Semantics, Representations and Grammars for Deep Learning by David Balduzzi at arXiv:1509.08627. Our interest recalls recent proposals by Richard Watson, Eörs Szathmáry, et al to appreciate life’s evolution as quite akin to a neural net, connectionist learning process.
Sendhoff, Bernhard, et al, eds. Creating Brain-Like Intelligence: From Basic Principles to Complex Intelligent Systems. Berlin: Springer, 2009. (Lecture Notes in Artificial Intelligence LNAI 5436) Sendhoff and co-editors Olaf Sporns and Edgar Körner lead off with a chapter on “From Complex Networks to Intelligent Systems.” The work is a mature example of how cerebral and cognitive studies have morphed to this dynamical approach, just as systems biology/genetics has done. From the quotes, please note the same ubuntu, creative union of semi-autonomy and integration as everywhere else, whose ubiquity implies, and springs from, a common mathematical source.
The accumulation of ever more detailed biological, cognitive and psychological data cannot substitute for general principles that underlie the emergence of intelligence. It is our belief that we have to more intensively pursue research approaches that aim at a holistic and embedded view of intelligence from many different disciplines and viewpoints. (4) The aim of theoretical neuroscience is to understand the general principles behind the organization and operation of nervous systems. (4)
Seung, Sebastian. Connectome: How the Brain’s Wiring Makes us Who We Are. Boston: Houghton Mifflin Harcourt, 2012. An MIT computational neuroscientist provides an accessible entry to imaginations and expansions of everything neural and cognitive in a similar genre to genome networks. Main sections of Connectionism, Nature and Nurture, Connectomics, and Beyond Humanity well cover these frontiers. In closing, a "transhumanism" is proposed that would implement these advances as a way to recover meaningful lives now impoverished by Steven Weinberg’s “pointless” science.
In the same way, a connectome is the totality of connections between the neurons in a nervous system. The term, like genome, implies completeness. A connectome is not one connection, or even many. It is all of them. (xiii)
Shanahan, Murray. Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. Oxford: Oxford University Press, 2010. As neuroscience advances and refines its understandings of a non-linear, dynamic brain architecture and function, anatomy and action, by way of self-organizing, complex systems, an Imperial College, London, professor of cognitive robotics here provides, with some density, one of the most thorough, intriguing summaries so far. A prime source, which has gained currency, is the global workspace theory of Bernard Baars. And of special interest is still another recognition, e.g. with Merlin Donald and Paul Expert, as everywhere else in nature, of a reciprocal balance of local autonomous segregation and whole brain integration.
To complement these ideas from the theory of networks, dynamical systems theorists have furnished the mathematicians, physicists, and computer scientists of the early 21st century with a splendid collection of conceptual exotica – metastability, chaos, chaotic itinerancy, self-organized criticality, complexity, and the balance of integration and segregation. (4) One major thesis of the book, then, is that the connectivity and dynamics of the global neuronal workspace underwrite cognitive prowess. (5) A further major thesis of the book is that the inner life of a human being arises from the combination of a global neuronal workspace with such an internal sensorimotor loop. (5)
Shin, Chang-Woo and Seunghwan Kim. Self-organized Criticality and Scale-free Properties in Emergent Functional Neural Networks. www.arxiv.org/abs/cond-mat/0408700. Published November 9, 2004 online as Condensed Matter 0408700, the paper argues that rather than learning by Hebbian rules, the brain acts as a dynamic nonlinear system.
We show that the functional structures in the brain are self-organized to both the small-world and the scale-free networks by synaptic re-organization by the spike timing dependent synaptic plasticity.
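The spike-timing dependent plasticity invoked in the quote can be sketched as a minimal pair-based rule: a synapse is strengthened when the presynaptic spike precedes the postsynaptic one, and weakened when it follows. All constants below are illustrative assumptions, not parameters from the Shin and Kim paper.

```python
import math

# Illustrative pair-based STDP rule. Time constants (tau_plus, tau_minus)
# and amplitudes (a_plus, a_minus) are assumed values for demonstration.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair separated by dt = t_post - t_pre (ms)."""
    if dt_ms >= 0:      # pre before post -> potentiation
        return a_plus * math.exp(-dt_ms / tau_plus)
    else:               # post before pre -> depression
        return -a_minus * math.exp(dt_ms / tau_minus)

# Causal timing strengthens a synapse; anti-causal timing weakens it.
print(stdp_dw(5.0) > 0)    # True
print(stdp_dw(-5.0) < 0)   # True
```

Iterating such asymmetric updates over many spike pairs is what, in the paper's account, re-organizes random wiring toward small-world, scale-free functional structure.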
Singh, Soibam, et al. Scaling in Topological Properties of Brain Networks. Nature Scientific Reports. 6/24926, 2016. This work at the frontiers of systems neuroscience by an eight-person team based at Jawaharlal Nehru University and McGill University can exemplify, as the Abstract conveys, how much nature’s universal dynamic lineaments are present in our cerebral crown.
The organization in brain networks shows highly modular features with weak inter-modular interaction. The topology of the networks involves emergence of modules and sub-modules at different levels of constitution governed by fractal laws that are signatures of self-organization in complex networks. The modular organization, in terms of modular mass, inter-modular, and intra-modular interaction, also obeys fractal nature. The parameters which characterize topological properties of brain networks follow one parameter scaling theory in all levels of network structure, which reveals the self-similar rules governing the network structure. Further, the calculated fractal dimensions of brain networks of different species are found to decrease when one goes from lower to higher level species which implicates the more ordered and self-organized topography at higher level species. The sparsely distributed hubs in brain networks may be most influencing nodes but their absence may not cause network breakdown, and centrality parameters characterizing them also follow one parameter scaling law indicating self-similar roles of these hubs at different levels of organization in brain networks. The local-community-paradigm decomposition plot and calculated local-community-paradigm-correlation co-efficient of brain networks also shows the evidence for self-organization in these networks. (Abstract)
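The modular organization the abstract describes can be illustrated with Newman's modularity Q, a standard graph measure that is high when intra-modular links dominate inter-modular ones. This toy stdlib sketch is a generic illustration of the measure, not the authors' analysis pipeline.

```python
# Newman modularity Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)
# for a toy graph: two triangles (dense modules) joined by one weak link.
edges = [(0, 1), (0, 2), (1, 2),    # module A: a triangle
         (3, 4), (3, 5), (4, 5),    # module B: a triangle
         (2, 3)]                    # single inter-modular edge
nodes = range(6)
degree = {n: sum(n in e for e in edges) for n in nodes}
m = len(edges)

def modularity(partition):
    """partition maps node -> community label."""
    q = 0.0
    for i in nodes:
        for j in nodes:
            a_ij = 1.0 if (i, j) in edges or (j, i) in edges else 0.0
            if partition[i] == partition[j]:
                q += a_ij - degree[i] * degree[j] / (2.0 * m)
    return q / (2.0 * m)

two_modules = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
one_module = {n: 'A' for n in nodes}
print(modularity(two_modules) > modularity(one_module))  # True
```

The correct two-module partition scores well above the trivial single-community one, which is the signature of the "highly modular features with weak inter-modular interaction" the abstract reports.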
Sizemore, Ann, et al. The Importance of the Whole: Topological Data Analysis for the Network Neuroscientist. Network Neuroscience. 3/3, 2019. In this special geometry issue, University of Pennsylvania researchers including Danielle Bassett provide a tutorial review of understandings about how our bicameral brains are graced by a dynamic array of multiplex webworks. The presence of an algebraic topology and a persistent homology, a method rooted in homological algebra, is seen to provide mathematical explanations. Simplicial complexes are also identified as they serve to organize and inform. Some eight decades after C. S. Sherrington’s famous enchanted loom metaphor, the field of brain studies has finally reached a full quantification. See also Topological Gene Expression Networks Recapitulate Brain Anatomy and Function by Alice Patania, et al, and Columnar Connectome by Anna Wang Roe in the same issue.
Data analysis techniques have fundamentally improved our understanding of neural systems and the complex behaviors they support. Yet the restriction of network techniques to pairwise interactions does not take into account intrinsic topological features that are crucial for system function. To detect and quantify these topological features, we turn to algebro-topological methods that encode data as a simplicial complex built from sets of interacting nodes called simplices. We also provide an introduction to persistent homology that builds a global descriptor of system structure. We detail the mathematics and perform demonstrative calculations on the mouse structural connectome, synapses in C. elegans, and genomic interaction data. (Abstract excerpt)
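The simplicial encoding the abstract describes can be sketched on a toy graph: every (k+1)-clique of interacting nodes becomes a k-simplex of the clique complex. This brute-force stdlib sketch illustrates the construction only; the paper's actual persistent homology computations require dedicated software.

```python
from itertools import combinations

# Build the clique complex of a toy graph: a filled triangle {0,1,2}
# plus a dangling edge (2,3). Every clique is recorded as a simplex.
edges = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (2, 3)]}
nodes = {0, 1, 2, 3}

def is_clique(subset):
    """A vertex subset is a simplex iff all its pairs are connected."""
    return all(frozenset(p) in edges for p in combinations(subset, 2))

simplices = [s for k in range(1, len(nodes) + 1)
             for s in combinations(sorted(nodes), k) if is_clique(s)]
by_dim = {}
for s in simplices:
    by_dim.setdefault(len(s) - 1, []).append(s)

print({d: len(v) for d, v in sorted(by_dim.items())})
# {0: 4, 1: 4, 2: 1} -> 4 vertices, 4 edges, 1 filled triangle
```

Counting simplices by dimension in this way goes beyond pairwise edges, which is exactly the limitation of standard network techniques that the authors target.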
Sperber, Dan. In Defense of Massive Modularity. Dupoux, E., ed. Language, Brain and Cognitive Development. Cambridge: MIT Press, 2001. Sperber wades into the debate on the side of a specialized, modular brain. As an observation, the recurrent course of self-organization would lead one to expect that semi-autonomous modules similarly occur in cerebral form and function.
Spitzer, Manfred. The Mind within the Net. Cambridge: MIT Press, 1999. A good introduction to self-organizing neural networks. As so composed, the human brain engages in information processing and, most of all, pattern recognition. This function has mostly been excluded in mechanistic science because its left-hemisphere emphasis does not expect integrative patterns, and so does not see them.
Spivey, Michael. The Continuity of Mind. Oxford: Oxford University Press, 2007. A Cornell University neuroscientist proposes that the limited computational model be set aside in favor of how dynamic system trajectories infuse our continually flowing cerebral activities.
The cognitive and neural sciences have been on the brink of a paradigm shift for over a decade. The traditional information-processing framework in psychology, with its computer metaphor of the mind, is still considered to be the mainstream approach. However, the dynamical systems perspectives on mental activity are now receiving a more rigorous treatment, allowing it to move beyond trendy buzzwords. The Continuity of Mind will help to galvanize the forces of dynamical systems theory, cognitive and computational neuroscience, connectionism, and ecological psychology that are needed to complete this paradigm shift. (book jacket)
Sporns, Olaf. Brain Networks and Embodiment. Mesquita, Batja, et al, eds. The Mind in Context. New York: Guilford Press, 2010. An exemplary paper to date by the Indiana University systems neuroscientist of how everything neural and cognitive is being reconceived in terms of similar self-organizing, scalar, systemic complexities. An aspect to be noted, as per the quote, is another instance of a mutual reciprocity of divergence and convergence, diversity and unity.
In summary, network interactions can be formally described by using concepts from statistical information theory, for example, mutual information, integration, and complexity. Some of these measures allow us to characterize statistical integrations in a network as a whole. When structural connections are arranged in such a way as to maximize some of these informational quantities, it appears that complexity is uniquely associated with structural patterns that resemble those of brain networks. This result is consistent with the theoretical idea that brain networks balance segregation and integration, which we defined as complexity. High complexity allows networks to integrate efficiently large amounts of information, a capacity that has been linked to consciousness. (54)
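The information-theoretic measures Sporns names can be illustrated with a small stdlib sketch: integration (multi-information) is the sum of the units' marginal entropies minus their joint entropy, zero for independent units and positive when they share structure. The toy signals below are assumed for demonstration, not data from the chapter.

```python
from collections import Counter
from math import log2

def entropy(seq):
    """Shannon entropy in bits of an empirical distribution."""
    n = len(seq)
    return -sum(c / n * log2(c / n) for c in Counter(seq).values())

def integration(*signals):
    """Multi-information: sum of marginal entropies minus joint entropy.
    Zero for statistically independent signals."""
    joint = list(zip(*signals))
    return sum(entropy(s) for s in signals) - entropy(joint)

# Two toy binary 'units': y copies x, so all of x's 1 bit is shared.
x = [0, 1, 0, 1, 0, 1, 0, 1]
y = list(x)
print(round(integration(x, y), 3))   # 1.0 bit: I(x;y) = H(x)
```

For two units this quantity reduces to the mutual information; for many units it is one way to quantify the balance of segregation and integration that the quote identifies with complexity.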
Sporns, Olaf. Graph Theory Methods: Applications in Brain Networks. Dialogues in Clinical Neuroscience. 20/2, 2018. The Indiana University neuropsychologist (search) is a leading theorist in this enchanted field as it weaves through the 2010s toward epic achievements. This paper is notably cited as a basis for Max Bertolero and Danielle Bassett’s Scientific American (July 2019) popular review (above). As in many other realms, mathematical findings of equally real interconnections between objects and entities previously seen as discrete are fostering a relational revolution from particles and galaxies to persons and societies. See also The Diverse Club by Max Bertolero, et al in Nature Communications (8/1277, 2017).
Network neuroscience is a thriving and rapidly expanding field. Empirical data on brain networks, from molecular to behavioral scales, are increasing in size and complexity. These developments require appropriate tools and methods that model and analyze brain network data, such as those provided by graph theory. This brief review surveys commonly used and neurobiologically apt graph measures and techniques. Among these, the detection of network communities or modules, and of central network elements that facilitate communication and signal transfer are particularly salient. We note a growing use of generative models, temporal and multilayer networks, as well as algebraic topology. (Abstract excerpt)
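One of the "central network elements" the abstract mentions can be picked out with the simplest graph measure, degree centrality. This stdlib sketch on an assumed hub-and-ring toy graph is a generic instance of such measures, not Sporns's own toolset.

```python
from collections import defaultdict

# Toy graph: node 0 is a hub wired to every node of a peripheral ring,
# mimicking the high-centrality elements graph measures detect.
edges = [(0, 1), (0, 2), (0, 3), (0, 4),
         (1, 2), (2, 3), (3, 4), (4, 1)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

n = len(adj)
# Degree centrality: fraction of the other nodes each node touches.
centrality = {v: len(adj[v]) / (n - 1) for v in adj}
hub = max(centrality, key=centrality.get)
print(hub, centrality[hub])   # 0 1.0 -> the hub touches every other node
```

Richer measures in the same family (betweenness, eigenvector centrality, participation coefficient) refine this idea of identifying nodes that facilitate communication and signal transfer.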