Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

VII. Our Earthuman Ascent: A Major Evolutionary Transition in Individuality

2. Systems Neuroscience: Multiplex Networks and Critical Function

Ascoli, Giorgio. Trees of the Brain, Roots of the Mind. Cambridge: MIT Press, 2015. At the frontiers of natural philosophy, a neuroscientist at George Mason University's Center for Neural Informatics, Structure and Plasticity surveys the 2010s era of sophisticated 3D cerebral imaging research. By virtue of these capabilities and their findings, a forest analogy is evoked for branching networks of local, arboreal neurons, synapses and axons whose intricate nestedness forms a whole brain anatomy. This vista of myriad nets within Nets leads to novel insights, as the quotes sample. Our neural connectome thus learns, thinks, and responds in a mindful way by necessarily comparing new experience with prior representations contained in the dynamic network tree topology. In conclusion, this “brainforest” environment is seen as an apt microcosm for a similar encompassing network nature.

We have so far described how the third principle of the brain-mind relationship explains why appropriate knowledge of relevant background information gates the acquisition of new information by one-trial learning. The relationship between axonal-dendritic overlaps (corresponding to the potential to learn) and existing connectivity (representing knowledge) provides a direct neural mechanism for the familiar observation that experts can grasp new concepts in their discipline much faster than novices. Everyone finds it easier to learn new facts in their domains of expertise than in a completely novel field. (108)

Even our incomplete comprehension of neuronal structure and function has allowed us nonetheless to propose three core principles for linking the nervous system with the mind. We hypothesized in the first principle an identification of mental states and patterns of electric activity in the brain. A consequence of this equivalence is that knowledge, the ability to instantiate a given mental state, is encoded in the connectivity of the network because electric activity in nervous systems flows through connections among neurons. This consideration led to formulation of a structural mechanism for what it means to acquire new knowledge, that is, to learn something. Specifically, in the second principle we equated learning to a change in network connectivity through the creation and removal of synapses. (181)

The third principle, which merely builds on the logic of the first two, is nevertheless the most radical and novel proposition of this book: that the spatial proximity of axons and dendrites, enabling synapse formation and elimination, corresponds to the capability for learning. Such a correspondence has far-reaching implications because it directly ties the branching structure of neurons to the fine line separating nurture from nature. Without the constraint of physical proximity between axons and dendrites, we would be able to learn anything from experience. With this rule in place, in contrast, each of us learns only those aspects of experience that are somehow compatible with our existing knowledge. (182)
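
Taking these three principles at face value, a minimal computational sketch may help fix the idea; it is an illustration under invented assumptions (neuron count, overlap and wiring probabilities), not a model from the book. Potential synapses are the set of axonal-dendritic overlaps, current knowledge is the subset already wired, and learning is permitted only within the overlap set.

```python
import random

random.seed(0)
N = 20  # hypothetical number of neurons

# Axonal-dendritic overlaps: pairs (pre, post) where an axon passes close
# enough to a dendrite that a synapse could ever form (chosen at random here).
overlaps = {(i, j) for i in range(N) for j in range(N)
            if i != j and random.random() < 0.15}

# Existing connectivity, i.e. current "knowledge": a subset of the overlaps.
synapses = {pair for pair in overlaps if random.random() < 0.3}

def learn(pre, post):
    """Learning = changing connectivity (second principle), but only where an
    axonal-dendritic overlap already exists (third principle, in caricature)."""
    if (pre, post) in overlaps:
        synapses.add((pre, post))
        return True
    return False

print("potential connections (overlaps):", len(overlaps))
print("actual synapses (knowledge):     ", len(synapses))
print("learn 0 -> 1:", learn(0, 1))  # succeeds only if that overlap exists
```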

Based on the principles exposed in this book, we can attempt to revisit the question of Reality. In their bare computational essence, brains can be viewed as gigantic networks whose sets of connections represent associations of observables learned through experience. We can thus offer a radical perspective of Reality. Reality constitutes an enormous interconnected web of co-occurring events. Each pair of events can be quantitatively expressed in the context of the entire web by the conditional probability representing the information content of their co-occurrence. Every human being (as well as, of course, all other animals and inanimate objects) is immersed in this universal web. From within, each person at any given moment witnesses a small fraction of co-occurring events based on his or her location, time, state of attention, and so forth. Brains evolved as networks (of neurons) themselves in order to represent most effectively the surrounding reality, thereby gaining predictive power and endowing their carriers with survival fitness. Such integration provides a natural conceptual framework to characterize the interactions among the world, the brain, and the mind. (211)
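
Ascoli’s web of co-occurring events also admits a small numerical reading. The sketch below, an assumed tabulation rather than anything drawn from the book, counts joint occurrences across hypothetical observation windows and reports the conditional probability P(B|A) that quantifies each pairwise association.

```python
from collections import Counter
from itertools import combinations

# Hypothetical observation windows, each listing the events that co-occurred.
observations = [
    {"lightning", "thunder"},
    {"lightning", "thunder", "rain"},
    {"rain"},
    {"rain", "thunder"},
    {"lightning", "thunder"},
]

single = Counter()   # how often each event is witnessed at all
joint = Counter()    # how often each pair of events co-occurs
for window in observations:
    for e in window:
        single[e] += 1
    for a, b in combinations(sorted(window), 2):
        joint[(a, b)] += 1

def p_cond(b, a):
    """P(B | A): how strongly event A predicts event B across the web."""
    pair = tuple(sorted((a, b)))
    return joint[pair] / single[a] if single[a] else 0.0

print("P(thunder | lightning) =", p_cond("thunder", "lightning"))
print("P(lightning | rain)    =", p_cond("lightning", "rain"))
```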

Ascoli, Giorgio and Diek Wheeler. In Search of a Periodic Table of the Neurons: Axonal-Dendritic Circuitry as the Organizing Principle. BioEssays. Online August 2016. George Mason University computational neuroanatomists present a pioneering paper that conceives an intrinsic architectural scheme for neurons akin to the table of chemical elements. A longer subtitle is Patterns of axons and dendrites within distinct anatomical parcels provide the blueprint for circuit-based neuronal classification. See also the prior work Towards the Automatic Classification of Neurons in Trends in Neurosciences (36/5, 2015) and Name-Calling in the Hippocampus: Coming to Terms with Neuron Types and Properties in Brain Informatics (Online June 2016).

No one knows yet how to organize, in a simple yet predictive form, the knowledge concerning the anatomical, biophysical, and molecular properties of neurons that are accumulating in thousands of publications every year. The situation is not dissimilar to the state of Chemistry prior to Mendeleev's tabulation of the elements. We propose that the patterns of presence or absence of axons and dendrites within known anatomical parcels may serve as the key principle to define neuron types. Just as the positions of the elements in the periodic table indicate their potential to combine into molecules, axonal and dendritic distributions provide the blueprint for network connectivity. Furthermore, among the features commonly employed to describe neurons, morphology is considerably robust to experimental conditions. At the same time, this core classification scheme is suitable for aggregating biochemical, physiological, and synaptic information. (Abstract)
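
One way to read this proposal is as a lookup over binary presence or absence patterns of axons and dendrites per anatomical parcel. The miniature below uses invented parcel names and placeholder patterns, not the authors’ actual classification or data.

```python
# Each candidate neuron type is keyed by which anatomical parcels hold its
# dendrites (inputs) and its axons (outputs), over a simplified hippocampal
# parcellation (DG, CA3, CA1, EC). Patterns are illustrative placeholders.
known_types = {
    "granule-like":    {"dendrites": {"DG"},  "axons": {"CA3"}},
    "pyramidal-like":  {"dendrites": {"CA3"}, "axons": {"CA1", "EC"}},
    "projection-like": {"dendrites": {"CA1"}, "axons": {"EC"}},
}

def classify(dendrite_parcels, axon_parcels):
    """Assign a neuron to a type when its parcel pattern matches exactly,
    in the spirit of a 'periodic table' keyed on axonal-dendritic layout."""
    for name, pattern in known_types.items():
        if (pattern["dendrites"] == set(dendrite_parcels)
                and pattern["axons"] == set(axon_parcels)):
            return name
    return "unclassified"

print(classify({"DG"}, {"CA3"}))   # -> granule-like
print(classify({"CA1"}, {"DG"}))   # -> unclassified
```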

Ashourvan, Arian, et al. The Energy Landscape Underpinning Module Dynamics in the Human Brain Connectome. arXiv:1609.01015. From this active University of Pennsylvania neuroscientist team, including Danielle Bassett and Marcelo Mattar, an entry that considers how cerebral cognition is integrally rooted in basic physical principles. A companion paper by the group is Brain Network Architecture: Implications for Human Learning at 1609.01790. Along with similar work by Edward Bullmore and Aharon Azulay (search each), a mindful, intelligent natural cosmos which becomes sentient and knowledgeable in its phenomenal human phase is increasingly revealed.

Human brain dynamics can be profitably viewed through the lens of statistical mechanics, where neurophysiological activity evolves around and between local attractors representing preferred mental states. Many physically-inspired models of these dynamics define the state of the brain based on instantaneous measurements of regional activity. Yet, recent work in network neuroscience has provided initial evidence that the brain might also be well-characterized by time-varying states composed of locally coherent activity or functional modules. Here we study this network-based notion of brain state to understand how functional modules dynamically interact with one another to perform cognitive functions. We estimate the functional relationships between regions of interest (ROIs) by fitting a pair-wise maximum entropy model to each ROI's pattern of allegiance to functional modules. These results collectively offer a view of the brain as a dynamical system that transitions between basins of attraction characterized by coherent activity in small groups of brain regions, and that the strength of these attractors depends on the cognitive computations being performed. (Abstract excerpts)
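
The pairwise maximum entropy model named in the abstract is, in essence, an Ising-style energy function over binary states, E(s) = -Σ h_i s_i - ½ Σ J_ij s_i s_j, whose low-energy configurations play the role of attractor basins. The fragment below is only a schematic rendering with random stand-in parameters, not the paper’s fitted model or data.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 6                           # hypothetical number of regions of interest (ROIs)
h = rng.normal(0, 0.5, n)       # local biases (fitted to data in practice, random here)
J = rng.normal(0, 0.5, (n, n))  # pairwise couplings (likewise fitted, here random)
J = (J + J.T) / 2               # symmetric couplings
np.fill_diagonal(J, 0.0)

def energy(state):
    """Pairwise maximum-entropy (Ising-like) energy of a binary ROI state.
    Lower energy = more probable configuration, i.e. a deeper basin."""
    s = np.asarray(state, dtype=float)
    return -h @ s - 0.5 * s @ J @ s

# Enumerate all 2^n states and report the lowest-energy one (an attractor of
# this toy landscape; real analyses use sampling, not brute enumeration).
states = [np.array([(k >> i) & 1 for i in range(n)]) for k in range(2 ** n)]
best = min(states, key=energy)
print("lowest-energy state:", best, "energy:", round(float(energy(best)), 3))
```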

Baronchelli, Andrea, et al. Networks in Cognitive Science. Trends in Cognitive Sciences. 17/7, 2013. With Ramon Ferrer-i-Cancho, Romualdo Pastor-Satorras, Nick Chater, and Morten Christiansen, a Feature Review with 170 references of this neuroscience revolution, from prior computational approaches to a more accurate model of brain anatomy and activity by way of complex system dynamics, as applied to an enhanced neural net connectome.

Networks of interconnected nodes have long played a key role in Cognitive Science, from artificial neural networks to spreading activation models of semantic memory. Recently, however, a new Network Science has been developed, providing insights into the emergence of global, system-scale properties in contexts as diverse as the Internet, metabolic reactions, and collaborations among scientists. Today, the inclusion of network theory into Cognitive Sciences, and the expansion of complex-systems science, promises to significantly change the way in which the organization and dynamics of cognitive and behavioral processes are understood. In this paper, we review recent contributions of network theory at different levels and domains within the Cognitive Sciences. (Abstract)

Bassett, Danielle and Michael Gazzaniga. Understanding Complexity in the Human Brain. Trends in Cognitive Sciences. 15/5, 2011. University of California, Santa Barbara, systems physicist Bassett and renowned neuroscientist Gazzaniga endorse and explain this holistic reconception of the nature of cerebral development and intellectual capacity. In similar fashion to 21st century genomics, brains are suffused by nonlinear dynamic networks, a “connectome” distinguished by modularity, emergence, nested scales, “bidirectional causation and complementarity,” and so on, both in cerebral anatomy and cognitive thought.

Although the ultimate aim of neuroscientific enquiry is to gain an understanding of the brain and how its workings relate to the mind, the majority of current efforts are largely focused on small questions using increasingly detailed data. However, it might be possible to successfully address the larger question of mind–brain mechanisms if the cumulative findings from these neuroscientific studies are coupled with complementary approaches from physics and philosophy. The brain, we argue, can be understood as a complex system or network, in which mental states emerge from the interaction between multiple physical and functional levels. (200)

Bassett, Danielle and Olaf Sporns. Network Neuroscience. Nature Neuroscience. 20/3, 2017. In the past decade, an historic advance in understanding brain architecture and cognition has occurred by way of dynamic, multiplex neuron nodes and linked connections. The University of Pennsylvania and Indiana University authors (search) are major contributors. Here they offer a retrospective survey, aided by neuroimaging advances, of the awesome, auspicious cerebral faculty we each possess. Search also Emily Falk, who with DB goes on to show how this true microcosm is gaining a further societal basis.

Despite substantial recent progress, our understanding of the principles and mechanisms underlying complex brain function and cognition remains incomplete. Approaching brain structure and function from an explicitly integrative perspective, network neuroscience pursues new ways to map, record, analyze and model the elements and interactions of neurobiological systems. Two parallel trends drive the approach: the availability of new empirical tools to create comprehensive maps and record dynamic patterns among molecules, neurons, brain areas and social systems; and the theoretical framework and computational tools of modern network science. The convergence of empirical and computational advances opens new frontiers of scientific inquiry, including network dynamics, manipulation and control of brain networks, and integration of network processes across spatiotemporal domains. (Abstract)

Bassett, Danielle, et al. Adaptive Reconfiguration of Fractal Small-world Human Brain Functional Networks. Proceedings of the National Academy of Sciences. 103/19518, 2006. Neuroscientists from the University of Cambridge and the U.S. National Institute of Mental Health verify a network architecture that repeats at every neural scale, just as it does at every instance from cosmos to society. These are examples of august yet unassimilated findings because they do not accord with an indifferent physical universe that has no place or purpose for life and humankind. One of the five authors is affiliated with “Biological and Soft Systems, Dept. of Physics” at Cambridge. The exemplary paper deserves an extended quote.

Brain function depends on adaptive self-organization of large-scale neural assemblies, but little is known about quantitative network parameters governing these processes in humans. Here, we describe the topology and synchronizability of frequency-specific brain functional networks using wavelet decomposition of magnetoencephalographic time series, followed by construction and analysis of undirected graphs. (19518) Global topological parameters (path length, clustering) were conserved across scales, most consistently in the frequency range 2-37 Hz, implying a scale-invariant or fractal small-world organization. (19518) Human brain functional networks demonstrate a fractal small-world architecture that supports critical dynamics and task-related spatial reconfiguration while preserving global topological parameters. (19518)

Fractal properties, i.e., self-similarity over several scales of measurement, have been described in many types of neurobiological data including electroencephalographic and functional MRI time series, structural MRI measurements of the cortical surface and gray-white matter boundary, and microscopic data on neuronal dendritic and cerebrovascular arborizations. It is also notable that small-world networks have been generated computationally by a fractal growth process and that adaptive rewiring of initially random networks by neurogenesis may allow development and maintenance of small-world connectivity in the brain. However, the current data provide the first evidence for self-similarity of global topology and dynamics of large-scale brain functional networks over a range of frequencies. (19521)
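
The global topological parameters tracked across wavelet scales, clustering coefficient and characteristic path length, can be computed on any undirected graph. The snippet below is a generic demonstration with the networkx library on a synthetic Watts-Strogatz graph rather than the paper’s MEG-derived networks; the small-world index sigma compares both measures against a size-matched random graph.

```python
import networkx as nx

# Synthetic stand-in for one frequency-specific functional network.
G = nx.watts_strogatz_graph(n=90, k=6, p=0.1, seed=42)

# Size-matched random reference graph; keep its largest component if needed
# so that path lengths are well defined.
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=42)
if not nx.is_connected(R):
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()

C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
L, L_rand = (nx.average_shortest_path_length(G),
             nx.average_shortest_path_length(R))

# Small-world index: sigma > 1 indicates high clustering with short paths.
sigma = (C / C_rand) / (L / L_rand)
print(f"clustering C = {C:.3f}   (random {C_rand:.3f})")
print(f"path length L = {L:.3f}  (random {L_rand:.3f})")
print(f"small-world sigma = {sigma:.2f}")
```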

Bassett, Danielle, et al. Network Models in Neuroscience. arXiv:1807.11935. University of Pennsylvania neuroscientists Bassett, Perry Zurn and Joshua Gold post a chapter under consideration for Cerebral Cortex 3.0 from MIT Press. As the Abstract cites, it covers the past, present and future of this 2000s and 2010s network science revolution. After reviewing general, independent nonlinear principles which occur across every species, life’s distinctive anatomy and physiology is shown to be especially exemplified by cerebral structures and cognitive abilities. A graphic display depicts generic node and edge (link), core-periphery, clustering, dynamics, degrees, communities, hubness, and topologies as they form dynamic connectomes from chromatin and neurons to mosaic regions and whole brains.

From interacting cellular components to networks of neurons and neural systems, interconnected units comprise a fundamental organizing principle of the nervous system. Understanding how their patterns of connections and interactions give rise to the many functions of the nervous system is a primary goal of neuroscience. Recently, this pursuit has begun to benefit from new mathematical tools that can relate a system's architecture to its dynamics and function. These tools have been used with increasing success to build models of neural systems across spatial scales and species. We begin with a review of model theory from a philosophical perspective of networks as models of complex systems in general, and of the brain in particular. We then summarize the types of models along three primary dimensions: from data representations to first-principles theory, from biophysical realism to functional phenomenology, and from elementary descriptions to coarse-grained approximations. We close with important frontiers in network models and for increasingly complex functions of neural systems. (Abstract edits)
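
The generic ingredients named in the annotation above (nodes and edges, degrees, hubness, communities) reduce to a few lines of graph analysis. This is an illustrative sketch on a random modular graph using networkx, not the chapter’s own figure or data.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# A synthetic graph with planted modules, standing in for a connectome sketch.
G = nx.planted_partition_graph(l=4, k=20, p_in=0.25, p_out=0.02, seed=7)

degrees = dict(G.degree())
hubs = sorted(degrees, key=degrees.get, reverse=True)[:5]  # highest-degree nodes
communities = greedy_modularity_communities(G)             # detected modules

print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("top hubs by degree:", hubs)
print("community sizes:", [len(c) for c in communities])
```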

Bassett, Danielle, et al. Reflections on the Past Two Decades of Neuroscience. Nature Reviews Neuroscience. September, 2020. Eighteen researchers with prior papers in this journal, including Kathleen Cullen, Martha Farah and Hailan Hu, were asked for a retrospective view. We cite from D. Bassett’s response because it presents a popular generic method as a two-step sequence: first identify the necessary component pieces, then the equally real and more important associations in between, which altogether can be seen to carry informative content.

As a practice in the acquisition and curation of knowledge, neuroscience has expanded upon prior paradigms. Two pioneering paths of formal study now extend: that of neural representations and of network systems. The former considers the pattern of activity across neural units that encode an object, concept or state of information, while the other views the pattern of connections between neural units that can transmit information and modulate neural networks. The twin paths of representation and transmission move beyond the mapping of a country’s borders, to record a city’s statistics and the transportation networks along which traffic flows. (Bassett)

Baumgarten, Lorenz and Stefan Bornholdt. Critical Excitation-Inhibition Balance in Dense Neural Networks. arXiv:1903.12632. University of Bremen theorists contribute to late-2010s research findings that our (microcosmic) cerebral faculty inherently seeks, and performs best at, a poised balance between tamp-down and ramp-up emotional and cognitive states. How grand might it then be if nature’s (macroscopic) tendency to reach such an optimum reciprocity could be carried over and availed in social politics, whence dual right-conserve and left-create parties would be complementary halves of a whole organic democracy.

The "edge of chaos" phase transition in artificial neural networks is of renewed interest in light of recent evidence for criticality in brain dynamics. A recent study utilizing the excitation-inhibition ratio as the control parameter found a new, nearly degree independent, critical point when neural connectivity is large. However, the new phase transition is accompanied by a high level of activity in the network. Here we study random neural networks with the additional properties of (i) a high clustering coefficient and (ii) neurons that are either excitatory or inhibitory, a prominent property of natural neurons. As a result, we observe an additional critical point for networks with large connectivity which compares well with neuronal brain networks. (Abstract excerpt)

Beer, Randall. Dynamical Approaches to Cognitive Science. Trends in Cognitive Sciences. 4/3, 2000. A survey and synthesis of computational, connectionist, and complex system properties of neural operation.

Bertolero, Max and Danielle Bassett. How Matter Becomes Mind. Scientific American. July, 2019. It is a good sign that a research field has reached a robust, credible stage when an article appears in this popular publication, which is a credit to its University of Pennsylvania network neuroscientist authors and collaborators. It reports upon an array of advances over the past decade that altogether reveal and highlight multiplex connectivities (aka graph theory here) between nodal neurons, layered linkages, and modular communities as they give rise to informed thought and response. We log in this week similar evidence from physics (Nottale, Busch), cancer studies (D. Moore), cellular dynamics (Fuchling) and other areas. As a wealth of citations now convey, an iconic, natural system of infinitely iterated, generative node entities and link relations in a triune whole does really seem to exist.
