Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

IV. Ecosmomics: Independent, UniVersal, Complex Network Systems and a Genetic Code-Script Source

3. Iteracy: A Rosetta Ecosmos Textuality

Svirezhev, Yuri. Ecosystem as a Text: Information Analysis of the Global Vegetation Pattern. Ecological Modelling. 174/1-2, 2004. Following the philosopher Ludwig Wittgenstein, who said that anything could be represented as a language and text with proper alphabet and grammar, Svirezhev, an ecologist and linguist at the Potsdam Institute for Climate Impact Research, speculates on how natural biological communities of flora and fauna might then be described in terms of information theory.

Taghipour, Nassim, et al. On Complexity of Persian Orthography. Complex Systems. 25/2, 2016. Iranian mathematicians along with the British computer scientist Andrew Adamatzky (search) proceed a millennium later to parse the Persian literary corpus by way of computational L-systems (second quote) so as to reveal the presence of independent, universally manifest, natural patterns. See also herein Text Mining by Tsallis Entropy by Iranian scholars (2017).

To understand how the Persian language developed over time, we uncover the dynamics of complexity of Persian orthography. We represent Persian words by L-systems and calculate complexity measures of these generative systems. The complexity measures include degrees of non-constructability, generative complexity, and morphological richness; the measures are augmented with time series analysis. The measures are used in a comparative analysis of four representative poets: Rudaki (858–940 AD), Rumi (1207–1273), Sohrab (1928–1980), and Yas (1982–present). We find that irregularity of the Persian language, as characterized by the complexity measures of L-systems representing the words, increases over temporal evolution of the language. (Abstract)

An L-system is a parallel rewriting system and a type of formal grammar. An L-system consists of an alphabet of symbols that can be used to make strings, a collection of production rules that expand each symbol into some larger string of symbols, an initial "axiom" string from which to begin construction, and a mechanism for translating the generated strings into geometric structures. L-systems were introduced and developed in 1968 by Aristid Lindenmayer, a Hungarian theoretical biologist and botanist at the University of Utrecht, who used them to describe the behaviour of plant cells and to model the growth processes of plant development. (Wikipedia)
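The parallel rewriting just described can be sketched in a few lines. The two-symbol rules below are Lindenmayer's classic "algae" example, used purely for illustration; the Persian-orthography L-systems in the paper are more elaborate.

```python
# Minimal L-system: an alphabet of symbols, parallel production rules,
# and an initial "axiom" string, as described above. The rules here are
# Lindenmayer's classic two-symbol example, an illustrative assumption.

def lsystem(axiom, rules, steps):
    """Rewrite every symbol simultaneously for the given number of steps."""
    s = axiom
    for _ in range(steps):
        # Parallel rewriting: all symbols are expanded at once; symbols
        # with no rule are carried over unchanged.
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

rules = {"A": "AB", "B": "A"}
print(lsystem("A", rules, 4))  # -> ABAABABA
```

Note how the string lengths grow as Fibonacci numbers (1, 2, 3, 5, 8, …), a simple case of the generative complexity the paper measures.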

Takahashi, Shuntaro and Kumiko Tanaka-Ishii. Evaluating Computational Language Models with Scaling Properties of Natural Language. Computational Linguistics. Online July, 2019. University of Tokyo researchers contribute novel perceptions of how corpora and discourse can be seen to exhibit, and be treated by, the latest complexity sciences. See also Modeling Language Variation and Universals: A Survey on Typological Linguistics for Natural Language Processing by Edoardo Ponti, et al in this journal, June, 2019.

In this article, we evaluate computational models of natural language with respect to the universal statistical behaviors of natural language. Statistical mechanical analyses have revealed that natural language text is characterized by scaling properties, which quantify the global structure in the vocabulary population and the long memory of a text. We study whether five scaling properties (given by Zipf’s law, Heaps’ law, Ebeling’s method, Taylor’s law, and long-range correlation analysis) can serve for evaluation of computational models. Our analysis reveals that language models based on recurrent neural networks (RNNs) with a gating mechanism are the only computational models that can reproduce the long memory behavior of natural language. (Abstract excerpt)

The scaling properties of natural language are the universal statistical behaviors observed in natural language text. For example, Zipf’s law characterizes the vocabulary population with a power-law function for the rank-frequency distribution. Recent statistical mechanical studies revealed another statistical aspect of natural language, long memory. This refers to the way that sequences of characters or words in natural language universally exhibit clustering, bursty behavior. (2)
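The rank-frequency behavior behind Zipf's law, as cited above, can be checked on any corpus in a few lines. The toy sentence below is an invented illustration, not one of the paper's data sets.

```python
# Rank-frequency sketch of Zipf's law: the frequency of the r-th most
# common word should fall off roughly as a power law in r. The corpus
# here is an illustrative assumption.
from collections import Counter

def rank_frequency(tokens):
    """Return (rank, count) pairs, most frequent word first."""
    counts = sorted(Counter(tokens).values(), reverse=True)
    return list(enumerate(counts, start=1))

corpus = "the cat sat on the mat and the dog sat on the log".split()
print(rank_frequency(corpus)[:3])  # -> [(1, 4), (2, 2), (3, 2)]
```

On a real corpus one would plot these pairs on log-log axes and look for an approximately straight line; the paper applies this and four further scaling diagnostics to machine-generated text.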

Takahashi, Takuya and Yasuo Ihara. Quantifying the Spatial Pattern of Dialect Words Spreading from a Central Population. Journal of the Royal Society Interface. July, 2020. We cite this entry by University of Tokyo biolinguists as an example of how network topologies which are applied to brain architectures can also characterize ever-changing linguistic patterns. This iconic method could carry over to similar genetic systems, quantum nets, and much else. While one might muse over a “book” about natural creation, by virtue of many papers like this, the whole ecosmos could actually appear as a wo/manuscript narrative which we peoples are made and meant to learn to read and write.

Some dialect words are shared among geographically distant groups of people without close interaction. Such a pattern may indicate the current or past presence of a cultural centre exerting a strong influence on peripheries. Here we develop a model of linguistic diffusion within a population network to quantify the distribution of variants created at the central population. Equilibrium distributions of word ages are obtained for idealized networks and for a realistic network of Japanese prefectures. Our model successfully replicates the observed pattern, supporting the notion that a centre–periphery social structure underlies the emergence of concentric patterns. (Abstract excerpt)

For a mathematical treatment of geographical patterning of dialect variants in the presence of the centre–periphery structure, we need a model considering linguistic influences among multiple groups of people. One commonly used framework is the gravity model, in which the mutual influence of two centres (towns, cities, etc.) is assumed to be proportional to the product of their populations and inversely proportional to the squared distance between them. This model predicts that linguistic features first diffuse from city to city, skipping the rural area in between. (2)
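The gravity model just quoted admits a direct sketch: influence proportional to the product of populations and inverse to squared distance. The populations and distances below are invented for illustration.

```python
# Gravity-model sketch: the mutual influence of two centres is
# proportional to the product of their populations and inversely
# proportional to the squared distance between them. All numbers
# here are illustrative assumptions.

def gravity_influence(pop_a, pop_b, distance, k=1.0):
    """Influence = k * pop_a * pop_b / distance**2."""
    return k * pop_a * pop_b / distance ** 2

# A distant large city can outweigh a nearby village, which is how
# features diffuse city to city, skipping the rural area in between.
city_to_city = gravity_influence(1_000_000, 500_000, 100.0)
city_to_village = gravity_influence(1_000_000, 2_000, 10.0)
print(city_to_city > city_to_village)  # -> True
```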

Tehrani, Jamshid. The Phylogeny of Little Red Riding Hood. PLoS One. 8/11, 2013. The Iranian-British, Durham University anthropologist deftly describes another way that genetic analysis methods are equally suitable for all forms of literature, such as folktales, and in so doing shows how these classics form a common genre across many related versions.

Researchers have long been fascinated by the strong continuities evident in the oral traditions associated with different cultures. According to the ‘historic-geographic’ school, it is possible to classify similar tales into “international types” and trace them back to their original archetypes. However, critics argue that folktale traditions are fundamentally fluid, and that most international types are artificial constructs. Here, these issues are addressed using phylogenetic methods that were originally developed to reconstruct evolutionary relationships among biological species, and which have been recently applied to a range of cultural phenomena. The study focuses on one of the most debated international types in the literature: ‘Little Red Riding Hood’. A number of variants have been recorded in European oral traditions, and it has been suggested that the group may include tales from other regions, including Africa and East Asia. To shed more light on these relationships, data on 58 folktales were analysed using cladistic, Bayesian and phylogenetic network-based methods. These findings demonstrate that phylogenetic methods provide a powerful set of tools for testing hypotheses about cross-cultural relationships among folktales, and point towards exciting new directions for research into the transmission and evolution of oral narratives. (Abstract excerpts)
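The cladistic approach described above rests on coding each tale variant by the presence or absence of narrative traits and grouping variants by similarity. A minimal toy version, with invented motif codings (the paper's 58-tale data set is not reproduced here):

```python
# Toy trait-based comparison: each folktale variant is coded as a binary
# vector of motifs (1 = present), and variants are compared by Hamming
# distance. The codings are invented; the paper analyses 58 folktales
# with cladistic, Bayesian, and phylogenetic network methods.

def hamming(a, b):
    """Number of motifs on which two tale variants differ."""
    return sum(x != y for x, y in zip(a, b))

tales = {
    "variant_A": (1, 1, 0, 1, 0),
    "variant_B": (1, 1, 0, 0, 0),
    "variant_C": (0, 0, 1, 0, 1),
}

# A is closer to B than to C, so A and B would be joined first when
# building a tree of relationships.
print(hamming(tales["variant_A"], tales["variant_B"]),
      hamming(tales["variant_A"], tales["variant_C"]))  # -> 1 5
```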

Thompson, Elaine, et al. Hemispheric Asymmetry of Endogenous Neural Oscillations in Young Children. Nature Scientific Reports. 6/19737, 2016. A confluence of advances in neuroscience along with works such as by Maryanne Wolf (2007) and Laura Otis (2015) are finding that our human perception of speech and text involves much more than left hemisphere language. An interplay of both this detail mode with expansive right half emotional, prosodic rhythm, (e.g. hand movement), is needed for full communication and comprehension. Here a team from coauthor Nina Kraus’ Auditory Neuroscience Laboratory at Northwestern University quantify how faster, high-frequency inputs go leftward, while lower-frequency, slower sounds go to the right cerebral side. The significant result is to perceive, on still another occasion, this common complementarity, which again aligns with gender archetypes.

Speech signals contain information in hierarchical time scales, ranging from short-duration (e.g., phonemes) to long-duration cues (e.g., syllables, prosody). A theoretical framework to understand how the brain processes this hierarchy suggests that hemispheric lateralization enables specialized tracking of acoustic cues at different time scales, with the left and right hemispheres sampling at short (25 ms; 40 Hz) and long (200 ms; 5 Hz) periods, respectively. In adults, both speech-evoked and endogenous cortical rhythms are asymmetrical: low-frequency rhythms predominate in right auditory cortex, and high-frequency rhythms in left auditory cortex. It is unknown, however, whether endogenous resting state oscillations are similarly lateralized in children. We investigated cortical oscillations in children (3–5 years; N = 65) at rest and tested our hypotheses that this temporal asymmetry is evident early in life and facilitates recognition of speech in noise. We found a systematic pattern of increasing leftward asymmetry for higher frequency oscillations. The observed connection between left-biased cortical oscillations in phoneme-relevant frequencies and speech-in-noise perception suggests hemispheric specialization of endogenous oscillatory activity may support speech processing in challenging listening environments, and that this infrastructure is present during early childhood. (Abstract)

Torre, Enrico. Language as an Emergent Construction Network. Ecological Psychology. 27/3, 2016. Based on a topical, dissertation study of Italian idioms, a Lancaster University, UK linguist affirms that conversation and literature are similarly formed, suffused, and distinguished by the same nonlinear dynamical systems as everywhere else. This complex intersubjective experience evolves over time as an ongoing process of self-organization as a result of a multiplicity of context-bound interactions between intentional agents and their physical and sociocultural environment.

In this contribution, I investigated the structure of Italian idioms from a perspective that combines insights from constructionist and dynamic-systems approaches to language. On the basis of the tendencies observed in the analysis, I observed that the patterns of stability and variation of idioms in use can be satisfactorily accounted for in dynamic-systems terms. I then argue that the use of idiomatic constructions is governed by a principle of causal circularity, whereby the attractor state constrains the possible use of a construction, but at the same time the bulk of occurrences of an idiom shapes the attractor in an ongoing, nonlinear process of self-organization. Looking beyond idioms, I propose that similar mechanisms may regulate the functioning of the linguistic system as a whole, consistent with the constructionist view of language as a network of interconnected units. (Abstract)

Torre, Ivan, et al. On the Physical Origin of Linguistic Laws and Lognormality in Speech. Royal Society Open Science. 6/8, 2019. Five systems linguists posted in Madrid, London, and Merced, CA including Lucas Lacasa and Chris Kello offer another attempt across this spatial and temporal expanse to quantify a deep connection between physics and prosodic prose. Nature’s generative, law-abiding mathematics appear to instantiate themselves wherever they can, no less in our voluminous literature and conversations.

In this work, we examine whether linguistic laws hold with respect to the physical manifestations of linguistic units in spoken English. The data we analyse come from a phonetically transcribed database of acoustic recordings known as the Buckeye Speech corpus. First, we verify that acoustic durations of linguistic units at several scales comply with a lognormal distribution, and justify this using a stochastic generative model. Second, we explore the classical linguistic laws (Zipf’s, Herdan’s, Brevity and Menzerath–Altmann’s) in oral communication, both in physical units and in symbolic units measured in the speech transcriptions. Altogether, these results support the hypothesis that statistical laws in language have a physical origin. (Abstract excerpt)
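The lognormal claim above has a simple generative rationale: a duration built up from many multiplicative random factors has an approximately normal logarithm. A minimal sketch under assumed parameters (base duration, noise range, sample size), not the paper's Buckeye corpus fit:

```python
# Generative sketch of lognormality: multiply many independent random
# factors and the log of the product tends toward a normal distribution.
# All parameters below are illustrative assumptions.
import math
import random

random.seed(0)
durations = []
for _ in range(10_000):
    d = 0.1  # assumed base duration in seconds
    for _ in range(20):
        d *= random.uniform(0.9, 1.1)  # multiplicative noise factor
    durations.append(d)

logs = [math.log(d) for d in durations]
mean = sum(logs) / len(logs)
# Log-durations cluster around log(0.1), as a lognormal model predicts.
print(abs(mean - math.log(0.1)) < 0.1)  # -> True
```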

Vitevitch, Michael and Rutherford Goldstein. Keywords in the Mental Lexicon. Journal of Memory and Language. 74/2, 2014. A typical current article by University of Kansas psychologists who are applying network science to all manner of speech and script format and conveyance. See Vitevitch’s publication page for citations since 2008, which say that, just as everywhere else in nature and society is illuminated this way, such textual and spoken venues likewise exemplify dynamic node-and-link topologies. As this universal corpus is averred, it could imply an innately literary cosmos whose narrative its children are meant to learn to read.

Westling, Louise. The Logos of the Living World: Merleau-Ponty, Animals and Language. New York: Fordham University Press, 2014. The University of Oregon professor of English and environmental studies turns to the French phenomenological philosopher Maurice Merleau-Ponty (1908-1961) for an encompassing nature which in its deepest essence is a literary, textual milieu. Our human role and effort is to learn to read, write, and express this logos script and message. His mid 20th century musings are then seen in much accord with a growing biosemiotic interpretation. For further reference, this entry is logged along with a 2014 PNAS paper by Christos Papadimitriou (search), which holds that a natural “algorithmic” component and source, from which animate complexity and cognizance arise, has been missing or neglected.

Merleau-Ponty wanted to explain how a kind of mute meaning or Logos is everywhere in the primordial or wild Being that is our only environment and that of every organism on the planet. The function of human language and culture is to make this meaning visible and to extend it. (136)

Woods, Christopher, ed. Visible Language: Inventions of Writing in the Ancient Middle East and Beyond. Chicago: Oriental Institute of the University of Chicago, 2010. From our global 21st century, humankind can look back and survey the myriad historical inscriptions from cuneiforms and hieroglyphs to alphabets as if the universe is trying to write itself down. Alas, in current postmodern academia, it has been concluded that there is no extant narrative, nothing to learn to read.

Writing, the ability to make language visible and permanent, is one of humanity's greatest inventions. This book presents current perspectives on the origins and development of writing in Mesopotamia and Egypt, providing an overview of each writing system and its uses. Essays on writing in China and Mesoamerica complete coverage of the four "pristine" writing systems — inventions of writing in which there was no previous exposure to texts. The authors explore what writing is, and is not, and sections of the text are devoted to the Anatolian hieroglyphs, and to the development of the alphabet in the Sinai Peninsula in the second millennium BC and its spread to Phoenicia where it spawned the Greek and Latin alphabets.

Xie, R. R., et al. Quantitative Entropy Study of Language Complexity. arXiv:1611.04841. Central China Normal University, Wuhan University of Technology, and University of Bergen, Norway researchers find even our human literature to be rooted in and moved by thermodynamic energies.

We study the entropy of Chinese and English texts, based on characters in case of Chinese texts and based on words for both languages. Significant differences are found between the languages and between different personal styles of debating partners. The entropy analysis points in the direction of lower entropy, that is of higher complexity. Such a text analysis would be applied for individuals of different styles, a single individual at different age, as well as different groups of the population. (Abstract)

When we chose the word as the quantum of a language we are in a similar situation as the early thermodynamics. A constant is remaining to be determined, to compare the entropy of the language to that of the ideal gases or the Human DNA sequence. (1)
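The word-level entropy at issue in this entry can be sketched as a plain unigram Shannon estimate, H = -Σ p·log₂(p) over the empirical word frequencies; the paper's own estimators and corpora may differ, and the sample sentence below is invented.

```python
# Word-level Shannon entropy sketch: a unigram estimate over the
# empirical word distribution of a text. The sentence is an
# illustrative assumption, not drawn from the paper's corpora.
import math
from collections import Counter

def word_entropy(text):
    """Shannon entropy (bits per word) of the word distribution of text."""
    words = text.lower().split()
    n = len(words)
    counts = Counter(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(round(word_entropy("to be or not to be"), 3))  # -> 1.918
```

Lower entropy here signals a more predictable, hence more structured, word distribution, the direction of "higher complexity" the abstract describes.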
