Natural Genesis
A Sourcebook for the Worldwide Discovery of a Creative Organic Universe

IV. Ecosmomics: Independent, UniVersal, Complex Network Systems and a Genetic Code-Script Source

2. Biteracy: Natural Algorithmic Computation

Wolfram, Stephen. Logic, Explainability and the Future of Understanding. Complex Systems. 28/1, 2019. The polymath prodigy (bio below) here provides a 40-page technical survey of the history, present state, and prospects of philosophical knowledge by way of its computational basis. In this journal edited by Hector Zenil, see also On Patterns and Dynamics of Rule 22 Cellular Automaton by Genaro Martinez, et al (28/2).
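
As a concrete aside, an elementary cellular automaton such as Rule 22 takes only a few lines to run. The minimal Python sketch below is our illustration, not code from the paper; the rule number's binary expansion supplies the new bit for each three-cell neighborhood.

```python
# A minimal elementary cellular automaton runner for Rule 22: the rule
# number's binary expansion gives the new bit for each 3-cell neighborhood.

def step(cells, rule=22):
    """One synchronous update of an elementary CA with periodic edges."""
    n = len(cells)
    table = [(rule >> i) & 1 for i in range(8)]  # neighborhood value -> new bit
    return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
            for i in range(n)]

# Start from a single live cell and print the characteristic triangular growth.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join(".#"[c] for c in row))
    row = step(row)
```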

Stephen Wolfram is the creator of Mathematica, Wolfram|Alpha and the Wolfram Language; the author of A New Kind of Science; and the founder and CEO of Wolfram Research. Born in London in 1959, he was educated at Eton and Oxford and received his PhD in theoretical physics from Caltech at age 20. Wolfram's early scientific work was mainly in high-energy physics, quantum field theory and cosmology. Over the course of four decades, he has been responsible for many discoveries, inventions and innovations in computer science and beyond. (www.stephenwolfram.com)

Wolpert, David, et al, eds. The Energetics of Computing in Life and Machines. Santa Fe: Santa Fe Institute Press, 2018. These highly technical proceedings from SFI seminars consider more efficient computational methods by way of a better, deeper integration with vital principles and procedures. For example see Overview of Information Theory and Stochastic Thermodynamics of Computation by Wolpert (search), Information Processing in Chemical Systems by Peter Stadler, et al, and Automatically Reducing Energy Consumption of Software by Stephanie Forrest, et al.

Why do computers use so much energy? What are the fundamental physical laws governing the relationship between the precise computation run by a system, whether artificial or natural, and how much energy that computation requires? Can we learn how to improve efficiency in computing by examining how biological computers manage to be so efficient? The time is ripe for a new synthesis of systems physics, computer science, and biochemistry. This volume integrates pure and applied concepts from these diverse fields, with the goal of cultivating a modern, nonequilibrium thermodynamics of computation.
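
The energy question above has a hard thermodynamic floor, Landauer's kT ln 2 of heat per erased bit. The short calculation below is our back-of-envelope illustration (the 1 pJ/bit comparison figure is a hypothetical round number, not from the volume).

```python
# Landauer's principle: erasing one bit must dissipate at least kT ln 2 of heat.

import math

k = 1.380649e-23                      # Boltzmann constant, J/K
T = 300.0                             # room temperature, K
landauer = k * T * math.log(2)
print(f"Landauer bound at 300 K: {landauer:.3e} J per bit")  # ~2.87e-21 J

# Hypothetical comparison figure: a machine spending 1 pJ per bit operation
# sits roughly eight orders of magnitude above the thermodynamic floor.
print(f"1 pJ/bit vs the bound: {1e-12 / landauer:.2e}x")
```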

Woods, Damian, et al. Diverse and Robust Molecular Algorithms Using Reprogrammable DNA Self-Assembly. Nature. 567/366, 2019. Seven Caltech and Harvard bioinformatic researchers including Erik Winfree and David Doty (search each) advance understandings of how nature's helical nucleotides can be availed for many more chemical, structural, and data storage uses beyond replication. Who then are we cosmic curators to learn all about and intentionally take up life's organic procreativity?

Molecular biology provides a proof-of-principle that chemical systems can store and process information to direct molecular activities such as the fabrication of complex structures from molecular components. Mathematical tiling and statistical–mechanical models of molecular crystallization have shown that algorithmic behaviour can be embedded within molecular self-assembly processes by DNA nanotechnology. Here we report the design and validation of a DNA tile set that contains 355 single-stranded tiles and can be reprogrammed to implement a wide variety of 6-bit algorithms. These findings suggest that molecular self-assembly could be a reliable algorithmic component within programmable chemical systems. (Abstract excerpt)
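
As a rough software analogy to such molecular algorithms, and emphatically not the authors' 355-tile set, the toy sketch below treats each "tile" as fixing one bit of a new row from the bit beneath it and a carry from its neighbor, so that the growing assembly counts in binary, the classic demonstration that tiling can compute.

```python
# Each "tile" fixes one bit of a new row from the bit beneath it and a carry
# from its right-hand neighbor, so the growing assembly counts in binary.

WIDTH = 6  # the paper's tile set implements 6-bit algorithms

def grow_row(below):
    """Attach a row of tiles representing below + 1 (mod 2**WIDTH)."""
    row, carry = [], 1                 # a seed tile injects a carry at the right
    for bit in reversed(below):        # tiles attach right to left
        row.append((bit + carry) % 2)  # the tile's output bit
        carry = (bit + carry) // 2     # carry handed to the next tile
    return list(reversed(row))

assembly = [[0] * WIDTH]               # seed row
for _ in range(7):
    assembly.append(grow_row(assembly[-1]))
for r in assembly:
    print("".join(map(str, r)))        # rows read 000000, 000001, 000010, ...
```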

Wu, Jun. The Beauty of Mathematics in Computer Science. Boca Raton: CRC Press, 2019. This popular text in Chinese by a Google Brain USA member (bio below) is here published in English. Its main message is to show the deep affinities between algorithmic code, scriptural languages, and innate mathematical principles.

The book covers many topics, including natural language processing, speech recognition and machine translation, statistical language modeling, quantitative measurement of information, PageRank for web search, matrix operations and document classification, the mathematical background of big data, and neural networks and Google's deep learning.
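
Of these topics, PageRank is easily miniaturized. The power-iteration sketch below is a generic textbook illustration, not Google's production algorithm.

```python
# Generic power-iteration PageRank over a four-page toy web.

def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to its list of outbound pages."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in links.items():
            if outs:
                share = damping * rank[v] / len(outs)
                for w in outs:
                    new[w] += share
            else:                      # dangling page: spread its rank evenly
                for w in nodes:
                    new[w] += damping * rank[v] / n
        rank = new
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(web))                   # "c" collects the highest rank
```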

Jun Wu was a research scientist who invented Google's Chinese, Japanese, and Korean web search algorithms. He wrote official blogs introducing Google technologies in simple language for Chinese Internet users from 2006 to 2010. Wu received his PhD in computer science from Johns Hopkins University and has been working on natural language processing for more than 20 years.

Xu, Lei. Further Advances on Bayesian Ying-Yang Harmony Learning. Applied Informatics. Online June, 2016. The Chinese University of Hong Kong chair of computer science provides a succinct update on his project since the 1990s to join traditional Chinese organic philosophy with Bayesian probabilities so as to achieve a preferred way, via an intricate mathematics, to gather and realize natural knowledge. On the author's CUHK website is a steady list of publications which explore and refine these unique insights. If one may distill, sentences such as "Ying (feminine principle) is primary and comes first, while the Yang (male) is secondary and bases on the Ying" express and qualify a dynamic reality with both an inner, constant, code-like source and an overt, manifest animate structure. By whatever terms, its once and future essence is a universally evident gender complementarity, by creative turns Ying, Yang, and may we say Taome.

His 2010 paper Bayesian Ying-Yang System, Best Harmony Learning, and Five Action Circling (Frontiers of Electrical and Electronic Engineering in China, 5/3) presents a cogent statement of this creative cosmology, from which we quote next. We also note On Essential Topics of BYY Harmony Learning in the same journal (7/1, 2012, each in English by Springer). Its actual validity can be established because it comes closest to an independent phenomenal nature. By this perception, a prime, holistic feminine phase envisions knowledge from subsidiary masculine data reports; as internal and external phases, this models well the dynamic learning function of the bicameral right and left brain hemispheres. There is great wisdom here which needs to be appreciated and availed.

Complementary composition of Ying-Yang system: A system that survives or interacts with its world is able to be functionally divided into two different but complement parts. One is called Yang that inputs from its external world called Yang domain and transforms what gathered via a Yang pathway into an inner domain; while the other is Ying that consists of this inner domain called Ying domain and a Ying pathway. The Ying domain accumulates, integrates, digests, and condenses whatever came from Yang, and the Ying pathway selects among the Ying domain the best ones to produce the reconstructions back to the Yang domain. (321, 2010) Instead of simply regarding Ying-Yang as two opposite parts, which was frequently misunderstood by westerns, the major natures of a Ying-Yang pair are described by the following propositions: (1) Ying is primary, while Yang is secondary and comes from Ying, (2) Ying and Yang are not exclusive each other, though they were sometimes misunderstood by ones from a logical perspective. (321, 2010)
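
As a loose illustration only, and emphatically not Xu's actual BYY harmony functional, the toy loop below casts the quoted two-pathway picture in code: a Yang pathway condenses an outer datum into an inner code, a Ying pathway reconstructs it back outward, and both adapt toward agreement.

```python
# Yang pathway: outer datum x -> inner code y; Ying pathway: y -> reconstruction.
# Both pathways adapt to reduce the mismatch ("disharmony") between domains.

import random

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(200)]   # the Yang (outer) domain

w_yang, w_ying, lr = 0.5, 0.5, 0.01                   # pathway weights, step size
for _ in range(100):
    for x in data:
        y = w_yang * x                # Yang pathway condenses outer into inner
        x_hat = w_ying * y            # Ying pathway reconstructs back outward
        err = x - x_hat               # residual disharmony between the domains
        w_ying += lr * err * y        # gradient steps on the squared residual
        w_yang += lr * err * w_ying * x
print(f"w_yang * w_ying ~ {w_yang * w_ying:.3f}  (approaches 1 at harmony)")
```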

Lei Xu is a chair professor of computer science at The Chinese University of Hong Kong. He graduated from Harbin Institute of Technology in 1981, and completed his master's and PhD theses at Tsinghua University during 1982–1986. During 1989–1993, he worked at several universities in Finland, Canada and the USA, including Harvard and MIT. He joined CUHK in 1993, became professor in 1996, and has held his current position since 2002. Prof. Xu has published dozens of journal papers and many more in conference proceedings and edited books, covering statistical learning, neural networks, and pattern recognition; his well-cited papers have received over 5,500 citations according to Google Scholar.

Bayesian probability is one interpretation of the concept of probability. In contrast to interpreting probability as frequency or propensity of some phenomenon, Bayesian probability is a quantity that is assigned to represent a state of knowledge, or a state of belief. The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses, i.e., the propositions whose truth or falsity is uncertain. In the Bayesian view, a probability is assigned to a hypothesis, whereas under frequentist inference, a hypothesis is typically tested without being assigned a probability. (Wikipedia)
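
A single worked instance of such Bayesian updating, with illustrative numbers of our own choosing:

```python
# Bayesian update for a test with 1% prior, 99% sensitivity, 5% false positives.

prior = 0.01                          # P(hypothesis)
p_pos_h = 0.99                        # P(positive | hypothesis)
p_pos_not_h = 0.05                    # P(positive | not hypothesis)

evidence = prior * p_pos_h + (1 - prior) * p_pos_not_h
posterior = prior * p_pos_h / evidence
print(f"P(hypothesis | positive) = {posterior:.3f}")  # ~0.167, not 0.99
```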

Applied Informatics covers the theory and application of informatics in various scientific, technological, engineering and social fields. Aiming to inspire new multidisciplinary research, the journal acts as an integrative venue that collects high-quality original research papers and reviews on various aspects of applied informatics, with the foundations of informatics (information theory, statistical modeling, machine learning, etc.) as the driving core and the interactions between essential realms as the promoting focuses; particularly important are the interactions between (a) life sciences (bioinformatics, medical informatics, bioengineering); (b) intelligence sciences (neural and cognitive informatics, multimedia, pattern recognition); and (c) community sciences (social networks, affective computing, big data analytics).

Yang, Shengxiang and Xin Yao, eds. Evolutionary Computation for Dynamic Optimization Problems. Berlin: Springer, 2015. An edition amongst a burst of books which engage and implement the growing notice that nature's cosmic to culture development involves a program source code which seems on a course to achieve its own intelligent self-realization and intentional continuance. A typical chapter might be Memetic Algorithms for Dynamic Optimization Problems. See also Nature-Inspired Metaheuristic Algorithms by Xin-She Yang (Luniver Press, 2010).

Yang, Xin-She. Nature-Inspired Optimization Algorithms. Amsterdam: Elsevier, 2014. Reviewed more in Universal Darwinism, the work is a recent survey of life's evolutionary proclivity to search, test, and reach good-enough solutions by way of such operations.

Yang, Xin-She. Social Algorithms. arXiv:1805.05855. The Middlesex University, UK computational mathematician and author (search) posts a chapter for the 2020 edition of the Encyclopedia of Complexity and Systems Science (Meyers) which explains an array of features that distinguish this integral class of natural computations. See also Swarm Intelligence: Past, Present and Future by the author at 1804.07999.


This article concerns the review of a special class of swarm intelligence based algorithms for solving optimization problems and these algorithms can be referred to as social algorithms. Social algorithms use multiple agents and the social interactions to design rules for algorithms so as to mimic certain successful characteristics of the social/biological systems such as ants, bees, bats, birds and animals. (Abstract)

Social Algorithms: Social algorithms are a class of nature-inspired algorithms that use some of characteristics of social swarms such as social insects (e.g., ants, bees, bats and fireflies) and reproduction strategies such as cuckoo-host species co-evolution. These algorithms tend to be swarm intelligence based algorithms. Examples are ant colony optimization, particle swarm optimization and cuckoo search. (Glossary, 2)

Many optimization problems in science and engineering are challenging to solve, and the current trend is to use swarm intelligence (SI) and SI-based algorithms to tackle such challenging problems. Some significant developments have been made in recent years, though there are still many open problems in this area. This paper provides a short but timely analysis about SI-based algorithms and their links with self-organization. Different characteristics and properties are analyzed here from both mathematical and qualitative perspectives. (1804.07999).
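
To make the swarm idea concrete, here is a minimal particle swarm optimization, one of the social algorithms named in the glossary above. This is a generic textbook version, not a reconstruction of any specific variant in the chapter: each agent blends its own memory with the swarm's best find.

```python
# Minimal particle swarm optimization: agents mix personal and social memory.

import random

def sphere(x):  # toy objective: minimum 0 at the origin
    return sum(xi * xi for xi in x)

def pso(f, dim=2, agents=20, steps=100, w=0.7, c1=1.5, c2=1.5):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(agents)]
    vel = [[0.0] * dim for _ in range(agents)]
    best = [p[:] for p in pos]                  # personal best positions
    gbest = min(best, key=f)[:]                 # social (global) best
    for _ in range(steps):
        for i in range(agents):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (best[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(best[i]):
                best[i] = pos[i][:]
                if f(best[i]) < f(gbest):
                    gbest = best[i][:]
    return gbest, f(gbest)

print(pso(sphere))                              # converges near (0, 0)
```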

Yang, Xin-She and Joao Paulo Papa, eds. Bio-Inspired Computation and Applications in Image Processing. Amsterdam: Elsevier, 2016. In this volume, the Middlesex University London mathematician and author is joined by a São Paulo State University computer scientist. As these biomimetic ways and means gain wide usage, a typical chapter is Fine-Tuning Deep Belief Networks Using Cuckoo Search.

Yang, Xin-She, et al, eds. Swarm Intelligence and Bio-Inspired Computation. Amsterdam: Elsevier, 2013. An introductory text for spontaneous optimizations due to natural, organic agencies of stochastic, evolutionary, dynamic self-emergence. As noted in this section, the way social insects and animal groupings achieve this can be a good source and guide.

Swarm Intelligence is the collective behavior of decentralized, self-organized systems, natural or artificial. SI systems consist typically of a population of simple agents or boids interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems. The agents follow simple rules, and although there is no centralized control structure dictating how individual agents should behave, local, to a certain degree random interactions between such agents lead to the emergence of "intelligent" global behavior, unknown to the individual agents. Examples in natural systems of SI include ant colonies, bird flocking, animal herding, bacterial growth, fish schooling and microbial intelligence. (Wikipedia)
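
The local rules behind such flocking take only a few lines. The toy sketch below is our illustration, using flock-wide averages rather than strictly local neighborhoods for brevity, with the classic cohesion, alignment, and separation steers.

```python
# Cohesion, alignment, and separation steers for a small 2D flock, with
# positions and velocities packed into complex numbers for brevity.

import random

random.seed(0)
N = 15
pos = [complex(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N)]
vel = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

for _ in range(100):
    center = sum(pos) / N                       # flock's center of mass
    mean_v = sum(vel) / N                       # flock's mean heading
    for i in range(N):
        cohesion = (center - pos[i]) * 0.01     # steer toward the group
        alignment = (mean_v - vel[i]) * 0.05    # match the common heading
        separation = 0j
        for j in range(N):
            if j != i and abs(pos[j] - pos[i]) < 1.0:
                separation += pos[i] - pos[j]   # push away from crowding
        vel[i] += cohesion + alignment + separation * 0.05
        pos[i] += vel[i]

print(f"final flock spread: {max(abs(p - sum(pos) / N) for p in pos):.2f}")
```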

Zarco, Mario and Tom Froese. Self-Optimization in Continuous-Time Recurrent Neural Networks. Frontiers in Robotics and AI. Online August, 2018. In an entry edited by Claudius Gros and reviewed by Richard Watson, Universidad Nacional Autónoma de México mathematicians glimpse, even in cerebral cognitions, an evidential presence of nature's universal process whereby many relatively random states, in this case neurons, iteratively evolve into viable “attractor configurations.”

A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors on which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application to less restricted neural controllers, as typically used in evolutionary robotics, has not yet been attempted. Here we show for the first time that the self-optimization process can be implemented in a continuous-time recurrent neural network with asymmetrical connections. (Abstract excerpts)
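
A toy symmetric-weight Hopfield rendering of this self-optimization loop, our simplification rather than the authors' continuous-time asymmetric network: relax to an attractor, Hebbian-learn it, and repeat, so the learned trace gradually widens the basins of the better configurations.

```python
# Relax a random state to an attractor, Hebbian-learn that attractor, repeat;
# the learned trace gradually enlarges the basins of lower-energy attractors.

import random

random.seed(2)
N = 20
W = [[0.0] * N for _ in range(N)]              # fixed "constraint" weights
for i in range(N):
    for j in range(i):
        W[i][j] = W[j][i] = random.uniform(-1, 1)
L = [[0.0] * N for _ in range(N)]              # learned associative-memory trace

def relax(s):
    """Asynchronous updates until (approximately) settled on an attractor."""
    for _ in range(10 * N):
        i = random.randrange(N)
        h = sum((W[i][j] + L[i][j]) * s[j] for j in range(N))
        s[i] = 1 if h >= 0 else -1
    return s

def energy(s):
    """Configuration quality under the original constraints W alone."""
    return -0.5 * sum(W[i][j] * s[i] * s[j] for i in range(N) for j in range(N))

for epoch in range(201):
    s = relax([random.choice([-1, 1]) for _ in range(N)])
    for i in range(N):                          # Hebbian trace of the attractor
        for j in range(N):
            if i != j:
                L[i][j] += 0.001 * s[i] * s[j]
    if epoch % 50 == 0:
        print(f"epoch {epoch:3d}: attractor energy {energy(s):+.2f}")
```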

Zarcone, Ryan, et al. Analog Coding in Emerging Memory Systems. Nature Scientific Reports. 10/6831, 2020. We cite this entry by ten UC Berkeley, Stanford, IBM Research, Macronix International, and Google Brain researchers including Jesse Engel as another example, akin to Miguel Nicolelis’ neuroscience (search), of the optimum value of complementary digital byte and analog link pairings. These archetypal modes are now seen to hold across nature’s genesis, except for political elections where they remain pitted against each other, unaware of any scientific findings.

Exponential growth in data generation and big data science has created an imperative for low-power, high-density information storage. This need has motivated research into multi-level memory devices capable of storing multiple bits per device because their memory state is intrinsically analog. Furthermore, much of the data they will store, along with the subsequent operations, are analog-valued. However the current storage paradigm is quantized for use with digital systems. Here, we recast storage as a communication problem, which allows us to use ideas from analog coding and show that analog-valued emerging memory devices can achieve higher capacities. (Abstract excerpt)
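
Recast as a communication problem, an analog memory cell looks like a Gaussian channel whose Shannon capacity, 0.5 log2(1 + SNR) bits per cell, can exceed a fixed quantization. The numbers below are our illustration, not the paper's measurements.

```python
# An analog memory cell as a Gaussian channel: capacity 0.5 * log2(1 + SNR)
# bits per cell, versus the fixed 2-3 bits of a quantized multi-level cell.

import math

for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)
    bits = 0.5 * math.log2(1 + snr)
    print(f"SNR {snr_db:2d} dB -> capacity {bits:.2f} bits per cell")
```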
