What are the things that control evolution?
Is it still worth discussing DARWIN's work today, especially his main work "On the Origin of Species by Means of Natural Selection, or the Preservation of Favoured Races in the Struggle for Life", published in 1859? Many extensive works have appeared on this subject over the last 130 years, including E. MAYR's "The Growth of Biological Thought" (1982). MAYR makes it clear that DARWIN advanced not just one but a total of five largely independent theories:
- evolution itself,
- common ancestry,
- gradual evolution (gradualism),
- species formation as a population phenomenon,
- natural selection.
Each of these theories has been hotly debated, and only over the course of decades have they all gained increasing acceptance.
Like no other, Ernst MAYR has constantly pointed out misinterpretations and confusions of terms, above all concerning the phenomenon of selection; he regards it as a great achievement of biological research in this century to have recognized the "unity of biology". He emphasizes the dichotomy of genotype and phenotype, makes it clear that natural selection acts on the phenotype and not on the genotype, and shows the great importance of genetics in clarifying evolutionary processes. He also shows clearly that one always has to deal with complexity in biology and that the attempt to reduce biology to physics rests on a misunderstanding. MAYR presents a wealth of evidence for the correctness and general validity of DARWIN's theories of evolution, but he does not make use of all the knowledge gained through research during this century. He hardly deals with microorganisms at all and with plants only marginally, he does not include cells in his line of argument, and some modern molecular-biological findings that would further support the evolutionary idea are not fully exploited. Nevertheless, none of these omissions reduces the scope of his statements. P. SITTE, in a recently published article emphasizing the importance of the cell, has shown that the paths of biology led from the complex to the elementary. That is plausible, because people perceive complex things with their senses; they first try to find their way around in their environment (a real selection advantage in the Darwinian sense), and only once they have gathered enough information do they try to clarify causes.
The advance of biological research in this century has been based in large part on the design and use of appropriate research equipment. Since the results obtained in this way (in cell biology, molecular biology, etc.) were (and are) primarily accessible only to a small number of experts, they hardly formed starting points for a public and/or ideological discussion. Such discussion has only gotten under way in recent years, in connection with genetic engineering and questions of safety.
If one takes up the subject of evolutionary research again at the end of the 20th century, one must start from the current level of knowledge in the natural sciences. Unlike the cognitive process in biology, the evolution of organisms proceeded from the simple (from the elementary) to the complex.
The 20th century is considered by biologists to be the century of heredity or genetics, the first half being the epoch of classical genetics and the second half that of molecular genetics. DARWIN knew little about the actual mechanisms underlying species formation: he himself developed a theory of inheritance, the pangenesis theory, which can at least serve as an explanation of "common descent". By around 1930/1940 there were already sufficiently reliable statements about the inheritance of characteristics, so that DARWIN's theories could be viewed anew from this standpoint. T. DOBZHANSKY (1937), J. HUXLEY ("Evolution: The Modern Synthesis", 1942) and G. L. STEBBINS (1950) summarized the results and thus established the "synthetic theory" (neo-Darwinism). Population geneticists (R. A. FISHER, 1930; S. WRIGHT, 1931) defined the terms fitness and selection mathematically and made the corresponding phenomena accessible to quantitative analysis.
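In its simplest single-locus form, this quantitative treatment can be sketched as follows (an illustrative haploid model, not FISHER's or WRIGHT's original formulation; the fitness values are invented for the example):

```python
# Haploid one-locus selection: allele A has relative fitness w_a,
# allele B has fitness w_b; p is the frequency of A in the population.
def next_frequency(p, w_a, w_b):
    """One generation of selection on a single locus."""
    mean_fitness = p * w_a + (1 - p) * w_b
    return p * w_a / mean_fitness

def simulate(p0, w_a, w_b, generations):
    p = p0
    for _ in range(generations):
        p = next_frequency(p, w_a, w_b)
    return p

# A 5 % fitness advantage drives an initially rare allele toward fixation.
p_final = simulate(p0=0.01, w_a=1.05, w_b=1.00, generations=500)
```

With a 5 % advantage, an allele starting at a frequency of 1 % approaches fixation within a few hundred generations - the kind of quantitative statement that pure observation could not provide.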
Here, however, MAYR's repeated objection applies: it is never the individual gene or allele that is decisive for selection, but the integrity of the genome. The whole genome is necessary for the expression of the phenotype and therefore determines its fitness, so that mathematical models developed on a gene (or allele) basis are, in his view, largely irrelevant for understanding evolution.
In addition to genetics, two other disciplines gained importance for the understanding of evolution in this century. The first is biochemistry, through which a postulate made by M. SCHLEIDEN as early as 1842 was fulfilled:
"It has already been noted earlier that we have not explained anything in the life of the plant as long as we have not demonstrated the physical or chemical processes on which it is based, and it is precisely for this that it is essential that we conduct our investigations the simplest case, starting with the single cell. ",
The second comprises information theory and systems theory. These do not apply exclusively to biology, but they provide a solid operational basis, especially for evolutionary research, for organizing the accumulated knowledge in a clear manner and making the increase in complexity and level of organization easier to understand.
Systems theory allows us to classify systems and the relationships of the individual system elements among and with one another. Systems can be arranged on different hierarchical levels (subsystems, higher-level systems, etc.). In order to analyze a system, one does not necessarily need to know the details within the subsystems; it has therefore proven useful to regard such a subsystem as a "black box".
In biology, the following hierarchical levels can be established as a rough approximation:
DARWIN concentrated (in accordance with the knowledge of his time) on the observation of organisms (species) and the interactions between them and their environment. He knew nothing about ecosystems, but he knew some of their system elements; he knew little about the cellular structure of organisms, but he knew many of their properties (e.g. perceiving environmental stimuli through sensory organs and reacting to them in a corresponding way). DARWIN was familiar with "time" and used it to explain changes and the accumulation of changes that would eventually lead to the emergence of new species. He showed that organisms (species) adapted to one another and to their environment in the course of time; he made it clear, among other things, that the variety of flower colors, scents and shapes would not exist without insects as pollinators. He was therefore able to form the idea of selection without knowing its underlying causes.
Today, selection can be understood as a property of matter that becomes recognizable whenever different "units" (e.g. particles of all kinds, cells, organisms, organisms and their environment) interact with one another. In this way, DARWIN's selection theory can also be extended to organizational levels of lower complexity (systems of lower hierarchical levels), and its general validity can be demonstrated. But if we face this problem today, we have to ask what system formation is based on and what regularities are recognizable at the individual hierarchical levels.
Mutation, recombination and selection are now commonly seen as the causes of evolution. DARWIN did not know the term mutation; he spoke of mutability, of variations and of varieties. We understand mutation to mean a change in hereditary material, but we now also know that there are numerous, fundamentally different mechanisms for changing genetic information. A distinction must be made as to whether individual genes (hereditary dispositions) or the entirety of the genes (the genome) are changed. Point mutations, frameshift mutations, insertion or excision of insertion elements, and changes in the signal segments required for expression are just some of the mechanisms that lead to the mutation of individual genes and thus influence the development of the phenotype. Chromosome mutations, duplication or deletion of genes, restructuring of the genome, and recombinations are examples of effects on the whole genome. To understand the evolutionary process, it is often crucial to recognize which of these mechanisms is decisive for a given genetic change.
Until recently, recombination was seen only as the result of the reshuffling of genes during sexual reproduction or of reciprocal crossing over. For several years, however, the so-called mobile genetic elements (transposons, insertion elements) have been attracting increasing interest among evolutionary researchers. These DNA segments can move around in the genome; they usually cause a (reversible) inactivation of existing genes or a restructuring of parts of the genome. It is becoming increasingly clear that they thereby act as a diversification factor and influence change in the genome more decisively than simple point mutations.
Selection, on the other hand, can be understood primarily as a property of matter, but secondarily also as a system property that always comes into play when different "units" (e.g. particles of all kinds, cells, organisms, organisms and their environment) interact with one another; in other words, there is always selection, or choice, wherever there is a readiness to react and the units reacting with one another are not present in exactly the same number (at the same place, at the same time). If this is the case, selection is inevitable.
Selection exists simply because there are different elementary particles, which (almost) never exist in isolation but (almost) without exception in association with other elementary particles. One of their elementary properties is that they interact with others. But only very specific elementary particles in a specific number aggregate and thus create a system of higher order with properties of a new quality. Nowhere is there an excess of free nucleus-forming elementary particles of a certain type. The atomic nucleus, together with electrons, forms an atom.
There are 92 naturally occurring elements. Owing to the properties of their outer electron shells, they can react with one another and thus in turn form a new organizational form of higher complexity: a molecule. The decisive factor here is the phenomenon of complementarity, of matching. One of the reaction partners (atoms) supplies electrons and the other accepts them, or both contribute one electron each to the formation of a shared electron pair. The formation of crystals is likewise based on an aggregation of similar (or mutually complementary) units to the exclusion of all others. All of this is very much simplified, but at this point it suffices to emphasize the importance of complementarity for selection. And another example of selection, with which we can steer purposefully towards biology: none of the 92 elements is as versatile in molecule formation as carbon. There is hardly a carbon-free molecule in organisms. But the range of molecules in organisms does not include everything that carbon is capable of. Organic chemists have synthesized an abundance of new types of molecules in the last few decades that are not found in any cell; mention need only be made of the numerous polymers that make up the most varied synthetic dyes and plastics, as well as the fullerenes discovered only a few years ago.
But how do we get from here to biology, life and evolution? We have to try to understand the emergence of increasingly complex structures and functions. No theory of evolution describes the successive processes exactly. The only questions that can be asked are: What if? Which requirements must be met so that a development can take place and/or a certain form can arise? What are the consequences of such a development, and what other system components are influenced by it? An inventory must be taken at each stage so that the questions can be posed anew. The answers must be such that they can be reconciled with observations and/or experimental findings.
Before the evolution of living systems can be discussed, two more findings from experimental chemistry must be mentioned:
It has been shown that almost all small molecules detected in cells can be synthesized under abiotic conditions (S. MILLER, 1955).
Under certain abiotic conditions, small molecules (e.g. amino acids and nucleotides) can polymerize into macromolecules (polypeptides, proteinoids and polynucleotides) (S. W. FOX, 1965; L. E. ORGEL, 1973).
It is precisely these macromolecules that form the most important functional elements of every cell. Polymers in turn represent a higher level of complexity; they are distinguished from small molecules (the subunits from which they are built) by properties of a new quality. Proteinoids can have catalytic properties, even if they lack the high efficiency and specificity of "modern" cell proteins (enzymes), and they have a tendency to aggregate into larger complexes (e.g. microspheres: hollow spheres of a certain size, consisting of many similar proteinoid molecules). One therefore speaks of self-assembly or self-organization. The ability to aggregate is based on the formation of weak interactions (electrostatic interactions, van der Waals forces, hydrogen bonds, etc.). Each of these interactions is far weaker than a chemical (covalent) bond; collectively, however, they can add up to form extremely strong associations. Almost all biological structures are stabilized in this way, be it the parts of a polypeptide chain that fold into a functional three-dimensional structure, the two complementary DNA strands that unite to form the Watson-Crick double helix, or the particular protein and nucleic acid molecules that assemble into a functional ribosome (a structure visible in the electron microscope).
Membranes, by which the cell contents are separated from the environment and by which the individual compartments of a cell are also surrounded, consist of lipids and proteins that are held together exclusively by weak interactions; the cells of every multicellular organism, whether human or plant, form stable associations in the same way; sex cells recognize the right partner because they carry complementary molecules (effectors and receptors) on their surfaces, which form a molecular complex before the cells unite. A distinction can thus be made between self and non-self; specificity is guaranteed. Even the adherence of pollen to the body of an insect is based on such (though rather unspecific) interactions; the lack of specificity reflects the lack of complementary patterns on the surfaces of the two reaction partners (pollen, insect).
Structures stabilized by weak interactions have a set of new properties. The stronger the degree of structural complementarity, the more weak interactions are formed and the more stable the complex becomes. At this point the comparison with a lock-and-key system is appropriate: only a key whose structure is complementary to that of the lock fits and turns.
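The additive character of weak interactions can be illustrated by a deliberately crude sketch, in which each "surface" is reduced to a row of donor (D) and acceptor (A) sites and a weak bond forms wherever the two patterns are complementary (the surfaces and the scoring are invented for illustration):

```python
# Toy model: each "surface" is a string of donor (D) / acceptor (A) sites.
# A weak bond forms wherever a donor faces an acceptor; each single bond is
# weak, but the bonds add up, so the best-matching partner binds most stably.
def bond_count(surface_a, surface_b):
    return sum(1 for x, y in zip(surface_a, surface_b)
               if {x, y} == {"D", "A"})

def best_partner(lock, keys):
    """The 'key' forming the most weak bonds with the lock wins."""
    return max(keys, key=lambda k: bond_count(lock, k))

lock = "DADDA"
keys = ["AADDA", "ADAAD", "DDDDD"]
best = best_partner(lock, keys)   # "ADAAD" is fully complementary: 5 bonds
```

The partner with the highest degree of complementarity forms the most bonds and therefore the most stable complex - the lock-and-key principle, "decided" not by any single bond but by their sum.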
On the other hand, weak interactions can be dissolved again under certain circumstances, and this property too has been extensively used and optimized in the course of the evolution of organisms: protein molecules (e.g. the biological catalysts, the enzymes) can change their structure under the influence of other molecules, and their catalytic activity decreases or increases accordingly, which means that their activities can be regulated (again a property of new quality); specifically formed, only temporarily open pores can arise in membranes, ensuring the passage of very specific molecules from the inside to the outside or vice versa (semipermeability); the membranes of two neighboring cells can detach at the contact point and then unite to form a single membrane that surrounds both cells, for example during the fusion of sex cells (fertilization). Some cells have the ability to migrate through tissues, temporarily loosening the contacts between neighboring cells. And finally, our pollen-insect example: the pollen is easy to detach and can therefore be transferred to a stigma, where appropriate. Whether it ends up there or somewhere else entirely depends not only on the aforementioned attachment to the insect, but above all on the insect's behavior. Selection is involved everywhere. The weak interactions in particular are a crucial prerequisite for the creation, evolution and maintenance of living systems.
But what characterizes a living system as compared to a dead system or a mere collection of molecules? Generally speaking, the molecules must be in constant direct relationship to one another, there must be a collection of the "correct" molecules, all molecular interactions must proceed according to a controllable program, and continuity (= growth and multiplication) must be guaranteed. How can the evolution of a first cell, or a preliminary stage of one, be explained on this basis? The Göttingen biophysicist M. EIGEN has been dealing with these problems since the early 1970s. He postulated that the physicochemical laws have not changed since the beginning of evolution, and he started from the knowledge that all living cells contain proteins and nucleic acids. Modifying the question of which came first, the hen or the egg, he asked whether there could have been evolution with proteins alone or with nucleic acids alone. He could rule out both alternatives. Proteins have catalytic properties and can aggregate into cell-like structures, but they cannot bring about evolution, because changes are not stored in such a system. Nucleic acids alone cannot lead to stable living systems either: although they can replicate themselves, develop catalytic properties and carry information to a small extent, replication (doubling of the two strands of a Watson-Crick double helix) is not error-free, and this does not guarantee stability. Only an interaction between nucleic acids and proteins leads to a stable system, provided certain requirements are met. The units involved in the interaction must join together to form a hypercycle. This means that a protein molecule's catalytic competence must be used to reduce the error rate in the replication of a nucleic acid to a minimum, while the nucleotide sequence for its part must at the same time contain the information used to instruct precisely this particular type of protein molecule.
The statistical probabilities that lead to the emergence of a hypercycle and the selection of the right one have been determined. Nothing speaks against the assumption that such processes took place on earth at the beginning of the evolution of living systems, and some of the sub-steps can be simulated under laboratory conditions (EIGEN, 1971; EIGEN and WINKLER, 1975; EIGEN, 1987). None of the assumptions presented here proves the existence of any particular step, and none says exactly what such a hypercycle might have looked like. EIGEN's theory, like DARWIN's, provides only a framework; it requires that none of the assumptions and none of the individual steps contradict physicochemical laws and/or observations on organisms (species).
It is not for nothing that EIGEN and WINKLER's 1975 work is called "The Game". Every game, e.g. a game of chess, follows a few rules that are familiar to everyone; the goal, too, is given, but not the strategy with which the goal can be achieved. In chess in particular, it has been shown that today's generation of computers and the programs developed for them do not guarantee that an accomplished player will be beaten.
The hypercycle theory includes a few more aspects:
The reactions can only have taken place in a small, reasonably demarcated reaction space; the hypercycle would collapse immediately if any of its essential components were lost through diffusion. The demarcation in turn implies the dichotomy, typical of biology, between the "living system" and its environment.
The term hypercycle may be misleading in one respect, because we are not dealing with a cycle in which one molecule interacts with another and then acts back on the first. Rather, a hypercycle represents a chain of successive reactions: a nucleic acid molecule instructs the formation of a new protein molecule, this in turn catalyzes the replication of the nucleic acid, then another protein molecule is formed, and so on. In other words, we are dealing with a growth function and have thus arrived at an essential criterion of living systems. Growth, and with it multiplication, means that these systems and the necessary system components must be produced in abundance in order to counteract tendencies to decay and to ensure stability. But it also follows that the continuity of living systems does not reside in the structural elements themselves; these (e.g. molecules, macromolecules, cells and organisms) have only a limited lifespan. Continuity is determined solely by the system's properties, above all by the ability to transfer (genetic) information to subsequent cycles (generations) and thus to instruct the synthesis of similar or improved (system-promoting) structural elements.
The information contained in the nucleic acid molecule has a certain value (a selection value); it has been selected from many possible pieces of information. The hypercycle is thus an example of how DARWIN's selection theory can also be applied to systems of lower hierarchical levels. In addition, a hypercycle is capable of evolution.
"Evolution means optimization of functional efficiency" writes M. EIGEN (1987). The growth and thus also the multiplication of the functional units of a hypercycle must take place faster than the rate of disintegration of the molecules. The "enemy" in the environment is thermally induced molecular movement. But in order to grow and multiply, raw materials and energy are required, and both have always been in short supply. Therefore, only such a hypercycle could survive, which could handle the resources particularly efficiently. As already explained, all the necessary components can arise under abiotic conditions, but probably not in the required quantities and in the right place, because every growth cycle increases further demand. The selection will therefore favor such a hypercycle, which not only sustains itself, but is also able to produce the required starting substances itself. He must therefore accumulate genetic information that is used to instruct the formation of such proteins, which are required for the synthesis of small molecules such as amino acids, nucleotides, etc. The consequence of this is the evolution of initially simple, then regulatable biosynthetic chains and finally of biosynthetic networks, as we find them today in every cell, regardless of whether it is a microorganism, plant or animal. Ultimately, only the hypercycle has proven itself, which developed the properties that we now collectively call metabolism.
Even in discussing the simple hypercycle, it was noted that the reactions must have taken place in a reasonably demarcated reaction space. This could have been almost anything: crevices in rock, possibly even microspheres. With increasing complexity, the value of all components involved in the system increased, and there was therefore selection pressure to develop a specific reaction space. However that may have happened, only the standardized membrane of lipids and proteins that surrounds each cell has prevailed. It separates inside from outside, it is stable enough (provided there are no serious osmotic pressure differences between inside and outside), and it has the property of taking up certain molecules and secreting others. Not much more is needed before one can speak of a primitive cell. The membrane has another invaluable advantage: its inner layer acts as an insulator, so an electrical potential can build up between inside and outside, and the trick in the course of cell evolution was to convert the electrical energy stored in the membrane potential into chemical energy. The conversion took place through a number of intermediate stages; electron transport chains developed in which membrane proteins are involved. The end product is the universal currency of every cell: ATP, a molecule in which chemical energy is stored and which can easily be used to drive various biosynthetic reactions. The term electron transport chain indicates that there is a flow of electrons, so enough electrons must be available on the right side of the membrane; but here too bottlenecks soon arise. The next trick consists in making use of sunlight, which is always available in sufficient quantity. It was therefore necessary to convert a flow of photons (= light) into a flow of electrons. That worked; the result was the evolution of photosynthesis, initially very primitive but increasingly efficient.
From then on, the evolution of organisms (at that time still relatively simple, prokaryotic cells) had to take a completely different direction, for photosynthesis developed into the most efficient process of energy production ever invented on earth. However, enormous amounts of waste were created in the process: oxygen collected in the atmosphere and finally reached today's level of about 20%, and biomass was also produced, because all the cells formed have only a limited lifespan. The chemical energy contained in the biomass could be used by other cells; such energy gain (fermentation) is called a heterotrophic way of life. There is good evidence that such cells, which lived in abundance, developed into large amoeba-like forms, from which the eukaryotic cells (i.e. cells with a true nucleus) later emerged. Much more dangerous was the problem of atmospheric oxygen: photosynthetic cells thus became the biggest polluters of the environment of their time. Many cells, many species, must have died out. Only those remained that could protect themselves from oxygen and that ultimately even came to need it. The use of oxygen for the optimal utilization of chemical energy (the creation of the respiratory chain) took place through modification of an already existing photosynthetic electron transport chain. This means that in the course of evolution not everything was reinvented; existing functional units could take on new functions.
This principle also applies to superordinate levels of complexity and system: anaerobic large amoeboid cells ingested prokaryotic, aerobic cells (cells that had developed a respiratory chain) and have lived in symbiosis with them ever since. A large part of the prokaryote's genetic information was lost over time, part was transferred to the host-cell genome, and a remainder stayed in the prokaryotic cell, which degenerated into a cell organelle (here: a mitochondrion). It was important to maintain an intact, enclosing membrane, because only this guaranteed the continued operation of the electron transport chain - and that is what matters most. The new system created by symbiosis is a eukaryotic, aerobically living cell (endosymbiont hypothesis). It should be noted that the term "symbiosis" in classical biology means two organisms (species) coming together for mutual benefit while maintaining their own identities; in the endosymbiosis outlined here, the identity and independence of both partners have been irreversibly lost, and their origin can now be established only from indirect evidence.
Some of the cells created in this way, in a further cycle of endosymbiosis, later took up photosynthesizing prokaryotes and thus developed into plant cells. These, however, faced a new problem: during photosynthesis, small molecules that easily pass through the membrane in both directions (water and carbon dioxide) are converted into somewhat larger ones (sugar molecules), which accumulate in the cell and increase its osmotic pressure considerably, so that sooner or later the cell would burst. To avoid this, the sugar molecules (assimilates, photosynthesis products) can either be polymerized (yielding an osmotically less effective polysaccharide, e.g. starch) or excreted; the latter does occur, but it means an enormous loss of energy for the cell. It is therefore advantageous to polymerize the excreted sugar molecules outside the membrane. The result is the formation of a cell wall that can counteract the internal osmotic pressure of the cell. Here too, use was made of weak interactions: plant cell walls usually consist of the polysaccharide cellulose, whose molecules are held together by hydrogen bonds; these dissolve during the growth of a plant cell, so that further cellulose molecules can be inserted into the wall, and the cell enlarges by elongation growth.
How can energy be saved further? By division of labor and by forgoing services for which genetic information is nevertheless available. Division of labor (specialization) could be achieved through the emergence of multicellular organisms, in which most cells use only a small part of the genetic information actually available. The prerequisites for this are the evolution of interdependencies between cells, of information-transmission systems, of regulatory mechanisms, of mechanisms for coordinating the activities of different cells, and of mechanisms that ensure the "non-use" (repression) of genetic information.
An essential element of regulated systems is negative feedback: one of the system components acts on the activity of an earlier one and inhibits it. The mobile genetic elements already mentioned evidently play a prominent role here. In this context, another finding of molecular biology must be mentioned: almost all eukaryotes have far more DNA in their genome than is required to maintain their existence and their evolutionary potential. Much of the DNA is non-coding; it appears to be needed for regulatory processes. Only a small proportion of the potentially coding DNA is used, and the proportion is even lower in the specialized cells of a multicellular organism. Differentiation processes are characterized by the activation of certain genes and the simultaneous inactivation of most of the others. This alone shows the eminently important role of negative feedback. The excess of genes in the genomes represents an (easily?) reactivatable evolutionary potential, which is certainly one of the causes of the efficient and rapid evolution of eukaryotes (especially of certain groups: flowering plants, mammals).
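The stabilizing effect of negative feedback can be sketched with a minimal model of end-product inhibition (Hill-type repression; the rate constants are invented for illustration):

```python
# End-product inhibition: the product x throttles its own synthesis.
# Production falls as x rises (Hill-type inhibition); degradation is linear.
def step(x, dt=0.01, v_max=1.0, K=1.0, n=2, d=0.5):
    production = v_max / (1 + (x / K) ** n)
    return x + dt * (production - d * x)

def settle(x0, steps=5000):
    x = x0
    for _ in range(steps):
        x = step(x)
    return x

# Very different starting levels converge to the same regulated set point.
low, high = settle(0.0), settle(5.0)
```

Whether synthesis starts from zero or from a large surplus, the feedback drives the product back to the same set point - the kind of robustness a differentiated multicellular organism depends on.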
Evolution presupposes change, but not everything that could be changed is changed. As already noted - and clearly stated by M. EIGEN - the quality "value" plays an eminently important role in evolution. Everything that has proven itself is retained, and mechanisms are developed to maintain values. Changes that reduce value fall victim to selection. This inevitably means that living (= growing, multiplying) systems must become increasingly complex, because new things can only be added to the tried and tested. After the interaction between proteins and nucleic acids had proven itself, it was kept unchanged. After cell metabolism had developed, it remained unchanged. Changes were then only allowed to take place at the level of cell-cell interactions, and as already mentioned, this led to the development of multicellular organisms. We thus obtain systems on different hierarchical levels. However, since there are basically no error-free physical systems (and therefore also no error-free living systems), the evolution of sophisticated control mechanisms was required to prevent a system breakdown when errors occur in one of the subsystems. It has been shown, for example, that changes (point mutations) accumulate over time in all of the proteins investigated without the properties of the proteins being permanently impaired. These findings were used as an argument in favor of non-Darwinian evolution (the neutral theory). However, we know from information theory that redundancy is one of the most powerful mechanisms for correcting errors: noise can be superimposed on a piece of information without endangering its content. The selection value of the mentioned changes in proteins is therefore that they can be tolerated because they are functionally (almost) equivalent. What is selected is the superordinate system (here: the cell or the multicellular organism), whose function is based on the integration of a large number of different genes.
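How redundancy corrects errors can be seen in the simplest code known to information theory, the repetition code (a textbook illustration, not a model of any molecular mechanism):

```python
# Repetition code: each bit is stored three times; a majority vote
# recovers the message even when noise flips individual copies.
def encode(bits, copies=3):
    return [b for b in bits for _ in range(copies)]

def decode(noisy, copies=3):
    out = []
    for i in range(0, len(noisy), copies):
        block = noisy[i:i + copies]
        out.append(1 if sum(block) > copies // 2 else 0)
    return out

message = [1, 0, 1, 1, 0, 0, 1, 0]
coded = encode(message)
coded[0] ^= 1    # noise: flip one copy of the first bit
coded[10] ^= 1   # noise: flip one copy of the fourth bit
recovered = decode(coded)   # the majority vote absorbs the noise
```

Each stored bit tolerates the corruption of one of its three copies; in the same spirit, a protein tolerates many point mutations as long as its function - the information that actually matters - is preserved.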
All system components stand in a (fluid) equilibrium with one another, and this equilibrium can in turn compensate for fluctuations in the performance or properties of individual components.
In the course of the formation (development) of a multicellular organism, numerous interactions take place between the cells involved, which ultimately lead to the establishment of certain patterns (shapes). Pattern formation is species-specific, i.e. it is controlled by the genome of the cells; in addition, it is known that external influences (e.g. light or nutrition) have a lasting effect on the development of an organism and may even cause irreversible developmental damage. H. MEINHARDT and A. GIERER (1974) and H. MEINHARDT (1978) showed that pattern formation is a general biological phenomenon. Usually two to three starting substances (products of biosynthetic chains and thus ultimately products of only a few genes) suffice, given suitable interactions (keywords: activator, inhibitor, effector and receptor, as well as gradient), to generate patterns of the kind encountered in and on all multicellular organisms. The number of interactions can be arbitrarily large, and both environmental and genetic factors are integrated. The interactions can be described by a set of differential equations. The result is models that are by no means as perfect and complex as the body plans of multicellular organisms, but they illustrate how genetic information (in interaction with environmental factors) can be translated into morphological form, and in which direction research must proceed over the next few years to verify these model conceptions. Interestingly, there are similarities between EIGEN's hypercycle model and the MEINHARDT-GIERER theory of pattern formation: one of the required components acts autocatalytically, i.e. it is required for its own synthesis (the nucleic acid in the hypercycle, the activator in pattern formation), and at the same time it stimulates or instructs the formation of a second component (the protein or the inhibitor).
The protein (in the hypercycle) and the inhibitor (in pattern formation) have only one task: influencing nucleic acid synthesis or inhibiting activator synthesis, respectively. In both cases we are dealing with feedback. The difference between the two lies in the fact that the feedback in the hypercycle is positive and thus causes growth, while in pattern formation it is negative and establishes a control loop that makes the regularities of pattern formation (the alternation of activity maxima and minima) comprehensible.
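The alternation of activity maxima and minima described here can be reproduced with a small numerical sketch of an activator-inhibitor system of the Gierer-Meinhardt type. This is my own illustrative construction, not the authors' code; the kinetics follow the scheme the text describes (a self-enhancing activator a, a fast-diffusing inhibitor h that it also produces), but all parameter values are arbitrary choices.

```python
# 1-D activator-inhibitor sketch (Gierer-Meinhardt type):
#   da/dt = a^2/h - a + 0.01 + D_a * laplacian(a)   (autocatalysis, inhibited by h)
#   dh/dt = a^2   - 2h       + D_h * laplacian(h)   (produced by a, decays faster)
# The inhibitor diffuses much faster than the activator (D_h >> D_a),
# which is the condition for a pattern to emerge from a near-uniform start.
import random

def gierer_meinhardt(n=50, steps=5000, dt=0.02, d_a=0.05, d_h=2.0):
    random.seed(0)
    a = [2.0 + 0.02 * random.random() for _ in range(n)]   # activator + noise
    h = [2.0] * n                                          # inhibitor, uniform

    def lap(u, i):  # discrete Laplacian on a ring (periodic boundaries)
        return u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]

    for _ in range(steps):
        new_a = [a[i] + dt * (a[i] ** 2 / h[i] - a[i] + 0.01
                              + d_a * lap(a, i)) for i in range(n)]
        new_h = [h[i] + dt * (a[i] ** 2 - 2 * h[i]
                              + d_h * lap(h, i)) for i in range(n)]
        a, h = new_a, new_h
    return a

activator = gierer_meinhardt()
# The tiny random perturbation self-organizes into alternating
# activity maxima and minima along the ring of cells.
```

Note the two feedbacks the text contrasts: the a²/h term is the positive, growth-causing loop (as in the hypercycle), while the inhibitor supplies the negative loop that spaces the maxima apart.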
Body plans (in the animal and plant kingdoms) are among the best-studied aspects of biology. They too contain conserving elements and are therefore one of the most reliable pillars of the theory of descent. Finally, to return to DARWIN's observations: organisms (individuals) communicate with each other and react to their environment. With increasing complexity, especially of animal organisms, their competence in gaining information about their environment also increased. The emergence and evolution of sensory organs was the result; at the same time, the complexity of the nervous system increased, so that the flood of information from the environment could be processed ever better and led to "functional" behavior. In the simplest case, information is perceived in the form of signals (chemical or physical stimuli) and answered with stereotyped reactions (flight, attack, etc.). In the course of further development, the repertoire of behaviors grew. The storage of acquired information became a prerequisite for learned behavior and, more generally, for learning processes. Humans additionally have the ability to think, to process what they have learned and experienced, to adjust their behavior to future events, and to pass on their knowledge through language, later also through writing and now through electronic media. A prerequisite for the development of this new quality of information processing is an excess of neurons (nerve cells) and neural connections in the brain; it must be far more than is needed to maintain established behaviors and physiological functions. Humans (and probably a number of animal species) not only perceive their environment as the sum of numerous individual stimuli (signals), but can also recognize images, process them as wholes, and link partial images into a new image, i.e. evaluate complex information simultaneously (in fractions of a second).
Image evaluation, and likewise the processing of emotional impressions and feelings, proceeds differently from the comprehension of logical conclusions or the processing of texts, but we do not know in detail how it happens. We do know, however, that neurons are linked into circuits and that an image (a model) is created in the brain.
People can make decisions; they can translate their thoughts into a reality that influences others. Often, however, one remains alone with one's thoughts or deliberately withholds them. The answer "maybe ..." indicates such an evasive reaction. Paradoxically enough, man is capable of logical thinking, but it is often not needed for survival. It is therefore more an expression of quality of life to perceive one's environment as it presents itself, undivided. Obviously, not only is there too much genetic information in each cell; there is also far too much information about our environment. In the course of the evolution of organisms, effective mechanisms have emerged to prevent the expression of superfluous genetic information. That made sense (it also saves energy), and there was strong selection pressure in this direction. Why should it not also make sense for people not to perceive or evaluate every piece of information with which they are, or could be, confronted?
The best model to date for understanding the mechanisms of human thought is the latest generation of computers, including the latest computer languages. They are primarily subject to the laws of logic, but are now so well developed that one speaks of "thinking machines" and artificial intelligence. Despite all the successes achieved, there is still no computer that can read a somewhat untidy handwriting, let alone evaluate an image. Above all: computers are, unlike humans, not individuals, and they never can be. Computer hardware is based on silicon technology, and silicon differs fundamentally from carbon, although it is likewise tetravalent and belongs to the same main group of the periodic table. The silicon atom is larger than the carbon atom, so all silicon-containing structures are larger than the corresponding carbon-containing ones. This leads to physical problems such as heat build-up, but above all to different reaction times for interactions within the system. Silicon chemistry cannot produce the wide range of compounds for which carbon is known. Computer technologists, above all the designers of microchips, are primarily interested in the construction of reproducible, unchangeable structures (circuits), and silicon technology is predestined for this. But in these circuits there are no weak interactions (and thus no weak points) of the kind common between brain cells. Brain cells, like all nerve cells (neurons), are characterized by a complex network of processes. Hardly any cell is identical to another (a horror vision for every computer expert), and via these branches each of them is in contact with numerous (how many?) other cells, with some more intensively, with others less; perhaps this is why some things are easier to learn than others. Weak interactions dissolve again; is that the cause of forgetting, or of avoiding decisions?
None of this is open to a computer: the material properties dictate logically correct circuits, and silicon compounds cannot produce the full spectrum of performance achievable through weak interactions. Thinking thus remains a selective advantage acquired by man, and one reserved for him alone.
DARWIN, C.: On the origin of species by means of natural selection (German edition). Stuttgart: E. Schweizerbart'sche Verlagsbuchhandlung, 1860
DOBZHANSKY, T.: Genetics and the origin of species. New York: Columbia Univ. Press, 1937
EIGEN, M.: Selforganization of matter and the evolution of biological macromolecules. Naturwissenschaften 58, 465 (1971)
EIGEN, M., WINKLER, R.: The game: laws of nature control chance. Munich, Zurich: R. Piper and Co., 1975
EIGEN, M.: Steps towards life. Munich, Zurich: Piper, 1987
FISHER, R. A.: The genetical theory of natural selection. Oxford: Clarendon Press, 1930
FOX, S. W.: A theory of macromolecular and cellular origins. Nature 205, 328 (1965)
HUXLEY, J. S.: Evolution: the modern synthesis. London: G. Allen and Unwin, 1942
MAYR, E.: The growth of biological thought. Cambridge (Mass.): The Belknap Press, 1982
MEINHARDT, H., GIERER, A.: Applications of a theory of biological pattern formation based on lateral inhibition. J. Cell Sci. 15, 321 (1974)
MEINHARDT, H.: Models for the ontogenetic development of higher organisms. Rev. Physiol. Biochem. Pharmacol. 80, 47 (1978)
MILLER, S. L.: Production of some organic compounds under possible primitive earth conditions. J. Am. Chem. Soc. 77, 2351 (1955)
ORGEL, L.: The origins of life: molecules and natural selection. London: Chapman and Hall, 1973
STEBBINS, G. L.: Variation and evolution in plants. New York: Columbia Univ. Press, 1950
WRIGHT, S.: Evolution in Mendelian populations. Genetics 16, 97 (1931)