Thursday, November 30, 2017

Inclusion of the axis "Perceiving - Judging" in my typology based on Jung's work

Some time ago, I distinguished four types (thinkers, rulers, doers, connoisseurs) based on whether they make decisions according to objective-rational or subjective-emotional criteria and whether they perceive the world in an abstract-theoretical or concrete-practical way. I have now come to the conclusion that it makes sense to add the axis "passive perception versus active judgement" to the model. Each of the four types mentioned above both perceives and judges; some people simply tend more towards passive, others more towards active behaviour.

Incidentally, my model confirms Uwe Rohr's view that the "highly gifted" (by which he meant rationally thinking people) are more altruistic than others: depending on how pronounced their preference for rational judgement is, they judge far more often on the basis of generally binding criteria and think more about what benefits society (or science, or technology, depending on how one looks at it) as a whole than about what is best for themselves.

I would like to characterize each of the eight types that can now be distinguished with a single term:

NTP - Scientist
NTJ - Engineer
NFP - Spiritualist
NFJ - Architect
STP - Inspector
STJ - Craftsman
SFP - Critic
SFJ - Designer

The order of these types reflects how closely I match each of them. Assuming that I have an extremely strong abstract-theoretical disposition, say 95%, and a tendency towards objective-rational judgement, say 70%, while I have only a slight preference of perhaps 55% for passive perception, I am about 37% scientist, 30% engineer, 16% spiritualist and 13% architect; all other types are hardly pronounced in my case.
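
These shares follow directly from multiplying the three axis preferences for each type. A minimal sketch of that calculation, using the example percentages from the text:

```python
# Minimal sketch of the type-share calculation described above: each of the
# eight types' share is the product of its three axis preference strengths.
# The preference values are the example figures given in the text.

p_n = 0.95  # preference for abstract-theoretical (N) over concrete-practical (S)
p_t = 0.70  # preference for objective-rational (T) over subjective-emotional (F)
p_p = 0.55  # preference for passive perception (P) over active judgement (J)

shares = {}
for a, pa in (("N", p_n), ("S", 1 - p_n)):
    for b, pb in (("T", p_t), ("F", 1 - p_t)):
        for c, pc in (("P", p_p), ("J", 1 - p_p)):
            shares[a + b + c] = pa * pb * pc

for t in ("NTP", "NTJ", "NFP", "NFJ"):
    print(t, round(shares[t] * 100))  # NTP 37, NTJ 30, NFP 16, NFJ 13
```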

A typical democoder is probably first and foremost an architect, secondly an engineer, and thirdly a designer. This means that his preference for F over T is weaker than his preference for N over S, while his strongest preference is for J over P. To a lesser degree, a typical democoder is therefore also a craftsman; he has the least in common with an inspector. In any case, this already differs from my own disposition.

Combinations of two adjacent types:
NTP + NTJ = Scientist + Engineer = Concept Theorist
NTP + NFP = Scientist + Spiritualist = Philosopher
NTP + STP = Scientist + Inspector = Professor
NTJ + NFJ = Engineer + Architect = Creator
NTJ + STJ = Engineer + Craftsman = Builder
NFP + NFJ = Spiritualist + Architect = Mystic
NFP + SFP = Spiritualist + Critic = Inquisitor
NFJ + SFJ = Architect + Designer = Artist
STP + STJ = Inspector + Craftsman = Overseer
STP + SFP = Inspector + Critic = Judge
STJ + SFJ = Craftsman + Designer = Artisan
SFP + SFJ = Critic + Designer = Diva

Considering only the top four personality types that apply to me and their neighbours, my calculations for the combinations of two adjacent types, normalized to sum to 100%, lead to the following result: 42% concept theorist, 23% philosopher, 15% creator, 8% mystic, and 12% other combinations (professor, builder, artist).

Wednesday, November 29, 2017

Medical Microbiology - Philosophical Foundations

In medical school one learns things like: gingivitis (gum inflammation) -> suspicion of coagulase-negative streptococci -> treatment: administration of certain antibiotics. All well and good, but as an academic one should be more than a human robot following such a mechanical chain of reasoning.

Many patients go to the doctor because of the symptoms of their disease and expect these symptoms to be treated. Doctors often do exactly that, for example when they prescribe cortisone to inhibit inflammation. In this respect, the administration of antibiotics is already a step forward, because it treats the cause of the disease, namely the bacteria.

The only question is why bacteria are regarded as the cause of disease in the first place. Are the symptoms for which the patient visits the doctor not in fact caused by the immune system, which reacts to a bacterial infection with inflammation?

Well, there must be a reason why the immune system reacts like this. Here we enter philosophical territory that was not covered in medical school. Why is it bad when certain strains of bacteria live in the human body? Is the immune system simply concerned with eliminating cells with foreign DNA? Or do these bacterial strains actually interfere with human metabolism and thus impair its function? These are things that are still largely unexplored! Not all bacteria are considered harmful; some even aid digestion and are therefore regarded as commensals. According to the current state of science, they are left alone by the immune system. On the other hand, some bacteria that are attacked by the immune system are known to secrete toxins. However, this has not been established for all bacteria of these species. This is therefore an area of medicine in which much remains unexplored.

Since I do not work as a doctor, one might think I no longer have to deal with medicine. But I do so anyway, and with more joy than ever: now I am no longer forced to memorize every piece of nonsense word for word, as was the case during my studies, but can devote myself to things more "in my own way": understanding, connections, philosophical foundations, implications, applications, and so on. That is good news.

If I were an examiner in medical microbiology, one question I would ask would be: "Why stain bacteria with the Gram stain? For what reasons could such a staining be useful for classification?" Anyone who knows me is aware that I would not expect textbook knowledge as an answer to this question, but independent thinking. One possible answer could be: "Because it is apparently a staining that is effective in about half of the bacteria and not in the other half, and whether it works or not does not depend significantly on obvious criteria such as the shape of the cells."

If one does not pursue a concrete career goal with one's medical studies, but regards them purely as education, they amount in principle to a high-level biology degree with a focus on the human being and his diseases. Not bad either.

Monday, November 27, 2017

Identifying Constraints that Govern Cell Behavior

Covert et al. (2003): Identifying Constraints that Govern Cell Behavior: A Key to Converting Conceptual to Computational Models in Biology?

Cells "must abide" by three types of constraints: "environmental", "physicochemical" and "self-imposed" ones. Dealing with these constraints "has been helpful in converting conceptual models to computational models in biology", as the authors write in the abstract, and these models ought to be further refined.

What do the authors mean when they speak of conceptual models on the one hand and computational models on the other? Quote:
Conceptual models describe a system in qualitative terms, whereas computational models can quantitatively simulate systemic properties to analyze, interpret, and predict cell behavior.
The reconstruction of "fairly complicated conceptual models of metabolic, regulatory, and signaling networks" has culminated "in the development of databases such as KEGG and MetaCyc", and the challenge now is to translate these models into "genome-scale computational models". According to the authors, the constraint-based approach may help achieve this goal:
In the constraint-based approach to analyzing metabolic networks, all possible behaviors of a system (e.g., flux distributions through the metabolic network) are considered[.] [...] By successively imposing constraints on conceptual models [...] the allowable range for each flux in the network is reduced dramatically. The problem of modeling complex biological systems shifts from experimental determination of kinetic and other fundamental parameters — as mentioned, currently an intractable problem — to continued identification of constraints that allow a more specific description of the system[.]
"Current constraint-based computational models have focused on microbial organisms" and are "at the genome-scale", which means they have "focused primarily on metabolism and associated transcriptional regulation, but are aimed at a complete representation of an organism and have already been used to simulate cell behavior under a variety of conditions" and "are instrumental in identifying and characterizing emergent properties of biological networks".

The next three chapters deal with the three types of constraints. The first is about environmental constraints:
External environments impose constraints on cells in terms of nutrients, physical factors, and neighboring influences. [...] Without adequate knowledge of the nutritional content of the external environment, significant constraints must be ignored or grossly approximated, resulting in incorrect or misleading predictions of cell behavior. [...] Physical characteristics of the external environment, such as temperature, pressure, pH, and exposure to light or water, can also limit possible cell behavior and survival. [...] The environmental conditions experienced by a cell generally change over time. [...] To account for such interactions in a model, the cellular community must therefore be accurately represented. [...] The intracellular environment of a cell also imposes constraints on cellular behavior, notably in terms of its internal components and the physical properties of its interior.
The second chapter is about physicochemical constraints:
Cells balance mass and energy, conform to the laws of thermodynamics and kinetics, and operate under limited enzyme turnover rates and activity of gene products. Physicochemical constraints are generally considered to be 'hard' constraints and are thought to remain unchanged. [...] Mass balance of reactions also imposes stoichiometric constraints on the network. [...] The requirement of mass balance exerts such a strong constraint on metabolic network function that flux balance analysis requires virtually only these constraints, with only a handful of strain-specific parameters, for detailed qualitative simulations. [...] The maximum throughput or enzyme capacity of biochemical reactions can also force the cell to exhibit more limited behaviors than otherwise. [...] The balance of osmotic pressure and maintenance of electroneutrality also impose constraints on cells.
The third chapter, finally, deals with self-imposed constraints:
Self-imposed constraints are different from other constraints because they respond to — and often change — internal or external environments. Unlike physicochemical constraints, they are time-dependent. Such adaptive constraints may entail regulation in the short term and evolution over longer time scales.
Moreover, this chapter lists two particular types of self-imposed constraints, namely "evolutionary constraints" and "regulatory constraints".

All of these constraints have been successfully applied in genome-scale models:
As mentioned earlier, constraint-based approaches have enabled the development of genome-scale models of microorganisms. Thus far, the constraints that have been incorporated into genome-scale simulative models of metabolism, such as those that exist for E. coli, H. influenzae, H. pylori, and S. cerevisiae, have been stoichiometric, thermodynamic, enzyme capacity, and energy balance constraints. Transcriptional regulatory constraints have also recently been added to enable combined simulation of regulatory and metabolic networks.
How is this done in practice? The authors write:
A useful mathematical representation of all possible cell behaviors is one established geometrically as a solution space, which is effectively capped and reduced as constraints are incorporated. We are then left with a smaller solution space having general properties that can be studied, or in which certain points (i.e., cellular behaviors) may be examined in more detail.
Variants of this are "pathway analysis", "flux balance analysis", "energy balance analysis" and "regulatory flux balance analysis". The authors explain all of these variants, especially the last one.
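
The flux balance analysis mentioned among these variants can be sketched in a few lines: stoichiometric mass-balance constraints (S @ v = 0) plus capacity bounds carve out the solution space of allowable flux distributions, and a linear objective selects one behavior from it. The three-reaction network and all numbers below are invented for illustration; this is not a model from the paper.

```python
# Toy flux balance analysis: mass balance (S @ v = 0) and capacity bounds
# reduce the space of possible flux distributions, then a linear program
# picks an optimal point from the remaining solution space.
import numpy as np
from scipy.optimize import linprog

# Reactions (columns): v0: uptake -> A, v1: A -> B, v2: B -> export
# Metabolites (rows): A, B. Entry S[i, j] is the stoichiometric coefficient.
S = np.array([
    [1.0, -1.0,  0.0],   # A: produced by v0, consumed by v1
    [0.0,  1.0, -1.0],   # B: produced by v1, consumed by v2
])

bounds = [(0, 10), (0, 5), (0, None)]   # enzyme-capacity limits per flux
c = [0, 0, -1]                          # linprog minimizes, so maximize v2

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # the capacity limit on v1 caps the whole pathway at 5
```

Note how the "hard" physicochemical constraint (mass balance) and the enzyme-capacity constraint together already determine the behavior here, without any kinetic parameters.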

The appendix provides a couple of equations representing "certain physicochemical constraints in biology".

Saturday, November 25, 2017

The Inappropriately Excluded

Some time ago, Michael Ferguson published a very good essay about the problems of those highly gifted people who have never been given a chance. I am one of these people insofar as I work in the private sector, although I was actually aiming for a university career. It should be said that I decided at the age of five that I wanted to become a university professor. That was my main motivation for trying (with great success) to do well at school, because I assumed that a university professor had to have a very good general education. The fact that I increasingly felt "fooled" as an adult and resigned myself in a way is not because I had been unaware from the outset that connections and sympathy play a role in addition to grades. It is because I had expected that the professors currently at the helm would tick similarly to me and would therefore find me likeable - which in reality is not the case! It is really as Ferguson wrote (and Uwe Rohr said much the same thing again and again). Even if I remain quite modest and ignore those intelligence tests that have given me top scores of 160 or even 172, it can be assumed that my logical-analytical cognitive level lies at least in the IQ range between 140 and 150 (99.9th percentile) - a number of serious tests have confirmed this sufficiently. The professors at the universities and I are not strangers to each other because I cannot keep up with them intellectually; it is the other way around. That may sound arrogant, but it is true. For example, I had always believed that intellectuals tend towards an atheistic, or at least religiously critical, attitude. How astonished I was when I realized how influential Roman Catholic and other religious student fraternities still are when it comes to filling positions at some universities.

I can show that most of my fellow students who successfully completed their medical studies and received a position at the Medical University of Vienna were members of the Austrian Cartellverband or another organization close to the Christian Democratic Party. This accumulation is not purely random, but statistically significant. At the Vienna University of Technology, where the situation is somewhat different and many professors are more likely to be close to the Social Democratic Party or the Greens, I have nevertheless gained the impression that some professors deliberately keep their distance from me because they do not want me to see how little expertise they actually have outside their own special field.

I therefore actually belong to the social group that has so far been denied an outstanding position, even though its members would be more than suited to it, both intellectually and morally. Michael Ferguson is not telling fairy tales; such people really exist. It is a pity that Uwe Rohr passed away before he found the time to write his planned book entitled "Are there people more talented than Nobel Prize winners?"

One might think it would make sense for this group of people to organize itself. Theoretically, the Austrian High Intelligence Association would exist for this purpose. In reality, however, it is unfortunately more than unsuitable for improving the situation of the "inappropriately excluded" - on the contrary, membership in this association does more harm than good because, as I have already written, the people currently at its helm are unfamiliar with the problems described above, never having come anywhere near a university career. They have completely different worries and, by their own account, would even prefer it if there were no academics left in the association at all.

This, of course, allows those who deliberately disadvantage people like me, for fear that we might endanger their own careers, to laugh up their sleeves.

Friday, November 24, 2017

The Importance of the Demoscene

Some time ago, someone on Wikipedia asked whether the demoscene as a whole was even relevant for Wikipedia. The question was raised in the course of a discussion about whether certain Wikipedia entries were justified: the argument that the things and persons concerned were of great importance to the demoscene would be meaningless if the Wikipedia community came to the conclusion that the demoscene as a whole was not relevant to Wikipedia.

To this I must state clearly that the demoscene was important for the development of computer graphics insofar as, until 1992, scene demos were clearly technically superior to commercial computer games. Many effects that had previously only been described in theoretical papers, and of which at best prototypes existed on mainframe systems, were implemented by demosceners for the first time on home computers. Only the release of the computer game Wolfenstein 3D changed this situation, because the technique used there for rendering three-dimensional worlds had not previously been implemented by demosceners.

Given that the average age of active demosceners was usually around 18, it was the birth cohorts up to 1974 who actually made history. For younger demosceners, on the other hand, developing demos was more of a hobby, although there were still technical challenges to master - but those competitions were held within the scene, not against the games industry.

In my opinion, however, the work of younger sceners is also worth mentioning - for example "The Product" by Farbrausch, which in the year 2000 was the first 64k intro to use procedural graphics, a revolution in this genre: with this technique it was possible to pack considerably larger animations into 64 kilobytes than ever before. This, too, was a technical innovation that had no counterpart in the games industry at the time.

Removing the demoscene from Wikipedia would in my opinion be an act of incredible ignorance.

Why intelligent people tend to piss off others

Austria may not be as liberal as I would like it to be. But it is, after all, a European country. Here, one still has the best chance of saying what one thinks without fearing serious consequences.

If someone is really intelligent, it means that he or she is able to develop their own lines of thought instead of merely parroting what they have been told. Precisely this ability is a thorn in the side of many of the powerful. In school I always had the problem of having to assess which of my ideas were politically correct or socially acceptable enough to express in class or write down in an essay. That kind of thing is inhibiting.

If an intelligent person restricts himself to expressing only what he has learned from his parents or teachers to regard as acceptable, he will sooner or later become unhappy.

A truly intelligent person will therefore always piss off others when he fulfils his potential. That is his fate.

Computation and Computational Thinking

Aho (2012): Computation and Computational Thinking

The motivation for this paper is the "confusion" that is generated by "[u]sing the term 'computation' without qualification", which is why the author suggests "to use the term in conjunction with a well-defined model of computation whose semantics is clear and which matches the problem being investigated".
We consider computational thinking to be the thought processes involved in formulating problems so their solutions can be represented as computational steps and algorithms. An important part of this process is finding appropriate models of computation with which to formulate the problem and derive its solutions. A familiar example would be the use of finite automata to solve string pattern matching problems. [...] However, as the computer systems we wish to build become more complex and as we apply computer science abstractions to new problem domains, we discover that we do not always have the appropriate models to devise solutions. In these cases, computational thinking becomes a research activity that includes inventing appropriate new models of computation.
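The string-matching example mentioned in the quote can be made concrete with a small sketch. The following snippet is my own illustration, not taken from the paper: it builds the classic pattern-matching DFA whose state is the length of the longest prefix of the pattern matching a suffix of the text read so far.

```python
def build_dfa(pattern, alphabet):
    """State = length of the longest prefix of `pattern` that is a
    suffix of the text read so far; state len(pattern) means a match."""
    m = len(pattern)
    dfa = [{} for _ in range(m + 1)]
    for state in range(m + 1):
        for ch in alphabet:
            s = pattern[:state] + ch          # text suffix after reading ch
            k = min(len(s), m)
            while k > 0 and pattern[:k] != s[len(s) - k:]:
                k -= 1                        # fall back to a shorter prefix
            dfa[state][ch] = k
    return dfa

def matches(text, pattern):
    """Run the DFA over `text`; return 0-based match start positions."""
    dfa = build_dfa(pattern, set(text) | set(pattern))
    state, hits = 0, []
    for i, ch in enumerate(text):
        state = dfa[state][ch]
        if state == len(pattern):
            hits.append(i - len(pattern) + 1)
    return hits

print(matches("abababb", "abab"))  # [0, 2]
```

Once the automaton is built, matching is a single linear scan over the text - exactly the kind of clean fit between problem and model of computation that Aho has in mind.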
This is related to systems biology because:
Corrado Priami and his colleagues at the Centre for Computational and Systems Biology in Trento, Italy have been using process calculi as a model of computation to create programming languages to simulate biological processes. Priami states “the basic feature of computational thinking is abstraction of reality in such a way that the neglected details in the model make it executable by a machine.”
Also, the author writes:
[T]here is increasing interest in applying computation to studying virtually all areas of human endeavor. One fascinating example is simulating the highly parallel biological processes found in human cells and organs for the purposes of understanding disease and drug design. Good computational models for biological processes are still in their infancy. And it is not clear we will ever be able to find a computational model for the human brain that would account for emergent phenomena such as consciousness or intelligence.
A chapter on the "theory of computation", which is "one of the core areas of computer science" and "explores the fundamental capabilities and limitations of models of computation", follows.
A model of computation is a mathematical abstraction of a computing system. The most important model of sequential computation studied in computer science is the Turing machine[.]
The author then explains in detail how a Turing machine works and justifies this with the following words:
The reason we went through this explanation is to point out how much detail is involved in precisely defining the term computation for the Turing machine, one of the simplest models of computation. It is not surprising, then, as we move to more complex models, the amount of effort needed to precisely formulate computation in terms of those models grows substantially.
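To get a feeling for how much machinery even the "simplest" model involves, here is a minimal Turing machine simulator of my own (not from the paper); the state names and the trivial bit-flipping machine are invented purely for illustration.

```python
def run_tm(transitions, tape, start="q0", accept="qa", blank="_"):
    """Run a single-tape Turing machine until it reaches `accept`.
    `transitions` maps (state, read_symbol) to
    (next_state, write_symbol, head_move) with head_move in {-1, +1}."""
    cells = dict(enumerate(tape))
    state, head = start, 0
    while state != accept:
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A deliberately trivial machine that inverts a binary string:
flip = {
    ("q0", "0"): ("q0", "1", +1),   # flip the bit, move right
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("qa", "_", -1),   # blank reached: halt and accept
}
print(run_tm(flip, "1010"))  # 0101
```

Even this toy already needs an explicit tape, head, state set and transition function - which underlines the author's point about the cost of precision in more complex models.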
Next, the author writes about "reactive systems". A reactive system is a system "that maintains an ongoing interaction with its environment", such as an operating system, an embedded system - or a biological system:
Perhaps the most intriguing examples of reactive distributed computing systems are biological systems such as cells and organisms. We could even consider the human brain to be a biological computing system. Formulation of appropriate models of computation for understanding biological processes is a formidable scientific challenge in the intersection of biology and computer science.
Without providing really concrete examples, the author further states that "[i]n addition to aiding education and understanding, there are many practical benefits to having appropriate models of computation for the systems we are trying to build", and concludes:
Useful models of computation for solving problems arising in sequential computation can range from simple finite-state machines to Turing-complete models such as random access machines. Useful models of concurrent computation for solving problems arising in the design and analysis of complex distributed systems are still a subject of current research.
A somewhat vague article that leaves many open questions.

Computational and evolutionary aspects of language

Nowak et al. (2002): Computational and evolutionary aspects of language

This paper starts with a highly interesting abstract, which makes the reader curious about what is to come:
Language is our legacy. It is the main evolutionary contribution of humans, and perhaps the most interesting trait that has emerged in the past 500 million years. Understanding how darwinian evolution gives rise to human language requires the integration of formal language theory, learning theory and evolutionary dynamics. Formal language theory provides a mathematical description of language and grammar.
According to the authors, the genetic code is a "generative system", and until "very recently" it was the only one; that changed when human language emerged.
It enables us to transfer unlimited non-genetic information among individuals, and it gives rise to cultural evolution.
What is the aim of this paper? The authors list several ones:
Currently there are many efforts to bring linguistic inquiry into contact with several areas of biology including evolution, genetics, neurobiology and animal behaviour. The aim of this Review is to formulate a synthesis of formal language theory, learning theory and evolutionary dynamics in a manner that is useful for people from various disciplines. We will address the following questions: What is language? What is grammar? What is learning? How does a child learn language? What is the difference between learning language and learning other generative systems? In what sense is there a logical necessity for genetically determined components of human language, such as ‘universal grammar’? Finally, we will discuss how formal language theory and learning theory can be extended to study language as a biological phenomenon, as a product of evolution.
The authors start off by writing about formal language theory, stating that there is a "fundamental aspect of human language that makes it amenable to formal analysis: linguistic structures consist of smaller units that are grouped together according to certain rules" and that "[i]ndividual languages have specific rules". What follows is the classical formal language theory as taught at universities (e.g. in courses on "Theoretical Computer Science"), including the Chomsky hierarchy. You may also read an introductory article which I once wrote about this topic.
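A tiny worked example of the Chomsky hierarchy (my own illustration, not from the paper): the language a^n b^n is generated by the context-free grammar S -> aSb | epsilon, but no finite automaton can recognize it, because that would require unbounded counting.

```python
def derive(n):
    """Apply the context-free rule S -> aSb exactly n times,
    then erase S (the rule S -> epsilon)."""
    s = "S"
    for _ in range(n):
        s = s.replace("S", "aSb")
    return s.replace("S", "")

def in_anbn(w):
    """Recognizer for { a^n b^n }: needs counting, so it sits strictly
    above the regular languages in the Chomsky hierarchy."""
    n = len(w) // 2
    return len(w) == 2 * n and w == "a" * n + "b" * n

print(derive(3))  # aaabbb
```

The derivation function mirrors how a grammar generates strings; the recognizer shows the membership side of the same language.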

Afterwards, a chapter on learning theory follows, in which the authors state:
Learning is inductive inference. The learner is presented with data and has to infer the rules that generate these data. [...] Neural networks are an important tool for modelling the neural mechanisms of language acquisition. The results of learning theory also apply to neural networks: no neural network can learn an unrestricted set of languages.
Then, the topic is evolutionary language theory. Among other things, the authors write in this chapter:
The central question of the origin of human language is which genetic modifications led to changes in brain structures that were decisive for human language. Given the enormous complexity of this trait, we should expect several incremental steps guided by natural selection. In this process, evolution will have reused cognitive features that evolved long ago and for other purposes. Understanding language evolution requires a theoretical framework explaining how darwinian dynamics lead to fundamental properties of human language such as arbitrary signs, lexicons, syntax and grammar.
These basic statements are followed by a couple of mathematical formulae which are supposed to illustrate evolutionary dynamics.

The article also contains an excursion on statistical learning theory.

In the conclusions chapter, the authors end the article with the following words:
The study of language as a biological phenomenon will bring together people from many disciplines including linguistics, cognitive science, psychology, genetics, animal behaviour, evolutionary biology, neurobiology and computer science. Fortunately we have language to talk to each other.

Computational approaches to cellular rhythms

Goldbeter (2002): Computational approaches to cellular rhythms

The author suggests that "mathematical models and numerical simulations are needed to fully grasp the molecular mechanisms and functions of biological rhythms" because of "the large number of variables involved and of the complexity of feedback processes that generate oscillations". Also, he writes that models are "necessary to comprehend the transition from simple to complex oscillatory behaviour and to delineate the conditions under which they arise".

Historically, "theoretical models for biological rhythms were first used in ecology to study the oscillations resulting from interactions between populations of predators and prey", and "[n]eural rhythms represent another field where such models were used at an early stage: the formalism developed by Hodgkin and Huxley still forms the core of most models for oscillations of the membrane potential in nerve and cardiac cells".

This review focuses on "oscillations of intracellular calcium, pulsatile signalling in intercellular communication, and circadian rhythms", and, in addition, the author describes "how computational biology can help in understanding the transition from simple periodic behaviour to complex oscillations including bursting and chaos".

The author starts his paper by explaining the phenomena of steady state and limit cycle:
In the course of time, open systems that exchange matter and energy with their environment generally reach a stable steady state. However, as shown by Glansdorff and Prigogine, once the system operates sufficiently far from equilibrium and when its kinetics acquire a nonlinear nature, the steady state may become unstable. Feedback processes and cooperativity are two main sources of nonlinearity that favour the occurrence of instabilities in biological systems. When the steady state becomes unstable, the system moves away from it, often bursting into sustained oscillations around the unstable steady state. In the phase space defined by the system’s variables (for example, the concentrations of the biochemical species that are involved in the oscillatory mechanism), sustained oscillations correspond to the evolution towards a closed curve — the limit cycle. [...] Limit-cycle oscillations thus represent an example of non-equilibrium self-organization and can therefore be viewed as temporal dissipative structures. The oscillations are characterized by their amplitude and by their period.
It is also possible that a system has multiple steady states, and the most common case of this is called "bistability".
When spatial inhomogeneities develop, instabilities may lead to the emergence of spatial or spatiotemporal dissipative structures. These can take the form of propagating concentration waves, which are closely related to oscillations.
What is the step by step approach used by computational biologists to describe the molecular mechanism of a biological rhythm? The author lists five steps:
First, the key variables of the phenomenon are identified, together with the nature of their interactions that form the relevant feedback loops. Second, differential equations describing the time evolution of the system are constructed. In spatially homogeneous conditions, these take the form of ordinary differential equations, whereas in the presence of diffusion, partial differential equations are used to describe the system’s spatiotemporal evolution. Third, the steady state(s) admitted by these equations are determined analytically or by numerical integration. The fourth step probes the stability properties of the steady state(s). This is generally done by using linear stability analysis. [...] Using this approach, the fifth step is to determine the domains of occurrence of sustained oscillations in parameter space.
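These five steps can be sketched in a few lines. As a stand-in model I use the Brusselator, a standard textbook oscillator (the review itself discusses other systems); the parameter values below are arbitrary.

```python
import numpy as np

# The Brusselator (an illustrative stand-in, not a model from the review):
#   dx/dt = a - (b + 1) x + x^2 y
#   dy/dt = b x - x^2 y
a, b = 1.0, 3.0

# Step 3: here the steady state is known analytically: x* = a, y* = b / a.
x, y = a, b / a

# Step 4: linear stability analysis via the eigenvalues of the Jacobian
# evaluated at the steady state.
J = np.array([[-(b + 1) + 2 * x * y, x**2],
              [b - 2 * x * y,       -x**2]])
eigvals = np.linalg.eigvals(J)

# An eigenvalue with positive real part means the steady state is
# unstable: the trajectory spirals out onto a limit cycle. Step 5 would
# scan a and b to map the oscillatory domain in parameter space.
print(max(eigvals.real) > 0)  # True for a = 1, b = 3 (b > 1 + a^2)
```

For this model the instability condition b > 1 + a^2 falls out of the trace of the Jacobian, which is exactly the kind of parameter-space boundary the fifth step is after.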
In the next chapters, the author discusses various deterministic models for cellular rhythms. Among many other things, he writes about circadian rhythms in continuous darkness:
Computational biology can provide surprisingly counterintuitive insights. A case in point is the puzzling observation that circadian rhythms in continuous darkness can sometimes be suppressed by a single pulse of light and restored by a second such pulse. Winfree proposed the first theoretical explanation for this long-term suppression. He hypothesized that the limit cycle in each oscillating cell surrounds an unstable steady state. The light pulse would act as a critical perturbation that would bring the clock to the singularity, that is, the steady state. Because the steady state is unstable, each cell would eventually return to the limit cycle, but the population would be spread out over the entire cycle so that the cells would be desynchronized and no global rhythm would be seen. An alternative explanation is based on the coexistence of sustained oscillations with a stable steady state.
Finally, in the start of the conclusion chapter, the author writes:
Given the rapid accumulation of new data on gene, protein and cellular networks, it is increasingly clear that computational biology will be crucial in making sense of the puzzle of cellular regulatory interactions. Models and simulations are particularly valuable for exploring the dynamic phenomena associated with these regulations. Such an approach has long been applied to the study of biological rhythms, from the periodic activity of nerve and cardiac cells to population oscillations in ecology. [...] At the genetic level, models show that regulatory interactions between genes can result in multiple steady states or oscillations.
Moreover, he describes "two main approaches followed in computational biology":
The first is based on minimal models — a complex system is decomposed into simpler modules, each of which can be modelled by simple equations. Once these are understood, they are assembled into increasingly complex networks that can exhibit collective properties not apparent in the modules’ behaviour. The second relies on large-scale models that aim at incorporating from the outset all known details about the variables and processes of interest. This approach may someday lead to the construction of an electronic cell in silico, although that day remains far off. With models as with maps, I believe that an intermediate scale will often prove most fruitful.
In the end, he writes:
Models for cellular rhythms illustrate the roles and advantages of computational biology. First and foremost, modelling takes over when pure intuition reaches its limits. This situation commonly arises when studying cellular processes that involve a large number of variables coupled through multiple regulatory interactions. Here one cannot make reliable predictions on the basis of verbal reasoning. But mathematical models can show the precise parameter ranges that give rise to sustained oscillations. Models also help clarify the molecular mechanisms of these oscillations. Indeed, simulations allow rapid determination of the qualitative and quantitative effects of each parameter, and thereby can help to identify key parameters that have the most profound effect on the system’s dynamics. Testing various models permits swift exploration of different mechanisms over a large range of conditions. One of the main roles of models will be to provide a unified conceptual framework to account for experimental observations and to generate testable predictions.

Biological Networks: The Tinkerer as an Engineer

Alon (2003): Biological Networks: The Tinkerer as an Engineer

This opinionated paper "highlights the surprising discovery of 'good-engineering' principles in biochemical circuitry that evolved by random tinkering". In the introductory paragraph, the author writes:
Francois Jacob pictured evolution as a tinkerer, not an engineer. Engineers and tinkerers arrive at their solutions by very different routes. Rather than planning structures in advance and drawing up blueprints (as an engineer would), evolution as a tinkerer works with odds and ends, assembling interactions until they are good enough to work. It is therefore wondrous that the solutions found by evolution have much in common with good engineering design.
Modeling biological systems as networks (with nodes and arrows) brings two advantages:
First, the network description allows application of tools and concepts developed in fields such as graph theory, physics, and sociology that have dealt with network problems before. Second, biological systems viewed as networks can readily be compared with engineering systems, which are traditionally described by networks such as flow charts and blueprints.
Biological networks share three structural principles with engineered networks: "modularity, robustness to component tolerances, and use of recurring circuit elements". The following paragraphs elaborate on these three principles.

Regarding the first principle, modularity, the author compares protein pathways and complexes with subroutines in software and defines:
A module in a network is a set of nodes that have strong interactions and a common function. A module has defined input nodes and output nodes that control the interactions with the rest of the network. A module also has internal nodes that do not significantly interact with nodes outside the module.
Why is there modularity in biology? The author suggests the following reason:
A clue to the reason that modules evolve in biology can be found in engineering. Modules in engineering convey an advantage in situations where the design specifications change from time to time. New devices or software can be easily constructed from existing, well-tested modules. A nonmodular device, in which every component is optimally linked to every other component, is effectively frozen and cannot evolve to meet new optimization conditions. Similarly, modular biological networks may have an advantage over nonmodular networks in real-life ecologies, which change over time: Modular networks can be readily reconfigured to adapt to new conditions.
Regarding the second principle, robustness, the author writes:
In both engineering and biology, the design must work under all plausible insults and interferences that come with the inherent properties of the components and the environment. [...] The fact that a gene circuit must be robust to such perturbations imposes severe constraints on its design: Only a small percentage of the possible circuits that perform a given function can perform it robustly.
Regarding the third principle, the use of recurring circuit elements, the author writes:
Metabolic networks use regulatory circuits such as feedback inhibition in many different pathways. It is important to stress that the similarity in circuit structure does not necessarily stem from circuit duplication. Evolution, by constant tinkering, appears to converge again and again on these circuit patterns in different nonhomologous systems[.]
Finally, the author poses the question of whether "a complete description of the biological networks of an entire cell [will] ever be available" and provides the following answer:
The task of mapping an unknown network is known as reverse-engineering. [...] Reverse engineering a nonmodular network of a few thousand components and their nonlinear interactions is impossible (exponentially hard with the number of nodes). However, the special features of biological networks discussed here give hope that biological networks are structures that human beings can understand. [...] These concepts, together with the current technological revolution in biology, may eventually allow characterization and understanding of cell-wide networks, with great benefit to medicine. The similarity between the creations of tinkerer and engineer also raises a fundamental scientific challenge: understanding the laws of nature that unite evolved and designed systems.

Computational methods for the prediction of protein interactions

Valencia et al. (2002): Computational methods for the prediction of protein interactions

This paper briefly describes "the five computational techniques available for the prediction of interaction partners": "presence or absence of genes in related species", "conservation of gene neighborhood", "gene fusion events", "similarity of phylogenetic trees (mirrortree)" and "in silico two-hybrid method". The authors "examine their range of applicability" and "analyze new trends in the determination of interacting surfaces on the basis of sequence information". A chapter that compares these methods with each other follows:
Unfortunately, a definitive evaluation of any of these methods cannot yet be undertaken, because the availability of collections of interacting proteins is still highly limited. [...] Complementary to these efforts [to develop databases of protein interactions and to establish standards for the exchange of information between these databases], various data-mining procedures are emerging for the automatic extraction of information about protein interactions from the vast amount of accumulated bibliographic information.
Then, a chapter about the "prediction of the molecular basis of protein interaction" follows:
[A] set of new computational methods can now address the problem of the prediction of interacting surfaces in the absence of complete information about the corresponding structures of the binding proteins. Initial approaches have been based on the observed properties of the statistical composition of interacting surfaces in terms of residue types (polarity, charge, etc.) and on the structure of the surfaces. [...] A second type of method addresses the prediction of interacting residues in the absence of structural information. The first reported application determines the distribution of positions that show family-dependent patterns of conservation in MSAs (‘tree determinants’). [...] A promising alternative to that described above is the use of information about correlated mutations in order to highlight the interaction sites in binding proteins.
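The "mirrortree" idea mentioned above lends itself to a compact sketch. This is my own illustration with made-up toy distances, not code from the paper: two protein families whose inter-species distance matrices correlate strongly have similar tree shapes, which is taken as a hint of co-evolution and hence possible interaction.

```python
import numpy as np

def mirrortree_score(dist_a, dist_b):
    """Correlation between the upper triangles of two inter-species
    distance matrices; high values suggest similar tree shapes and
    hence possible co-evolution of the two protein families."""
    iu = np.triu_indices_from(dist_a, k=1)
    return float(np.corrcoef(dist_a[iu], dist_b[iu])[0, 1])

# Toy distance matrices for three species (hypothetical numbers):
d1 = np.array([[0., 1., 2.],
               [1., 0., 3.],
               [2., 3., 0.]])
print(mirrortree_score(d1, 2 * d1))  # close to 1.0: 'mirrored' trees
```

Real implementations work on distance matrices derived from multiple sequence alignments of orthologues; the correlation step itself is as simple as shown.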
The authors conclude:
The combination of experimental and theoretical data could, for the first time, provide complete information about interaction networks, thereby allowing studies to be undertaken of the distribution and number of interactions, the presence of key nodes in the networks, tolerance to perturbations and differences in network organization from one organism to another.

Applying computational modeling to drug discovery and development

Kumar et al. (2006): Applying computational modeling to drug discovery and development

This paper discusses "pharmaceutically-relevant computational modeling approaches currently used as predictive tools" and provides examples that "demonstrate how companies can employ these computational models to improve the efficiency of transforming targets into therapies".

The authors "have identified three areas where computational modeling has potential to substantially impact efficiency and development":
The first area is cell-signal behavior, where the application of models characterizes how lead compounds affect intracellular signaling. The second area is signal-response behavior, where models predict cellular phenotype from signaling information. The third area is physiology, in which models are used to simulate clinical outcomes. Each class of model can help identify new drug targets.
To demonstrate the interplay between traditional biology and high-throughput informatics, the authors provide the following example:
[T]he construction of a signaling model begins with an assembly of molecular interactions, rate parameters and spatial restrictions. Informatics groups analyze high-throughput datasets (ie. gene–chip arrays, gene sequencing results, mass spectrometry results and yeast two-hybrid results) using methods like clustering or spacing alignments, and integrate results with data from other in-house biological experiments and from literature (obtained by text mining). The data are then further organized into ontologies. A model is constructed from a subset of these data and is then validated using traditional biology experiments. If the model captures experimental trends, it is used to generate predictions or hypotheses that suggest new biological experiments. The results of these experiments either further validate the model or identify novel biology that is then incorporated into the model. This interplay between informatics, modeling and traditional biology enables the focused use of large datasets to solve biologically relevant problems.
The next chapters deal with the three aforementioned areas - cell signaling models, signal-response models and physiological models.

The motivation for dealing with cell signaling models is explained by the authors with the following words:
Defects in signal transduction underlie many diseases that are of interest to pharmaceutical companies. For example, dysregulation of conserved protein tyrosine kinase pathways leads to a variety of cancers. Individual signaling proteins inside the cell are often the target of small-molecule drugs, whereas many antibody drugs target the receptors controlling signaling cascades.
How are such models constructed? The authors explain:
Typically, ordinary differential equations (ODEs) are used to describe mass-action kinetics and system behavior. Experimental measurement of reaction rates, concentrations, molecular interactions and trafficking parameters are essential for the construction of such models. The level of detail necessary varies from system to system, but many signal-transduction pathways can be modeled using a combination of measured values, fitted parameters, and coarse-grained descriptions of interactions.
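A minimal mass-action sketch along these lines (my own toy model; the rate constants are invented, and the real pathway models in the paper are of course far larger): a phosphorylation/dephosphorylation cycle reduced to a single ODE, integrated with the Euler method.

```python
# Hypothetical protein phosphorylation cycle under mass-action kinetics:
#   d[P*]/dt = k_kin * [P] - k_phos * [P*],  with [P] + [P*] = P_total
k_kin, k_phos, p_total = 0.5, 0.2, 1.0   # invented rate constants

def step(p_star, dt=0.01):
    """One explicit Euler step for the phosphorylated fraction."""
    p = p_total - p_star
    return p_star + dt * (k_kin * p - k_phos * p_star)

p_star = 0.0
for _ in range(10000):                   # integrate to t = 100
    p_star = step(p_star)

# Steady state: k_kin * (P_total - P*) = k_phos * P*  =>  P* = 5/7
print(round(p_star, 3))  # 0.714
```

Scaling this pattern up - one equation per species, terms per reaction - is exactly how the ODE models of signalling pathways described in the paper are assembled.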
According to the authors, "[m]odels that describe signaling pathways are important in pharmaceutical research for three main reasons":
(i) they often capture nonintuitive signal behavior and identify novel molecular function; (ii) they allow researchers to experiment in silico across a wide range of conditions (e.g. receptor numbers, ligand concentrations and phosphorylation rates), thus saving experimental resources and identifying important further experiments; and (iii) they serve as a database for much of the known information about a particular pathway.
As examples, the authors cite papers which describe the modeling of the Wnt pathway and the ErbB receptor.

The chapter on signal-response models starts with an interesting remark:
Interestingly, it has been hypothesized that no more than 20 signal transduction cascades control the seemingly endless list of cell behaviors observed in humans.
They further elaborate on this and come to the conclusion:
[T]o correct aberrant cellular behavior with drugs requires quantitative knowledge about multiple signaling proteins (that is, multivariate datasets). Multivariate datasets can then be used to understand cellular decision-making processes in the context of computational models. [...] Whereas ODE-level models are becoming more prevalent for describing signaling pathways, there are very few models that can accurately connect signaling pathways to cellular behavior at this level of mathematical description. The problem, therefore, requires the use of more abstracted signaling models. Abstracted models identify statistical relationships between signals and behavior, which suggest causal signal–behavior relationships that can be further probed using molecular biology or genetic approaches.
As an example, a paper is cited which investigates "the molecular effects of an acute promyelocytic leukemia cell line treated with retinoic acid and arsenic trioxide", trying to answer the question of how "downstream signaling events coordinate a known program of differentiation and apoptosis". Furthermore, the authors mention a "procedure based on linear modeling (partial least squares regression), whereby 8000 intracellular signals were correlated with more than 1000 apoptosis-related cellular responses".
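Since the paper only names the method, here is a rough one-component PLS regression sketch of my own (the toy data are invented and far smaller than the 8000-signal datasets mentioned above).

```python
import numpy as np

def pls1_fit_predict(X, y, X_new):
    """One-component PLS: project the signal matrix X onto the single
    direction of maximal covariance with the response y, then regress
    y on that latent score."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    w = Xc.T @ yc
    w /= np.linalg.norm(w)          # covariance-maximizing weight vector
    t = Xc @ w                      # latent scores of the samples
    b = (t @ yc) / (t @ t)          # regress the response on the scores
    return y_mean + ((X_new - x_mean) @ w) * b

# Toy data: four 'samples' with two measured 'signals' each, where the
# 'response' depends only on the first signal.
X = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
y = 3.0 * X[:, 0]
print(pls1_fit_predict(X, y, np.array([[2., 5.]])))  # [6.]
```

The appeal of PLS in this setting is that it copes with far more measured signals than samples - the usual situation in multivariate signaling datasets - by compressing the signals into a few latent components before regressing.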

In the final chapter on physiology, Noble's model of the human heart is mentioned.

The authors conclude:
Computational models address a key issue in the pharmaceutical industry: prediction. [...] The in silico component in research must still be coupled with hypothesis-driven experimental design and is not a substitute for the more important in cerebro component. [...] We believe that the most successful models will not only provide predictive power but will also be scalable, meaning that models currently appropriate for different phases in the R&D pipeline should be mutually compatible in anticipation of information that will connect disparate R&D stages.

Molecular eco-systems biology: towards an understanding of community function

Raes et al. (2008): Molecular eco-systems biology: towards an understanding of community function

This paper discusses "the necessary data types that are required to unite molecular microbiology and ecology to develop an understanding of community function" as well as "the potential shortcomings of these approaches".

What is a microbial ecosystem? The authors define:
A microbial ecosystem can be defined as a system that consists of all the microorganisms that live in a certain area or niche and that function together in the context of the other biotic (plants and animals) and abiotic (temperature, chemical composition and structure of the surroundings) factors of that niche. Communities range from being simple (for example, one- or two-species-dominated bioreactors and biofilms that are growing on ore-mine effluents or medical implants) to complex (for example, symbiotic human gut flora, plant rhizospheres, soil communities and ocean dwelling or even airborne microorganisms, such as those present in clouds). The complexity of the interactions in ecosystems depends on the number of species and the population structure, variation in food and energy supply and the geography of the habitat.
Why can computational systems biology help us study microbial ecosystems? The authors give the following answer:
Important issues that could be addressed by an ecosystems approach include estimating the relative importance of ecosystem members in ecosystem functioning and productivity, the effect of nutrient availability on species composition or the resilience of the ecosystem to disturbances.
Three aspects are important: "the 'parts list'; the connectivity between the parts; and the placement of connectivity in the context of time and space". About these three aspects, the authors write:
In single-organism systems biology, the parts list is generally established; almost 700 complete bacterial and archaeal genomes are available and some functional knowledge is available for approximately 70–80% of the encoded genes. For several model organisms, large-scale efforts have determined the connectivity among the parts (the physical and genetic interactions between genes). This, together with an ever increasing amount of temporal, spatial and structural data, means that model microorganism systems biology is ready to enter the third phase and progress towards its final goal — the modelling and manipulation of complete organisms.
The paper goes on to list a number of metagenomic approaches for obtaining the parts list, such as environmental shotgun sequencing. Regarding the amount of data this has yielded, the authors write:
Metagenomic sequencing has so far added more than 10 billion bp to sequence databases. The larger projects usually sequence approximately 50–100 Mb per environment, which should provide a firm foundation to start investigating the functioning of the underlying communities. [...] [F]or most metagenomic samples, up to 75% of genes can be functionally characterized using targeted computational methodologies that combine homology and gene neighbourhood, and in simple communities, genes can be assigned to species (because complete genomes can be assembled), which means a parts list — the proteins, their function and their host organism — can be established.
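The combination of homology and gene neighbourhood mentioned in the quote can be illustrated with a toy decision rule. All names and thresholds below are hypothetical illustrations, not the actual methodology of the papers the authors cite: the idea is merely that a strong homology hit is accepted on its own, while a weaker hit is only accepted when the functions of neighbouring genes corroborate it.

```python
# Toy sketch of combining homology and gene-neighbourhood evidence
# to assign functions to metagenomic genes. All thresholds and data
# structures are hypothetical, for illustration only.

def annotate(gene, homology_score, neighbour_functions, function_hint):
    """Assign a function when homology alone is convincing, or when a
    weaker homology hit is supported by the functions of genes in the
    same genomic neighbourhood (operon-like context)."""
    STRONG, WEAK = 0.8, 0.5  # hypothetical score cut-offs
    if homology_score >= STRONG:
        return function_hint
    if homology_score >= WEAK and function_hint in neighbour_functions:
        return function_hint  # neighbourhood corroborates the weak hit
    return "unknown"

# A weak hit rescued by its neighbourhood:
print(annotate("geneA", 0.6, {"ABC transporter", "kinase"}, "kinase"))
# A weak hit with no contextual support stays unannotated:
print(annotate("geneB", 0.6, {"protease"}, "kinase"))
```

The point of the sketch is only the shape of the inference: two independent, individually unreliable sources of evidence are combined to push the annotated fraction of genes upward.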
The second aspect, connectivity, turns out to be far more complex in ecosystems than in single organisms:
In cellular systems, connectivity refers to protein–protein interactions and modifications (such as phosphorylation), substrate and end-product transfer and regulatory interactions. In ecosystems, this concept encompasses an even wider range of interactions at various levels. These include ecological interactions between the carriers of function (organisms), such as competition, predation and structural interactions (such as mat formation).
"Data sources to probe connectivity" include "metabolic cooperation" and "cell-cell signalling - communication and quorum sensing". About each of these two topics the paper features one paragraph. Moreover, there is a chapter about "spatial and temporal variation".

The authors conclude:
Many datasets that will facilitate ecosystems biology are now being gathered. Metagenomics studies are collating the parts lists from which some general ecosystem properties, as well as first insights into metabolic cooperation, can be extracted. Other technologies that will gather additional, complementary data types, such as the environmental counterpart of high-throughput functional genomics (a cornerstone of cellular systems biology), are still in their infancy. However, technologies such as large-scale automated monitoring of chemicals and meta-metabolomics are developing rapidly.

Donnerstag, 23. November 2017

Puppet Dance

The rulers have notions of what their subjects should be like. The subjects must adapt to these ideas, otherwise the rulers will be inclined to let them starve.

In the name of employability, adolescent subjects try to adapt to the actual or supposed demands of working life. For years, for decades. Until they eventually realize that they have wasted their whole lives adapting themselves to the notions of people who themselves do whatever they want and constantly change their minds, people who, in other words, deserve at least a hearty kick in the backside.

Cheated by life - that is how those people feel who come to realize that they are overqualified because they believed they had to learn as much as possible, while employers actually want cheap labour that can do exactly what is needed of it, no more and no less, and demands as little pay as possible.

The main problem is the uncertainty that exists because most prospective employees do not know exactly what their future superiors actually expect of them. Marionettes dancing without being guided by a puppeteer. A self-organizing anthill.

If you don't give a damn about any of this and only do, or leave undone, what pleases you, you are doing exactly the right thing.



Montag, 20. November 2017

Today's students don't have it easy...

The media write today:
The Universities Conference (uniko) is pleading for changes to the law governing university studies. Among other things, the number of examination attempts is to be reduced to a maximum of two, and consequences for years of examination inactivity, up to and including exmatriculation, are to be introduced, said uniko president Oliver Vitouch today at a press conference. [...] According to Vitouch, no other higher-education system in the world offers up to four chances to repeat an examination, lets students go for years without sitting a single examination, or allows them to enrol in any number of degree programmes.
From my point of view, measures that make a situation worse, as these demands would, always raise the question of what the advantages are supposed to be. I cannot see any. They merely put the students under even more pressure and abolish exactly those advantages that the Austrian system currently has over other countries.

In my day, there was pressure from two sides: on the one hand, the politicians wanted as many people as possible to study and complete their degrees; after all, the graduation rate was supposed to rise. On the other hand, the professors always did their best to put as many obstacles as possible in the students' way. One might think the professors had taken a stand against politics - even though they themselves are civil servants.

If our professors really were such great intellectuals, why does one hear and read so little from them, apart from reports like the one quoted above? It seems to me that all they care about is leading a pleasant life at the expense of young people and exercising power over them.

Those who do not sit any examinations for years are usually employed. Why should such a person not have the opportunity to complete his degree at some point after all? Just think of our foreign minister and future chancellor, who is still two examinations short of his degree.

Enrolling in any number of degree programmes is also a good thing: it promotes general education. Only particularly malicious people can have an interest in training nothing but specialists.

If at least the teaching at our universities were ideal, one could talk about demanding more from the students as well. But I would remind you that in the medicine programme, for instance, some compulsory lectures were not held at all, even though the professors were obliged by law (!) to hold them. One had to teach oneself everything from the books. Those were the conditions under which I studied!

No wonder it turns out again and again in my professional life, too, that I profit more from the knowledge I acquired as an autodidact than from what I was taught at university.

Mittwoch, 15. November 2017

Support for the highly gifted and social reality

Together with Uwe Rohr, I published an article on the topic of "Post-school support for the most gifted" in the magazine of the Austrian High-Intelligence Association, issue 372. As I have now discovered to my astonishment, Uwe single-handedly published another article on exactly the same problem in issue 376, which was the first issue after I left the club, so it was not delivered to me by mail.

Uwe was convinced that it would be good for society as a whole if the most gifted people were supported and enabled to gain responsible positions in which they could render services for the common good. The financial crisis, he argued, had clearly shown that the economists at German universities were not the brightest minds of their profession, and that this caused great damage to the German people.

Uwe also believed that the highly gifted were actively disadvantaged, and that the reality was simply that they were not very welcome at universities: they are perceived primarily as competitors who might challenge the established staff for their posts. So reality looks quite different from what Uwe wished for.

I have to say that I myself always believed the university would be the ideal employer for the most gifted. During my studies, however, I got the impression that the university staff had no great interest in recruiting and supporting talented students as employees. Rather, it seemed to me that favoritism reigned: it was not the most gifted who were supported, but those who somehow had a close relationship with the university, or particularly influential and powerful parents. I once researched which of my fellow students had been given a position at the university after their studies, and most of them were people who had been involved during their studies in the Cartellverband or another affiliated organization of the Christian Democratic party. Whenever I meet a university employee, I also look into which school he or she graduated from. Surprisingly, I have not yet come across a single graduate of the Popperschule. The Popperschule is a school for highly gifted students, and its results in Olympiad competitions - whether in maths, computer science, physics, Latin or philosophy - regularly prove that these people can actually do more than just solve intelligence-test tasks. I contacted a former teacher of mine who now works at the Popperschule. He said laconically that "the social side" also matters - meaning which family you come from, whom you count among your acquaintances, which party card you hold, and so on. That answer left me feeling I was being made fun of a little.

In the meantime, I have become accustomed to the fact that in our society there is no longer any solidarity with society as a whole; everyone cares only about his own interests and those of his family. Of course, I realize that many opportunities are squandered this way. But most people apparently do not care.


Dienstag, 14. November 2017

Social Democratic Morals and Politics

The morality in which I was brought up could be described as social democratic: in my youth I felt obliged to achieve maximum performance throughout my life, for the good of the Republic. Only as an adult did I realize that life is a struggle and that one has to look after oneself first and foremost. In my naivety I considered the existence of the individual to be secure. But reality is different. If I now follow the "liberal" morality, "everyone is the smith of his own fortune", I feel much freer, because I know that I harm only myself if I do not fulfil my duties properly. And this corresponds far better to reality.

Social democracy may have been at the helm in Austria again for about ten years (in Vienna without interruption for even longer), but in the meantime it seems to have lost much of its ideology. I have always perceived the party first and foremost as an instrument of power. In order to get good marks at school, one was forced to always express oneself in a way compatible with the social democratic world view. That social democracy actively defended the interests of employees or fought the excesses of financial capitalism, however, escaped my notice.

My membership of the Liberals is also due to the fact that I never saw in social democracy any chance of advancing to a responsible position in which one could actually shape things, and my experiences at school had also given me the impression that free thinking was not wanted in the Social Democratic Party and its environment. This unwelcomeness of free thought, which is sometimes even actively suppressed, is what bothers me most about Austria altogether. It seems to me that we are a liberal democracy only according to the Constitution. But at least the Constitution grants certain rights, which some people believe long forgotten.


Freitag, 10. November 2017

How people deal with unwelcome high-flyers

1. Denial: One claims that these people are not highly gifted at all, but merely "nerds", or that they had particularly ambitious parents, that they are just "diligent memorizers", that they have had a lot of luck in life, and so on. All things one gets to hear very often!

2. Playing down: Eventually one admits: okay, these people may be gifted. But they are not as gifted as they think they are. They are at most highly gifted to a low degree; by the standards of other highly gifted people, average at best, and so on.

3. Silence: Once one has realized that denial and playing down are no use, because it is obvious that these people really are good, the final solution follows: silencing them. One does not mention their names unless one must - and even then one carefully omits the achievements they have produced, any academic degrees they have earned, and the like. One tries to ensure that these people go unnoticed and do not receive the recognition they deserve.

This, by the way, is an original insight of mine, based on my own experience, and in no way a reproduction of textbook knowledge. The list of these three strategies - denial, playing down, silence - originates with me.


Freitag, 3. November 2017

The importance of epistemology

I stand by this: a well-situated adult who has never engaged with epistemology cannot be very intelligent. For an intelligent person wants to understand the world, and in the course of his analyses he will inevitably ask himself what he can know at all and what he must merely believe.

How did the people in the so-called High-Intelligence Association react when I once presented them with a knowledge quiz on epistemology? Not only did they admit they could not answer a single question (at least they were honest!), they also insulted me, and one person, in her infinite stupidity, even accused me of having created a quiz about medicine and thereby disadvantaged everyone who had not studied medicine (that is, everyone except myself). That this person took questions about epistemology for questions about medicine is as if I had asked questions about botany and she had taken them for questions about automotive engineering. The reasoning "He asks questions I don't understand - he studied medicine - apparently he has asked questions about medicine" betrays enormous simple-mindedness. As if I had no interests besides medicine.

When I tell this story to people who are actually intelligent, rather than having merely achieved a positive result on an intelligence test once in their lives, they cannot stop marveling at how stupid some members of this association are.


Here is the evidence, now what is the hypothesis? The complementary roles of inductive and hypothesis-driven science in the post-genomic era

Kell et al. (2003): Here is the evidence, now what is the hypothesis? The complementary roles of inductive and hypothesis-driven science in the post-genomic era

This paper is about the scientific method and argues "that data- and technology-driven programmes are not alternatives to hypothesis-led studies in scientific knowledge discovery but are complementary and iterative partners with them".
Many fields are data-rich but hypothesis-poor. Here, computational methods of data analysis, which may be automated, provide the means of generating novel hypotheses, especially in the postgenomic era. [...] Our motivation, in part, is to understand the failure of the prevailing scientific practices to have predicted the existence of so many genes (many of them essential) that were uncovered by the systematic genome sequencing programs, and to rehearse the relative roles of inductive expression profiling methods, technology development and scientific hypothesis testing in post-genomic systems biology.
The authors write that classical genetics and classical genomics assumed that "the phenotype is caused by the genotype, not vice versa, although it is possible to infer the genotype from the phenotype". This hypothesis-driven approach failed to find "the approximately 40% of the genes that were uncovered, even in well-worked model organisms, after whole-genome sequencing methods were applied", and the authors think "that the main reason is that classical molecular genetics was both reductionist and qualitative".
At least two strategies for understanding complex systems can be envisaged. The reductionist view would have it that if we can break the system into its component parts and understand them and their interactions in vitro, then we can reconstruct the system physically or intellectually. This might be seen as a ‘bottom-up’ approach. The holistic approach takes the opposite view, that the complexities and interactions in the intact system mean that we must study the system as a whole. Although these ideas are far from new, such strategies are nowadays often referred to as ‘systems biology’. The molecular biology agenda was explicitly reductionist. The other chief attribute of the molecular biology of the last 50 years is that it was largely qualitative. The aim was to make statements that were either true or false.
The authors state clearly that much of biology is not hypothesis-driven science:
[E]ngineering strategies and (by extension) Systems Biology do not represent hypothesis-driven science. [...] [W]hat is meant by Professor Allen is that there is no specific hypothesis, as clearly one can always cast the hypothesis in terms of a view (‘hypothesis’) that generating such data from a specific set of samples will at least be of value. Thus, throughout, we use ‘hypothesis’ to mean a specific proposition about the behaviour of a (biological or other) system, based on a logical reasoning that leads to an experimentally verifiable prediction that is either confirmed to be consistent with it or otherwise. [...] [Epidemiology and] almost all kinds of data mining equivalently search for patterns, and generalise rules as inductive inferences from associations or patterns that occur regularly. Indeed, data mining is practically synonymous with ‘knowledge discovery’ in databases. To this extent, a significant part of the scientific discovery process involves establishing regularities of this type. [...] In biological chemistry, the development of methods for sequencing proteins and nucleic acids by Sanger or of the polymerase chain reaction by Mullis and of soft-ionisation mass spectrometric methods are three obvious examples [of hypothesis-free science that led to a Nobel Prize winning discovery]. [...] A recent UK initiative in ‘Basic Technology’ explicitly recognises that the results of the technology development that it is promoting are not hypothesis-driven, but that excellent hypothesis-driven science could result from it.
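The kind of inductive inference described here, generalising rules from associations or patterns that occur regularly, can be sketched with a minimal co-occurrence miner. This is a hypothetical illustration of the idea, not code from the paper:

```python
from itertools import combinations
from collections import Counter

# Minimal illustration of inductive rule discovery: find pairs of items
# that co-occur in at least `min_support` observations. A hypothetical
# stand-in for the pattern searches the authors describe.

def frequent_pairs(observations, min_support):
    counts = Counter()
    for obs in observations:
        for pair in combinations(sorted(obs), 2):
            counts[pair] += 1
    return {pair: n for pair, n in counts.items() if n >= min_support}

# Each observation could be, say, the set of genes expressed in a sample.
samples = [{"geneA", "geneB", "geneC"},
           {"geneA", "geneB"},
           {"geneA", "geneC"},
           {"geneB", "geneC"}]
print(frequent_pairs(samples, min_support=2))
```

No specific hypothesis goes in; regularities come out, and each surviving pair is itself a candidate hypothesis for subsequent hypothesis-driven testing, which is exactly the complementarity the authors argue for.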
Further, the authors emphasize that "[b]efore the advent of reductionist molecular biology, biology was largely an observational science" and that "much of post-genomic biology is, in this sense, observational in character". They also write that "[m]odern biology rests on three major pillars - the Theory of Evolution by Natural Selection, Mendel’s Laws of Inheritance and the double helical structure of DNA" and examine "how these pillars were built and whether hypothetico-deductive or inductive reasoning was involved".

Regarding the future of biological science, the authors express the following opinion:
Intellectual activity, including that which produces patentable inventions and other outcomes commonly recognised as ‘intellectual property’, can be seen as the navigation of a complex search space or ‘landscape’ in search of ideas or material inventions that are, in a quasi-evolutionary sense, ‘better’ or ‘fitter’ than those pre-existing. The only hypotheses here, then, are that a knowledge of the landscape will help in guiding the search, and that there are tools which can improve the chances of getting to the top of Everest rather than being stuck on Snowdon.
Finally, they mention artificial intelligence projects such as DENDRAL and METADENDRAL which "sought explicitly to enquire as to whether scientific reasoning could be mechanised".

Systems Biology, Proteomics, and the Future of Health Care: Toward Predictive, Preventative, and Personalized Medicine

Weston et al. (2004): Systems Biology, Proteomics, and the Future of Health Care: Toward Predictive, Preventative, and Personalized Medicine

This paper is about "paradigm changes in health care" which are going to happen soon and will lead to a "predictive medicine":
We predict that a paradigm shift in medicine will take place within the next two decades replacing the current approach, which is predominantly reactive, to one that can increasingly predict and prevent cellular dysfunction and disease. Within the next 10-15 years, a predictive medicine will emerge, capable of determining a probabilistic, individualized future health history. [...] [P]redictive medicine will involve analyzing the individual genome for disease-susceptibilities and following pathogenic environmental exposures by multiparameter blood analyses.
These paradigm changes are primarily due to innovations in systems biology. Therefore, at the beginning of their paper, the authors provide a definition of systems biology:
Systems biology is the analysis of the relationships among the elements in a system in response to genetic or environmental perturbations, with the goal of understanding the system or the emergent properties of the system. [...] A biological system may encompass molecules, cells, organs, individuals, or even ecosystems. [...] One of the major challenges of systems biology is to determine the architecture of protein and gene regulatory networks and to understand how their behaviors are integrated to carry out biological functions. [...] [T]o do systems biology, as many levels of information as possible must be gathered and integrated. [...] In summary, systems biology is hypothesis-driven, in that systems approaches always begin with a model (descriptive, graphical or mathematical) and the model is tested with hypotheses that require systems perturbations and the gathering of dynamic global data sets. Different data types are integrated and compared against the model. At each turn of the hypothesis-driven process, the model is reformulated. This process is continued until the experimental data and the model are brought into juxtaposition.
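The iterative cycle described at the end of the quote - perturb, measure, compare against the model, reformulate, repeat until model and data agree - can be sketched as a simple loop. Everything below is a toy illustration with placeholder functions, not the authors' procedure:

```python
# Toy sketch of the iterative model-refinement cycle: predictions are
# compared with measurements for each perturbation, and the model
# parameter is reformulated until it agrees with the data.

def refine(param, perturbations, measure, predict, tol=1e-6, max_rounds=100):
    for _ in range(max_rounds):
        errors = [measure(p) - predict(param, p) for p in perturbations]
        if max(abs(e) for e in errors) <= tol:
            return param  # model and data brought "into juxtaposition"
        # reformulate: nudge the parameter towards the observed responses
        param += 0.5 * sum(e / p for e, p in zip(errors, perturbations)) / len(errors)
    return param

# Toy system: the true response to a perturbation p is 2.0 * p,
# while the model starts from an incorrect parameter of 1.0.
k = refine(1.0, [1.0, 2.0, 3.0], lambda p: 2.0 * p, lambda k, p: k * p)
```

The loop structure, rather than the arithmetic, is the point: each round of the hypothesis-driven process yields a reformulated model, and the process stops only when predictions match the perturbation data.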
What are the effects of systems biology upon clinical medicine going to be? The authors name two things:
First, systems biology will continually improve our capacity to understand and model biological systems on a more global and in-depth scale than ever before. [...] The second major impact of systems biology in medicine will be the continual spawning of new technologies, which will enhance the efficiency, scale and precision with which cellular measurements are made.
As examples of such new technologies, microfluidics and nanotechnology are mentioned.

The importance of systems biology for future healthcare is stressed with the following arguments:
[T]he behaviors of most biological systems, including those affected in cancer, cannot be attributed to a single molecule or pathway, rather they emerge as a result of interactions at multiple levels, and among many cellular components. [...] Understanding the design principles of biomodules and protein and gene regulatory networks during normal physiology and disease will lead to more rationalized and efficacious treatment strategies, as the actual nodal points or direct underlying causes of diseases will be pinpointed.

Two Examples

In the next two chapters, the application of systems biology to two model systems is outlined: galactose utilization in yeast and endomesoderm specification in the sea urchin.

Regarding galactose utilization in yeast, the authors write:
The systems biology approach has provided a wealth of new information even for the relatively simple system whereby yeast utilize galactose as a carbon source - a system that has been intensely studied for decades and which represents one of the best-characterized systems of gene regulation. [...] Until recently, many have regarded galactose utilization as a simple regulatory network. [...] Further studies, however, have established additional regulatory roles for [various] events[.] [...] All of these events take place during galactose induction. Despite these additional insights, however, it was not until the galactose system was interrogated using a large-scale, systems biology approach, that the complexity of this system and its interconnections with other cellular functions became apparent.
What insights has the systems biology study of galactose utilization yielded?
The systems biology study of galactose utilization provided a number of new insights. First, this was the earliest study to report, on a global level, a poor correlation between changes in mRNA levels and changes in protein expression. This suggests that posttranscriptional regulatory mechanisms are important for changing patterns of protein expression. Second, it was demonstrated, unequivocally, that although the galactose pathway itself involves a well-characterized transcriptional network controlling the genes required for galactose utilization, the cellular response to galactose extends well beyond the activation of these genes.
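The "poor correlation between changes in mRNA levels and changes in protein expression" mentioned here can be made concrete with a small Pearson-correlation computation. The fold-change values below are invented toy numbers, not data from the study:

```python
# Sketch of how a poor mRNA/protein correlation would show up:
# compute the Pearson correlation between (invented) log2 fold
# changes of mRNA and protein for a handful of genes.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical log2 fold changes for six genes after induction.
mrna_change    = [2.1, -0.3, 1.5, 0.2, -1.0, 0.8]
protein_change = [0.4,  1.2, -0.2, 1.8, -0.1, 0.3]  # invented values

r = pearson(mrna_change, protein_change)
print(round(r, 3))
```

A correlation near zero for such data is what would suggest that posttranscriptional regulation, not transcription alone, shapes protein expression.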
In the course of these studies, various methods were used, such as "microarray expression analysis, genome-wide binding analysis, the use of search algorithms on a defined list of sequences, and comparative genomics". With the integration of all of these using "computational approaches", "accurate models of gene modules" have been generated "in which the targets of a transcription factor are defined, as are the cis-elements to which these factors bind".

While "[t]he analysis of the galactose utilization system in yeast displays a systems approach to understanding a simple physiological response[,] [t]he studies carried out by Eric Davidson and colleagues, to understand endomesoderm specification in sea urchin larva, demonstrate the power of a systems approach to understanding developmental processes".
Davidson and co-workers have extensively analyzed the regulatory gene network underlying endomesodermal specification in sea urchin embryos. In one approach, they focused on the cis-regulatory system of the developmentally regulated endo16 gene - a marker of endoderm cell fate specification. [...] In addition, Davidson and colleagues constructed a gene regulatory network for endomesodermal development in the larva.
What conclusions could be drawn?
First, there appear to be a variety of subcircuits similar to those found in engineering (feed-forward, feed-backward, positive feedback loops, negative feedback loops, etc.). [...] Second, the network is designed to move development forward inexorably, in keeping with the fact that development is, under most conditions, irreversible. Finally, a careful examination of the network suggests perturbations that may change fundamental emergent properties of the system. Indeed, one such perturbation has been carried out to generate a larva with two guts.
The significance of all of this is that "these model systems are providing fundamental new strategies for thinking about drug and drug target discovery".


Next, the authors talk about proteomics. Their motivation for doing so is as follows:
[P]roteins are the actual effectors driving cell behavior, and they cannot be studied simply by looking at the genes or mRNAs that encode them, thus warranting the establishment of a field, now termed proteomics, devoted entirely to their study.
Regarding the goal of proteomics research, they write:
The goal of proteomics research is to understand the expression and function of proteins on a global level. More than simply cataloguing the proteome - a quantitative assessment of the full complement of proteins within cells - the field of proteomics strives to characterize protein structure and function, protein-protein, protein-nucleic acid, protein-lipid, and enzyme-substrate interactions, post-translational modifications, protein processing and folding, protein activation, cellular and sub-cellular localization, protein turnover and synthesis rates, and even alternative isoforms caused by differential splicing and promoter usage. In addition, the ability to capture and compare all of this information between two cellular states is essential for understanding cellular responses.
Two mass spectrometry-based approaches to "global quantitative protein profiling" are introduced:
The more established and most widespread method uses high-resolution, two-dimensional electrophoresis (2DE) to separate proteins from two different samples in parallel, followed by staining and selection of differentially expressed proteins to be identified by mass spectrometry. [...] A second quantitative approach, which is gaining in popularity, uses stable isotope tags to differentially label proteins from two different complex mixtures. In this method, proteins within a complex mixture are first labeled isotopically then digested to yield labeled peptides. The two differentially labeled peptide mixtures are then combined, peptides separated by multidimensional liquid chromatography (LC) and analyzed by tandem mass spectrometry. Peptides are identified by automated database searches, and relative protein abundances are obtained from the mass spectra.
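The quantification step of the isotope-labeling approach can be sketched as follows. The peak intensities below are invented numbers chosen only to illustrate the idea; real pipelines extract such intensities from mass spectra rather than tidy lists:

```python
# Sketch of relative quantification with stable isotope tags: peptides
# from sample A carry a "light" label, peptides from sample B a "heavy"
# one, and the ratio of their peak intensities estimates relative
# protein abundance. All intensities below are invented.

def protein_ratio(peptide_pairs):
    """Median heavy/light intensity ratio over a protein's peptides."""
    ratios = sorted(heavy / light for light, heavy in peptide_pairs)
    mid = len(ratios) // 2
    if len(ratios) % 2:
        return ratios[mid]
    return (ratios[mid - 1] + ratios[mid]) / 2

# (light, heavy) peak intensities for three peptides of one protein.
pairs = [(1000.0, 2100.0), (800.0, 1500.0), (1200.0, 2500.0)]
print(round(protein_ratio(pairs), 3))  # roughly 2-fold more in sample B
```

Taking the median over several peptides of the same protein makes the estimate robust against a single misidentified or poorly measured peptide.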
The authors further write that "[t]he identification of biomarkers is an area in which proteomics will undoubtedly have a significant impact - a prospect that has not gone unnoticed by the proteomics community".

There are two concerns related to biomarkers:
First, of the biomarkers routinely used to diagnose disease, most are capable of detecting the onset or advanced progression of disease, but have little, if any, predictive power. [...] The second concern with respect to the use of single molecule biomarkers is that it is based on the expectation that an increase in the concentration of a single protein can unambiguously specify disease - a dangerous and unrealistic assumption. Diseases are characterized by heterogeneity between individuals; the same disease can be initiated by numerous factors and can cause a range of molecular changes.
For these reasons, the authors want to replace traditional biomarkers with multiparameter analyses, aided by systems biology:
Just as normal physiology and disease arise from protein and gene regulatory networks, normal and perturbed, and these require analyses of all the elements in the system, diagnostics will also require the analysis of multicomponents to reflect the true complexity of the disease process. Moreover, multiparameter analyses will be able to (1) predict the onset of disease, (2) stratify disease (e.g., prostate cancer is probably three or four diseases and not just a single one), (3) indicate the progression of the disease, (4) follow the course of treatment, and (5) make predictions about the effectiveness of a drug or adverse reactions, etc. By this view, multiparameter analyses of the serum or blood will provide a window into health and disease.
This will eventually lead to a new way of diagnosing diseases, by means of "serum proteome patterns":
The concept behind pattern diagnostics is that the blood plasma proteome reflects tissue and organ pathology, causing patterns of protein changes that have diagnostic potential without even knowing the identities of the individual proteins. Since MS-based approaches provide a pattern of peaks, the idea is that these patterns can discriminate certain diseases.
The authors write about a study that was intended to prove that this principle works:
In the first proof-of-principle study, a new computer-based artificial intelligence algorithm was used to identify patterns among a “training set” of mass spectral data[.] [...] The algorithm generated a proteomic pattern that was then used to identify ovarian cancer in individuals from a second independent group[.] [...] [T]his technique can be used alongside any number of other indicators such as genetic defects, or histopathological findings, to make more accurate diagnoses.
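The training-set/classification idea behind pattern diagnostics can be sketched with a minimal nearest-centroid classifier over spectra represented as peak-intensity vectors. This is my own toy illustration with invented vectors; the actual study used a far more elaborate artificial intelligence algorithm on real mass spectral data:

```python
# Minimal sketch of pattern diagnostics: learn a per-class average
# "spectrum" (vector of peak intensities) from a training set, then
# assign a new spectrum to the closest class centroid. All spectra
# here are invented toy vectors, not real MS data.

def centroid(spectra):
    """Component-wise mean of a list of equal-length intensity vectors."""
    n = len(spectra)
    return [sum(col) / n for col in zip(*spectra)]

def classify(spectrum, centroids):
    """Return the label whose centroid is nearest (Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda label: dist(spectrum, centroids[label]))

# Training set: peak-intensity vectors per diagnosis (toy numbers).
training = {
    "healthy": [[1.0, 0.2, 0.1], [0.9, 0.3, 0.2]],
    "cancer":  [[0.2, 1.1, 0.9], [0.3, 0.9, 1.0]],
}
centroids = {label: centroid(s) for label, s in training.items()}

# An unseen spectrum from an independent test group (invented).
print(classify([0.25, 1.0, 0.95], centroids))  # → cancer
```

Note that the pattern of peaks alone decides the classification here, without any knowledge of which proteins produce the peaks - exactly the point of contention raised below.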
Again, there are some concerns about this technique:
Some researchers contend that [this technique] is not sensitive enough, and captures only high abundance proteins, and therefore is not suitable for measuring true cancer biomarkers. Of equal concern is the reproducibility of the technique. [...] In addition to these concerns, the concept of using a pattern of MS peaks to diagnose disease without knowing the identities of the proteins responsible for those peaks is a foreign one and a major point of contention for many researchers.
The authors conclude that we need "to obtain more data on this approach to evaluate its predictive power".

The next chapter deals with protein chips:
[T]he goal behind protein microarrays is to print thousands of protein-detecting features, for the interrogation of biological samples. An example is antibody arrays (also referred to as protein profiling arrays), in which a host of different antibodies (e.g., monoclonal, polyclonal, antibody fragments) are arrayed to detect their respective antigens from a sample of human blood. [...] [T]he implementation of protein arrays is a much greater challenge than DNA arrays for a number of reasons. Proteins are inherently much more difficult to work with than DNA, their solubility varies widely, they have a broad dynamic range, they are much less stable than DNA, and their structure can be difficult to preserve on a glass slide, but is essential for most assays (unlike DNA, in which only the sequence order needs to be maintained). Finally, there is no technique, analogous to PCR, that exists for amplifying proteins, and thus the starting material is much more of a limiting factor.
Then, reverse phase microarrays are described, which are "particularly useful for profiling the status of cellular signaling molecules, or post-translational modifications, among a cross-section of tissue that includes both normal and cancerous cells".
This method can track all kinds of molecular events and can compare diseased and healthy tissues within the same patient, enabling the development of individualized diagnosis and treatment strategies. The ability to acquire proteomic snapshots of neighboring cell populations, using multiplexed reverse phase microarrays in conjunction with LCM [laser capture microdissection], will have applications in a number of areas beyond the study of tumors. The approach can provide insights into normal physiology and pathology of all tissues, and will be invaluable for characterizing developmental processes and anomalies. It should be emphasized, however, that beyond reverse phase microarrays, the marriage of LCM with any refined proteomics platform offers great promise for extracting information from pure cell populations, in turn decreasing some of the limitations imposed by tissue heterogeneity.
Next, the authors write about emerging trends in proteomics, and among other things, they state:
Advances in quantitative proteomics would clearly enable more in-depth analyses of cellular systems. However, for many cellular events, protein concentrations likely do not change significantly, rather their function is modulated by posttranslational modifications (PTMs). Over 400 PTMs have been described, many with important influences on cell function. Methods of monitoring PTMs are sorely needed in proteomics, but to date, this remains an underdeveloped area.
According to the authors, one of the main goals in proteomics is "characterizing the human plasma proteome" because the blood should "contain information on the physiological state of all tissues in the body". The authors discuss some of the obstacles in this endeavour.

Further, they write about micro- and nanoscale technology:
Developing any technology intended for clinical use will require the miniaturization, integration and automation of the procedures for sample analyses. This in turn will lead to more sensitive and cost-effective analyses. [...] Biological systems are made up of individual molecules operating on a nanoscale, whereas current tools used in medicine are much larger and thus inadequate for fully characterizing cellular function at the molecular level.
As examples of these technologies, quantum dots, microfluidics, microcantilevers and nanowire sensors are provided. The authors conclude this chapter with the following words:
With these devices, one can eventually imagine analyzing 100s, 1000s, or even 10,000s of blood elements. In addition, we predict that individuals will have their genomes sequenced relatively inexpensively within the next 10-15 years, making it possible to provide each individual with a probabilistic future health history. Thus, the predictive medicine will assess the digital information of the genome and the pathological cues of the environment. Another area for which nanotechnology has an application is that of drug delivery systems. It is conceivable that in the future, drugs will be delivered to specific targets in the body via biodegradable devices. Implantable biosensors can also be foreseen, which can monitor sugar levels in the cells of diabetics, and release insulin as needed, resulting in much more precise control of blood sugar levels than is currently attainable in diabetics. Finally, an exciting possibility is the use of microrobots and probes, which can target and destroy tumors.
Then comes a chapter on bioinformatics related to proteomics, in which the database search programs SEQUEST, MASCOT, ProFound, MS-Tag, Sonar and PeptideProphet are mentioned.
These programs are used to determine the amino acid sequence and thus the protein(s) corresponding to a given mass spectrum, but in many cases they generate a large number of incorrect assignments. Improvements to the current capabilities for tandem-MS identification are continually being developed.
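The core idea behind such search programs - matching masses computed from candidate sequences against an observed mass - can be sketched in a few lines. The mini-database of peptides is invented; real engines like SEQUEST or MASCOT score full tandem-MS fragment spectra, not a single peptide mass. The residue masses are standard monoisotopic values:

```python
# Sketch of peptide mass fingerprinting: compute theoretical peptide
# masses from candidate sequences and report those matching an observed
# mass within a tolerance. Real search engines score full tandem-MS
# fragment spectra instead of a single mass.

# Monoisotopic residue masses (Da) for a few amino acids.
RESIDUE = {"G": 57.02146, "A": 71.03711, "S": 87.03203,
           "P": 97.05276, "V": 99.06841, "L": 113.08406, "K": 128.09496}
WATER = 18.01056  # added once per peptide (terminal H and OH)

def peptide_mass(seq):
    """Monoisotopic mass of a peptide sequence."""
    return sum(RESIDUE[aa] for aa in seq) + WATER

def match_mass(observed, candidates, tol=0.01):
    """Candidates whose theoretical mass is within tol of the observed mass."""
    return [p for p in candidates if abs(peptide_mass(p) - observed) <= tol]

candidates = ["GASP", "VLKA", "PSKG", "AVLK"]  # invented mini-database
observed = peptide_mass("VLKA")  # pretend this came from the spectrometer
print(match_mass(observed, candidates))  # → ['VLKA', 'AVLK']
```

VLKA and AVLK share the same composition and therefore the same mass, so both match - a small-scale illustration of why such searches produce ambiguous or incorrect assignments and need further scoring.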
What is also needed, according to the authors, is a unified database format.
Fortunately, the Human Proteome Organization (HUPO), which was formed to coordinate worldwide proteomic efforts, has taken on this challenge and through the Proteomic Standards Initiative (PSI) group, established in 2002, is developing a common data standard which will enable users to retrieve data from different sites and perform comparative analyses of different data sets. [...] In a similar manner, a standard format has been adopted by the Microarray Gene Expression Data Society (MGED) for depositing microarray expression data. As a result, MAGE-ML (MicroArray Gene Expression Markup Language) was designed to describe and communicate information about microarray experiments, incorporating the principles outlined by an earlier standard, MIAME (Minimum Information About a Microarray Experiment). [...] Parenthetically, we are attempting to develop a database at the Institute for Systems Biology (Systems Biology Expression and Management system, or SBEAMS), that will be able to acquire all relevant types of global data sets (DNA, RNA, proteins, interactions, phenotypic data, etc.) and begin to do the integrations that are an essential part of systems biology.
The next step is computational integration:
The goal of cataloguing all of the cellular elements under various conditions and in various organisms is well underway, and becoming increasingly possible as global technologies mature. The next phase is to understand how these elements are coordinated to form functional biological systems. Systems-level integration of data is still in its infancy, but a number of new concepts have emerged. [...] The second benefit of data integration is that it serves to reveal new biological phenomena, which would not be readily apparent from any single analysis. [...] The ultimate goal is to characterize the information flow through protein networks that interconnect the extracellular microenvironment with the control specified by gene regulatory networks which, in turn, activate the peripheral batteries of genes to execute the effector functions of development and physiological responses. To successfully understand the interfacing of these protein and gene regulatory networks will require, ultimately, the integrations of many of the different data types arising from DNA, RNA, protein, metabolites, small molecules, and many different aspects of phenotype.


Finally, the authors summarize the benefits systems biology will bring to the future of medicine:
In conclusion, the emerging fields of systems biology and proteomics offer exciting and promising advances toward predictive, preventative, and personalized medicine. [...] Understanding protein and gene regulatory networks of biological systems will improve drug development efforts and eventually will lead to preventive drugs. [...] Proteomics will play a major role both in developing better multiparameter diagnostics and in the search for new therapeutic targets. [...] Integrating different types of biological information will be critical both for understanding biological systems and for accurately diagnosing and monitoring disease.