What is computational neuroscience? (XI) Reductionism

Computational neuroscience is a field that seeks a mechanistic understanding of cognition. It has the ambition to explain how cognition arises from the interaction of neurons, to the point that, if the rules that govern the brain were understood in sufficient detail, it should in principle be possible to simulate them on a computer. The field of computational neuroscience is therefore intrinsically reductionist: it is assumed that the whole (how the brain works) can be reduced to the final elements that compose it.
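As a rough illustration of this premise (not a model discussed in this post), the sketch below simulates a small network of leaky integrate-and-fire neurons; the parameters and the random connectivity are illustrative assumptions, not values taken from the text.

```python
# Minimal sketch (illustrative only): a small network of leaky integrate-and-fire
# neurons, simulated from neuron-level rules alone. All parameters and the random
# connectivity are assumptions made for the example.
import numpy as np

rng = np.random.default_rng(0)

N = 100                      # number of neurons
dt = 0.1e-3                  # time step: 0.1 ms
T = 0.5                      # total simulated time: 500 ms
tau = 20e-3                  # membrane time constant: 20 ms
v_rest, v_reset, v_thresh = -70e-3, -70e-3, -50e-3   # volts
w = 0.5e-3                   # synaptic weight: 0.5 mV jump per presynaptic spike

W = (rng.random((N, N)) < 0.1) * w   # random connectivity with 10% probability
v = np.full(N, v_rest)               # membrane potentials
spike_count = 0

for step in range(round(T / dt)):
    drive = 25e-3 + 5e-3 * rng.standard_normal(N)   # noisy suprathreshold input
    v += dt / tau * (v_rest - v + drive)            # leaky integration
    spiked = v >= v_thresh                          # neurons crossing threshold
    v[spiked] = v_reset                             # reset the neurons that spiked
    v += W @ spiked                                 # propagate spikes to targets
    spike_count += int(spiked.sum())

print(f"Total spikes over {T * 1e3:.0f} ms of simulated activity: {spike_count}")
```

The reductionist claim is that, with enough such detail (real ion channels, real connectivity), a simulation of this kind would in principle reproduce how the brain works.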

To be more precise, this view is known as ontological reductionism. A view that is not ontologically reductionist would be, for example, vitalism: the idea that life is due to the existence of a vital force, without which any given set of molecules would not be alive. A similar view is that the mind comes from a non-material soul, which is not scientifically accessible, or at least not describable in terms of the interaction of material elements. One could also imagine that the mind arises from matter, but that there is no final intelligible element: neurons, for example, might be as complex as the whole mind, and smaller elements no more intelligible.

In modern science in general, and in neuroscience in particular, ontological reductionism is fairly consensual, and computational neuroscience relies on this assumption. This is why criticisms of reductionism are sometimes perceived as criticisms of the entire scientific enterprise. This perception is mistaken, because such criticisms are generally aimed not at ontological reductionism but at other forms of reductionism, which are more questionable and controversial.

Methodological reductionism is the idea that the right way, or the only way, to understand the whole is to understand the elements that compose it. It is then assumed that the understanding of the whole (e.g. function) derives from this atomistic knowledge. For example, one would consider that the problem of memory is best addressed by understanding the mechanics of synaptic plasticity, that is, how the activity of neurons changes the synapses between them. In genetics, one may consider that memory is best addressed by understanding which genes are responsible for memory, and how they control the production of the proteins involved in the process. This assumption is less consensual, in computational neuroscience and in science in general, including physics. Historically, it is certainly not true that scientific enquiry in physics started from microscopic laws and proceeded to macroscopic laws: classical mechanics came before quantum mechanics. In addition, macroscopic principles (such as thermodynamics and energy in general) and symmetry principles are widely used in physics in place of microscopic laws, for example to understand why soap bubbles are spherical. However, this is a relatively weak criticism, since one can conceive that macroscopic principles derive from microscopic laws, even if this does not reflect the history of physics.
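To make the soap-bubble example concrete (a standard textbook argument, not spelled out in the original post), the macroscopic explanation minimizes surface energy at fixed enclosed volume, with no reference to molecules:

```latex
% Macroscopic explanation of the spherical bubble (illustrative):
% surface energy is proportional to area, and the isoperimetric
% inequality identifies the area-minimizing closed surface.
\[
E = \gamma A , \qquad A^{3} \ge 36\pi V^{2}
\quad \text{(equality only for the sphere).}
\]
```

Since the surface energy E is proportional to the area A (with surface tension γ), the shape that minimizes energy at fixed volume V is the one that minimizes area, which by the isoperimetric inequality is the sphere; nothing in the argument refers to individual molecules.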

In the life sciences, there are specific reasons to criticize methodological reductionism. The most common criticism in computational neuroscience is that, while function derives from the interaction of neurons, it can also be said that the way neurons interact is indirectly determined by function, since living organisms are adapted to their environment through evolution. Therefore, unlike the objects of physics, living beings are characterized by a circular rather than a causal relationship between microscopic and macroscopic laws. This view underlies “principle-based” or “top-down” approaches in computational neuroscience. Note that this is a criticism of methodological reductionism, but not of ontological reductionism.

There is also a deeper criticism of methodological reductionism, which follows the theme of circularity. It stems from the view that the organization of life is itself circular. This view has been developed by Humberto Maturana and Francisco Varela under the name “autopoiesis”, and by Robert Rosen under the name “M-R systems” (M for metabolism and R for repair). What defines an entity as living, prior to the fact that it may be able to reproduce itself, is the fact that it is able to stay alive. This is such an obvious truth about life that it is easy to forget, yet maintaining one's existence as an energy-consuming organism is not trivial at all. A living entity is therefore viewed as a set of physical processes, in interaction with the environment, that are organized in such a way that they maintain their own existence. It follows that, while a part of a rock is a smaller rock, a part of a living being is generally not a living being. Each component of the living entity exists in relation to the organization that defines the entity as living. For this reason, the organism cannot be fully understood by examining each element of its structure in isolation. This is so because the relationship between structure and organization is not causal but circular, whereas methodological reductionism assumes a causal relationship between the elements of structure and higher-order constructs (“function”). This criticism is deep, because it claims not only that the whole cannot be understood by looking solely at the parts, but also that the parts themselves cannot be fully understood without understanding the whole. That is, to understand what a neuron does, one must understand how it contributes to the organization of the brain (or, more generally, of the living entity).

Finally, there is another type of criticism of reductionism, which has been formulated against attempts to simulate the brain. The criticism is that, even if we did manage to successfully simulate the entire brain, this would not imply that we understand it. In other words, to reproduce is not to understand. Indeed, we can clone an animal, and this fact alone does not give us a deep understanding of the biology of that animal. It could be objected that the cloned animal is never exactly the same animal, but certainly the same could be said of a simulated brain. However, proponents of the view that simulating a brain would necessarily imply understanding it may rather mean that such a simulation requires detailed knowledge of the entire structure of the brain (ion channels in neurons, connections between neurons, etc.), and that by having this detailed knowledge about everything that is in the brain, we would necessarily understand it. This form of reductionism is called epistemic reductionism. It is in a sense the reciprocal of ontological reductionism. According to ontological reductionism, if you claim to have a full mechanistic understanding of the brain, then you should be able to simulate it (given adequate resources). Epistemic reductionism claims that this is not only a necessary but also a sufficient condition: if you are able to simulate the brain, then you fully understand it. This is a much stronger form of reductionism.

Criticisms of reductionism can be summarized by their answers to the question: “Can we (in principle, one day) simulate the brain?”. Critics of ontological reductionism would answer negatively, arguing that there is something critical (e.g., the soul) that cannot be simulated. Critics of epistemic reductionism would answer: yes, but this would not necessarily help us understand the brain. Critics of methodological reductionism would answer: yes, and it would probably require a global understanding of the brain, but this could only be achieved by examining the organism as a system with an organization, rather than as a set of independent elements in interaction.

2 thoughts on “What is computational neuroscience? (XI) Reductionism”

  1. Hi,

    Although I don't want to defend reductionism per se...
    don't you think that the criticisms of methodological reductionism are somewhat void?
    I would reformulate the criticism like this (please correct me if I'm wrong): you can't understand a phenomenon just by looking at its elements (at whatever scale they are defined), but you also have to take into account all the relationships between them.
    This criticism doesn't really apply to science in general, nor to computational neuroscience, since what a model of a phenomenon does is exactly to define a level of description with its elements and to take into account the relationships between them; the compound of elements plus relationships then produces the explanandum. As an example we can think of an attractor model of working memory: the explanandum is the working-memory function of humans or animals (with whatever features one wants to include: number of items that can be stored, forgetting time, etc.); the model presents neurons as its basic elements, but none of them exhibits working-memory capacity; instead, the relationships between them (i.e. the recurrent connections leading to sustained activity in the absence of a stimulus) produce the explanandum (see the sketch after the comments).
    This is a type of scientific explanation that is not subject to the criticism described above (if my understanding of the criticism is accepted).
    What do you think about it?
    Best wishes

    Andrea

  2. Pingback: What does Gödel’s theorem mean? | Romain Brette
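As a rough sketch of the attractor model of working memory mentioned in the first comment above (an illustrative toy model with assumed parameters, not a model from the post): a small population of rate units with uniform recurrent excitation is bistable, so its elevated activity outlives the stimulus and plays the role of the stored item.

```python
# Illustrative toy attractor network: no single unit "contains" the memory;
# the sustained activity after the stimulus is removed comes from the
# recurrent connections. All parameters are assumptions made for the example.
import numpy as np

def f(x):
    """Sigmoidal firing-rate nonlinearity (steep enough for bistability)."""
    return 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.1))

N = 50                                  # number of rate units (assumed)
tau = 10e-3                             # time constant of the rate dynamics (s)
J = np.full((N, N), 1.0 / N)            # uniform recurrent excitatory weights
dt = 1e-3                               # integration step (s)
r = np.zeros(N)                         # firing rates (arbitrary units in [0, 1])

for step in range(1500):
    t = step * dt
    stimulus = 1.0 if 0.5 <= t < 0.7 else 0.0       # transient input pulse
    r += dt / tau * (-r + f(J @ r + stimulus))       # rate dynamics (Euler step)
    if step % 250 == 0:
        print(f"t = {t:.2f} s, stimulus = {stimulus:.0f}, mean rate = {r.mean():.2f}")
```

In this sketch the population rate stays near zero before the stimulus, is driven up by the transient input, and remains high after the input is withdrawn: the persistence is produced by the recurrent relationships between units, which is the point of the comment.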
