Marr’s levels of analysis and embodied approaches

Marr described the brain as an information-processing system and argued that it must be understood at three distinct conceptual levels:

1) The computational level: what does the system do? (for example: estimating the location of a sound source)

2) The algorithmic/representational level: how does it do it? (for example: by finding the lag that maximizes the cross-correlation between the two monaural signals; see the sketch after this list)

3) The physical level: how is it physically realized? (for example: with axonal delay lines and coincidence detectors)
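
To make level #2 concrete, here is a minimal sketch of ITD estimation by cross-correlation, in Python with NumPy. The function name, sampling rate, stimulus, and delay are illustrative choices of mine, not a model from the literature:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference (in seconds) as the lag
    that maximizes the cross-correlation of the two monaural signals.
    Positive ITD: the sound reaches the left ear first."""
    corr = np.correlate(right, left, mode="full")
    lags = np.arange(-(len(left) - 1), len(right))  # lags in samples
    return lags[np.argmax(corr)] / fs

# Illustrative usage: a broadband noise burst reaching the left ear
# about 0.3 ms (13 samples at 44.1 kHz) before the right ear.
fs = 44100
rng = np.random.default_rng(0)
noise = rng.standard_normal(2205)     # 50 ms of noise
d = 13
left, right = noise[d:], noise[:-d]   # right ear receives a delayed copy
print(estimate_itd(left, right, fs))  # ≈ 0.000295 s, i.e. ≈ 0.3 ms
```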

This is what Francisco Varela describes as "computational objectivism": the purpose of the computation is to extract information about the world, in an externally defined representation (for example, to extract the interaural time difference between the two monaural signals). Varela describes the opposite view as "neurophysiological subjectivism", according to which perception is a result of neural network dynamics. Neurophysiological subjectivism is problematic because it fails to fully recognize the defining property of living beings: teleonomy.

Jacques Monod (who won the Nobel Prize for his work in molecular biology) articulated this idea in Le Hasard et la Nécessité (Chance and Necessity): by the mechanics of evolution, living beings differ from non-living things (say, a mountain) in that they have a fundamental teleonomic project, which is "invariant reproduction". The achievement of this project relies on specific molecular mechanisms, but it would be a mistake to think that the achievement of the project is a consequence of these mechanisms. Rather, the existence of mechanisms consistent with the project is a consequence of evolutionary pressure selecting those mechanisms: the project defines the mechanisms, not the other way round. This fundamental aspect of life is downplayed in neurophysiological subjectivism.

Thus computational objectivism improves on neurophysiological subjectivism by acknowledging the teleonomic nature of living beings. A critical problem, however, is that the goal (level #1) is defined in terms that are external to the organism; in other words, the question is whether the three levels are really independent. For example, in sound localization, a typical engineering approach is to derive the interaural time difference as a function of source direction, estimate this difference by cross-correlation, and invert the mapping (see the sketch below). This approach fails in practice because the binaural cues depend on the shape of the head (among other things), which varies across individuals. One would then have to specify a mapping that is specific to each individual, and it is not reasonable to think that this might be hard-coded in the brain. This simply means that the algorithmic level (#2) must in fact be defined in relation to the embodiment, which is part of level #3. This is in line with Gibson's ecological approach, in which information about the world is obtained by detecting sensory invariants, a notion that depends on the embodiment. Essentially, this is the idea of the "synchrony receptive field" that I developed in a recent general paper (Brette, PLoS CB 2012), and before that in the context of sound localization (Goodman and Brette, PLoS CB 2010).
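
To make the dependence on embodiment concrete, here is a sketch of that engineering approach under a deliberately simple assumption of mine, the spherical-head (Woodworth) approximation ITD(θ) = (a/c)(θ + sin θ); the head radius a is exactly the kind of parameter that varies across individuals:

```python
import numpy as np

def itd_from_azimuth(theta, head_radius=0.0875, c=343.0):
    """Woodworth spherical-head model: ITD (s) for azimuth theta (rad)."""
    return (head_radius / c) * (theta + np.sin(theta))

def azimuth_from_itd(itd, head_radius=0.0875, c=343.0):
    """Invert the ITD-direction mapping numerically on a grid of azimuths."""
    thetas = np.linspace(-np.pi / 2, np.pi / 2, 1801)
    itds = itd_from_azimuth(thetas, head_radius, c)
    return thetas[np.argmin(np.abs(itds - itd))]

# The same binaural cue maps to different directions for different heads:
itd = 0.0003  # 0.3 ms
print(np.degrees(azimuth_from_itd(itd, head_radius=0.0875)))  # ≈ 35° (adult-sized head)
print(np.degrees(azimuth_from_itd(itd, head_radius=0.07)))    # ≈ 44° (smaller head)
```

The point is not this particular formula, but that any fixed inverse mapping silently assumes one particular body.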

However, this still leaves the computational level (#1) defined in external terms, even though the algorithmic level (#2) is now expressed in more ecological terms (sound location rather than ITD). The sensorimotor approach (and related approaches) closes the loop by proposing that the computational goal is to predict the effects of movements on sensory inputs. This implies the development of an internal representation of space, but space is then a consequence of this goal rather than an ad hoc assumption about the external world.
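
As a toy illustration of this idea (entirely my own simplification, not a model from the sensorimotor literature): an agent that never receives "direction" as an input can still learn to predict how its own head rotations change its binaural signal, and the predictor it learns implicitly encodes spatial structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def itd(theta, a=0.0875, c=343.0):
    """Hidden sensorimotor law (world + embodiment), unknown to the agent."""
    return (a / c) * (theta + np.sin(theta))

# Experience: (current ITD, motor command) -> ITD after a head rotation.
theta = rng.uniform(-1.0, 1.0, 1000)    # hidden source azimuths (rad)
action = rng.uniform(-0.2, 0.2, 1000)   # head rotations (rad)
X = np.column_stack([np.ones_like(theta), itd(theta), action])
y = itd(theta - action)                 # rotating the head by dθ shifts the
                                        # source by -dθ in head-centered terms

# The agent fits a predictor of its future sensory signal; no external
# notion of space appears anywhere in its inputs.
w, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.sqrt(np.mean((X @ w - y) ** 2)))  # small residual: the hidden law
                                           # is smooth over these angles
```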

Thus I propose a redefinition of the three levels of analysis of a perceptual system that is more in line with embodied approaches:

1) Computational level: to predict the sensory consequences of actions (sensorimotor approach) or to identify the laws that govern sensory and sensorimotor signals (ecological approach). Embodiment (previously in level 3) is taken into account in this definition.

2) Algorithmic/representational level: how to identify these laws or predict future sensory inputs? (the kernel in the kernel-envelope theory in robotics)

3) Neurophysiological level (previously the physical level): how are these principles implemented by neurons? (see the sketch after this list)
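
As an illustration of what level #3 means, here is a toy version of the delay-line and coincidence-detector scheme mentioned at the beginning (the Jeffress picture), with purely illustrative parameters: each coincidence detector counts simultaneous spikes from a delayed copy of the left input and the right input, and the delay line that best matches the acoustic delay responds most.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 40000                        # time grid (Hz)
n = 2000                          # 50 ms
true_itd = 8                      # acoustic ITD in samples (0.2 ms)

# Phase-locked input spikes: the right-ear train is a delayed copy
# of the left-ear train (a strong simplification).
left = rng.random(n) < 0.01       # Bernoulli spike train
right = np.roll(left, true_itd)

# One coincidence detector per axonal delay line: it counts spikes
# arriving simultaneously from the delayed left input and the right input.
delays = np.arange(-20, 21)
counts = [np.count_nonzero(np.roll(left, d) & right) for d in delays]
print(delays[np.argmax(counts)])  # the winning delay recovers true_itd (8)
```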

Here I am also postulating that these three levels are largely independent, except that the computational level is now defined in relation to the embodiment. Note that I am not postulating independence as a hypothesis about perception, but rather as a methodological choice.

Update. In a later post about rate vs. timing, I refine this idea by noting that, in a spike-based theory, levels 2 and 3 are in fact not independent, since algorithms are defined at the spike level.

 

