Approaches that endorse the causal nature of representation have difficulties in explaining the selectivity that any living organism shows in its behaviour. Not every represented feature in the environment is relevant. The organism’s (construed) goal seems to affect the way it handles incoming information. Worse still, such information seems to be interpreted in relation to the goal and to the organism’s experiences of the conditions for attaining it. This is a long-known problem; in the philosophy of cognitive science it is discussed as the relevance problem. Richard Samuels has presented a recent summary of the issue and of the various attempts at solving it.
Reasoning seems to be the pivotal cognitive process that transforms incoming information into action plans. In order to provide a naturalistic account of cognition, this mental activity should be modelled in terms of some algorithmic procedures. As Fodor suspected in “The mind doesn’t work that way”, and as Samuels concludes in 2010, neither the classical computational theory of cognition nor the connectionist approaches have succeeded. He lists various reasons, but to me the most interesting one concerns the way representations are understood.
Fodor opted for symbolic representations (as the vehicles of the language of thought) very early on, because the resemblance theory of representations was completely inadequate to account for “propositional attitudes”, like beliefs or wishes about some state of affairs. This conception generated “functional role semantics” with all sorts of paradoxes, and Fodor is both honest and sharp-eyed enough to admit the difficulties, which many of his followers (like Susan Schneider and Richard Samuels) have not wanted to do.
There is a marvellous debate between Steven Pinker and Jerry Fodor from 2005 about how to understand the workings of the mind. Fodor’s “The mind doesn’t work that way” was a critique of the classical computational approach, stimulated by Pinker’s book “How the mind works” (1997). It took Pinker five years to defend his evolutionary account of cognitive processes in a paper published in Mind and Language; you can retrieve it at Pinker’s home page. Fodor’s response in the same issue is, again, most exhilarating, but it is sadly locked up in Wiley’s archives, from which you cannot get it for free unless you have institutional access.
First, Fodor shows how the syntactic and “local” meaning of symbolic representations inevitably causes problems. He also shows how cognitive science tries to account for these with massive modularity models and by calling in evolutionary accounts for help (as Pinker does). Fodor’s conclusion is very straightforward: “So how does the mind work? I don’t know. You don’t know. Pinker doesn’t know. And, I rather suspect, such is the current state of the art, that if God were to tell us, we wouldn’t understand him.”
Samuels admits five years later that Fodor’s view of the state of the art in cognitive science is accurate. However, he does not want to explain it by appeal to inherent (logical) flaws in the paradigm, as Fodor did. He regards it mainly as an empirically solvable problem once the methodological difficulties in studying reasoning have been cleared away. Why is Samuels reluctant to admit the logical problem? He provides the answer, partly in a footnote. I am astonished at how often important insights are buried in footnotes. Perhaps they are too dangerous to be inserted in the main line of argument. Here it is: “...if one seeks to provide a substantive general explanation of failures in cognitive science [to explain the problem of relevance], then the most obvious option is to reject the mechanistic assumption. [The footnote: this is the position that Descartes (1637) famously adopts in the Discourse on method; and for surprisingly similar reasons. Roughly, he thinks that it is impossible to provide a mechanistic account of reason.]”
If our only alternative to computational, connectionist, dynamical-systems and other such models of cognitive processes is to revert to Cartesian dualism, it is obvious that we must go on trying to fit these models to the mind and hope that improved ways of doing so will eventually get us out of the danger. I do not think we have to renounce a monistic account of the mind. We only have to renounce the representationist accounts of the “tokens in the brain”.