I think I have now reached the point at
which I can leave behind Jerry Fodor’s long-standing attempt at understanding
the causal chain from representations to their symbolic expression. When I
sensed the Aristotelian smell in the way Fodor wrote about perceptual modules,
I was not yet aware that it was just an offspring of his grand project of
overcoming the Aristotelian rift between incoming information and its transformation
into language. It is really impressive regarding its scope, its time span, and
volume. And yet, in the end, it seems to fail. The source of the failure lurks
in the very beginning, in the conception of “Language of thought” , which Fodor
launched in a book (1975) with this title.
If we start by assuming that mental
representations are language-like, we are free to look at the complex
permutations, combinations and transformations that thinking seems to
accomplish with such “symbolic representations”. But how is this to be reconciled with the causal theory of mental representations?
The most succinct formulation by Fodor on this issue that I have found comes from an exhilarating paper, written jointly with LePore in 1993, which exposes the inconsistencies of inferential role semantics (endorsed by the post-structuralists and constructionists of the time).
“…what makes 'dog' mean dog is some sort of symbol-world connection; perhaps some sort of causal or informational or nomological connection between tokens of the expression and tokens of the animal.”
There are two important things to notice here. First, two sets of “tokens” [presumably in the brain] are postulated: one for symbols and the other for the objects to which the symbols refer. The second thing concerns the nature of the relationship between the two sets. My impression is that earlier in his writings Fodor emphasised the causal nature of the link between the object and its token [the object representation?]. That is not addressed here. Instead, the relationship between the two token sets is characterised by three possible alternatives. So, symbol tokens inherit their meaning content from the [causally] tokened object representations. Such mental symbols can be combined, permuted, etc., in the language of thought and, eventually, also be expressed in natural languages.
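If I try to spell this picture out for myself in the crudest possible terms, it comes to something like the following toy model. Everything in it, the class names, the fields, the “causal” link, is my own invention for getting a grip on the structure of the claim, not anything Fodor actually specifies.

```python
# A deliberately crude toy model of the two token sets and their link.
# All names here are my own illustrations; Fodor gives no such mechanism.

from dataclasses import dataclass

@dataclass
class ObjectToken:
    """A tokening caused by an encounter with a thing in the world."""
    caused_by: str                # the worldly object, e.g. an actual dog

@dataclass
class SymbolToken:
    """A tokening of an expression in the language of thought."""
    expression: str               # e.g. the mental symbol 'DOG'
    grounded_in: ObjectToken      # the causal/informational/nomological link

# A dog in the world causes an object token...
dog_sighting = ObjectToken(caused_by="dog")

# ...and the symbol token inherits its meaning content from that token.
dog_symbol = SymbolToken(expression="DOG", grounded_in=dog_sighting)

print(dog_symbol.expression, "means", dog_symbol.grounded_in.caused_by)
# DOG means dog
```

Writing it out like this at least shows where the mystery sits: in the single line that links the symbol token to the object token.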
But how is this
“causal or informational or nomological” relationship implemented (in the
brain, presumably)? How is the first set of tokens transformed into the second
set? Do we not have here exactly the problem of how meaning is attached to representations, which Daniel Stern’s homemade attempt in Motherhood Constellation addressed on a small scale? Fodor’s edifice is obviously much more extensive, but it nevertheless runs into the same problems.
It is interesting to notice that Fodor’s work is refuted from two sides. Both representatives of semantic theories, like Jylkkä (2011), and strict cognitive-science philosophers, such as Cummins, show that his functionalist account entails forbidden amalgamations. On the one hand, you cannot do away with the inferential nature of symbols, concepts, and their verbal expressions. On the other hand, by assuming “symbolic representations” you destroy the idea of representation as it is used in cognitive science by overextending its meaning. It seems that both camps are justified in their arguments.
In order to accomplish a unitary chain from
perception to expression, mediated by a central process, Fodor had to invent a
number of bridging concepts. I felt awkward when encountering the terms
“symbolic representation”, “language of thought” or LOT, and “tokens”. Reading
commentaries and reviews did not help much. Consider the following summary of
LOT:
“…cognitive
processes are the causal sequencing of tokenings of symbols in the brain… LOT
also claims that mental representations have a combinatorial syntax. A
representational system has a combinatorial syntax just in case it employs a
finite store of atomic representations that may be combined to form compound
representations… Relatedly, LOT holds that symbols have a compositional
semantics—the meaning of compound representations is a function of the meaning
of the atomic symbols, together with the grammar.” (Schneider & Katz, 2011).
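The formal skeleton of that summary can at least be made concrete. Here is a minimal sketch of a representational system with a finite store of atomic symbols, a combinatorial syntax, and a compositional semantics; the atoms, their meanings, and the trivial “grammar” are invented by me for illustration only, not LOT doctrine.

```python
# A toy representational system in the spirit of the Schneider & Katz
# summary: finite atoms, a combinatorial syntax, compositional semantics.
# The atoms and rules are my own illustrations.

# Finite store of atomic representations and their meanings.
atomic_meanings = {
    "DOG": "dog",
    "BROWN": "brown",
}

# Combinatorial syntax: atoms may be combined into compound representations.
def combine(modifier, head):
    """Form a compound representation from two symbols."""
    return (modifier, head)

# Compositional semantics: the meaning of a compound is a function of the
# meanings of its atoms together with the (here, trivial) grammar.
def meaning(representation):
    if isinstance(representation, str):     # atomic symbol
        return atomic_meanings[representation]
    modifier, head = representation         # compound representation
    return f"{meaning(modifier)} {meaning(head)}"

compound = combine("BROWN", "DOG")
print(meaning(compound))   # brown dog
```

The skeleton itself is simple enough; what stays opaque is where the meanings of the atoms come from in the first place.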
Will I ever get out of such a conceptual mess? Or should I just leave it behind to rest in the peace of countless journal articles, hidden in the archives of publishing houses that charge $30 for access?
It seems that different strands of cognitive science philosophy work with quite different conceptions of representation. The computational approaches to sense perception (illustrated by David Marr’s early research) understand representations as end products of basic algorithmic procedures performed on sense data, such as retinal images. Symbolic representations, in turn, are products of algorithmic procedures performed with “atomic symbols”, if I can make any sense of the above quotation.
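To see the contrast, consider what a computed perceptual representation looks like in the crudest caricature: an algorithmic procedure over raw intensity values, with no symbols anywhere. The miniature “retina” and the difference operation below are my own stand-ins for illustration, not Marr’s actual primal-sketch algorithms.

```python
# A caricature of a computed perceptual representation: an edge map
# derived from raw "retinal" intensities. A stand-in for, not a
# rendering of, Marr's actual algorithms.

retinal_image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

# Algorithmic procedure on sense data: mark horizontal intensity jumps.
edge_map = [
    [abs(row[x + 1] - row[x]) > 5 for x in range(len(row) - 1)]
    for row in retinal_image
]

for row in edge_map:
    print(["edge" if e else "." for e in row])
# Each row marks an edge between the dark and bright halves of the image.
```

Nothing in the edge map is a symbol; it is a causally derived structure and nothing more.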
But the same problem that can be recognised in Fodor & LePore’s formulation is present here. How do we get from the representations that are constructed (computed) out of sense data to the symbolic representations (“atomic symbols”)? We cannot, as Cummins and Roth argue in a recent article. There is a rift between the causal sequences that operate on “incoming forms” and the “symbol manipulating operations” performed by the “central processor”. Perceptual representations are not symbolic. In my next post, I will summarise this line of argument. Then I shall explore a bit more closely what Schneider and Katz mean by “atomic symbols”.