Friday 21 December 2012

The Aristotelian shipwreck



The Aristotelian shipwreck, buried in Kantian sand dunes, reappears unexpectedly here and there. Some weeks ago, Harry Procter sent me Mats Bergman's paper “Representationism and Presentationism” (2007), which is a valuable account of Peirce’s struggle to unite his semiotic understanding of consciousness with the doctrine of immediate perception. The article shows, once again, how useful it is to take a careful developmental approach that compares and contrasts Peirce’s early and mature views of perception.

In my reading of the article, Peirce did not manage to reach his aim, for two reasons. First, his early theoretical maxim that everything in consciousness is semiotic presupposes a unified conception of sign that unites the elementary modes of perception with the “higher” processes in consciousness, such as remembering, imagining, and thinking. Peirce never reached such a conception, although he experimented with different versions of the triadic conception of object, representamen, and interpretant. His later solution of categorising signs according to the ontological categories of firstness, secondness, and thirdness blurred the picture and created problems for analysing consciousness in terms of the relationships between iconic, indexical, and symbolic signs.

The second problem that I sensed when reading Bergman concerns Peirce’s way of approaching perception from the traditional Aristotelian angle. The following passage, which Bergman quotes in full, illustrates it very well:

. . .the object of a sign, that to which it, virtually at least, professes to be applicable, can itself be only a sign. For example, the object of an ordinary proposition is [a] generalization from a group of perceptual facts. It represents those facts. These perceptual facts are themselves abstract representatives, though we know not precisely what intermediaries, of the percepts themselves; and these are themselves viewed, and are,—if the judgment has any truth,—representations, primarily of impressions of sense, ultimately of a dark underlying something, which cannot be specified without its manifesting itself as a sign of something below. There is, we think, and reasonably think, a limit to this, an ultimate reality like a zero of temperature. But in the nature of things, it can only be approached, it can only be represented. The immediate object which any sign seeks to represent is itself a sign. (MS 599:36–37 [c. 1902]; cf. NEM 4:309–310 [c. 1894?])

The receptive understanding of perception that Aristotle cemented seems here to be modified by a Kantian cautiousness that prevents Peirce from naming “the thing in itself” as the starting point. Instead, he refers to “a dark underlying something”. Nevertheless, the following steps seem to come close to the medieval debates on abstractive cognition, or Roger Bacon’s doctrine of De multiplicatione specierum. The path from “impressions of sense” through “percepts” and “perceptual facts” to “ordinary propositions” is a faithful reproduction of the Aristotelian theory of cognition.

The inherent dynamics of this account bring about problems that lead to “a multiplication” of the steps that seem to be needed. Bergman’s paper illustrates this nicely with Peirce’s conceptual division between “percept” and “percipuum”. As instances of secondness, percepts only convey the “brute force” of their instigators. Percipuums are a bit like Fodor’s symbolic representations. Still within the perceptual domain, they involve a kind of identifying judgment that makes them appearances of something: “But the moment we fix our minds upon it and think the least thing about the percept, it is the perceptual judgment that tells us what we so ‘perceive’.” (CP 7.643 [c. 1903])

The recent commentaries that try to bring some coherence to Peirce’s prolific and contradictory thought seem, unwittingly, to reproduce this tendency of introducing more steps and substages in order to cope with the Aristotelian fallacy. Bergman presents Carl Hausman’s suggestion to distinguish between percept1 and percept2. The former denotes the dynamic object, or the immediate impact of the brute force, while the latter incorporates a cognitive generalization that makes the percept suitable for becoming the immediate object of thought, or the percipuum.

At this stage, I cannot help concluding that Peirce was, as we still mostly are, too closely tied to the Aristotelian understanding of cognition to unite his semiotic maxim of consciousness with his accounts of perception. Jerry Fodor’s much more recent attempt to bridge the rift between propositional attitudes and immediate sense impressions is another reproduction of the ancient problem, which remains unsolved despite terminological modifications.

Sunday 28 October 2012

Cognitive models of reasoning

Approaches that endorse the causal nature of representation have difficulties in explaining the selectivity that any living organism shows in its behaviour. Not every represented feature of the environment is relevant. The organism’s (construed) goal seems to affect the ways in which it handles incoming information. Worse still, such information seems to be interpreted in relation to the goal and to the organism’s experiences of the conditions for attaining it. This is a long-known problem. In the philosophy of cognitive science it is discussed as the relevance problem. Richard Samuels has presented a recent summary of the issue and the various attempts at solving it.

Reasoning seems to be the pivotal cognitive process that transforms incoming information into action plans. In order to provide a naturalistic account of cognition, this mental activity should be modelled in terms of some algorithmic procedure. As Fodor suspected in “The mind doesn’t work that way”, and as Samuels concluded in 2010, neither the classical computational theory of cognition nor the connectionist approaches have succeeded. He lists various reasons, but to me the most interesting concerns the ways representations are understood.

Fodor opted for symbolic representations (as the vehicles of the language of thought) very early on, because the resemblance theory of representations was completely inadequate to account for “propositional attitudes”, like beliefs or wishes about some state of affairs. This conception generated “functional role semantics” with all sorts of paradoxes, and Fodor is both honest and sharp-eyed enough to admit the difficulties, which many of his followers (like Susan Schneider and Richard Samuels) have not wanted to do.

There is a marvellous debate between Steven Pinker and Jerry Fodor in 2005 about how to understand the workings of the mind. Fodor’s “The mind doesn’t work that way” was a critique of the classical computational approach, stimulated by Pinker’s book “How the mind works” (1997). It took Pinker five years to defend his evolutionary account of cognitive processes in a paper published in Mind and Language. You can retrieve it at Pinker’s home page. Fodor’s response in the same issue is, again, most exhilarating, but it is sadly locked up in Wiley’s archives, from which you cannot get it for free unless you have institutional access.

First, Fodor shows how the syntactic and “local” meaning of symbolic representations inevitably causes problems. He also shows how cognitive science tries to account for these with massive modularity models and by calling on evolutionary accounts for help (as Pinker does). Fodor's conclusion is very straightforward: “So how does the mind work? I don’t know. You don’t know. Pinker doesn’t know. And, I rather suspect, such is the current state of the art, that if God were to tell us, we wouldn’t understand him.”

Samuels admits five years later that Fodor’s view of the state of the art in cognitive science is accurate. However, he does not want to explain it by inherent (logical) flaws in the paradigm, as Fodor did. He regards it mainly as an empirically solvable problem, once the methodological difficulties in studying reasoning have been cleared away. Why is Samuels reluctant to admit the logical problem? He provides the answer, partly in a footnote. I am astonished at how often important insights are buried in footnotes. Perhaps they are too dangerous to be inserted into the main line of argument. Here it is: “...if one seeks to provide a substantive general explanation of failures in cognitive science [to explain the problem of relevance], then the most obvious option is to reject the mechanistic assumption. [The footnote: this is the position that Descartes (1637) famously adopts in the Discourse on method; and for surprisingly similar reasons. Roughly, he thinks that it is impossible to provide a mechanistic account of reason.]”

If our only alternative to computational, connectionist, dynamic systems, etc. models of cognitive processes is to revert to Cartesian dualism, it is obvious that we must go on trying to fit these models to the mind and hope that improved ways of doing so will eventually get us out of danger. I do not think we have to renounce a monistic account of the mind. We only have to renounce the representationist accounts of the “tokens in the brain”.

Friday 19 October 2012

Farewell to Fodor?



I think I have now reached the point at which I can leave behind Jerry Fodor’s long-standing attempt at understanding the causal chain from representations to their symbolic expression. When I sensed the Aristotelian smell in the way Fodor wrote about perceptual modules, I was not yet aware that it was just an offspring of his grand project of overcoming the Aristotelian rift between incoming information and its transformation into language. The project is really impressive in its scope, its time span, and its volume. And yet, in the end, it seems to fail. The source of the failure lurks in the very beginning, in the conception of the “language of thought”, which Fodor launched in a book (1975) with this title.

If we start by assuming that mental representations are language-like, we are free to look at the complex permutations, combinations and transformations that thinking seems to accomplish with such “symbolic representations”. But how does this coincide with the causal theory of mental representations?  
The most succinct formulation I have found by Fodor on this issue is from an exhilarating paper, jointly written with LePore in 1993, that shows the inconsistencies of inferential role semantics (endorsed by post-structuralists and constructionists of the time).

“…what makes 'dog' mean dog is some sort of symbol-world connection; perhaps some sort of causal or informational or nomological connection between tokens of the expression and tokens of the animal.”

There are two important things to notice here. First, two sets of “tokens” [presumably in the brain] are postulated: one for symbols and the other for the objects to which the symbols refer. The second thing concerns the nature of the relationship between the two sets. I have got the impression that Fodor emphasised the causal nature of the relation between the object and the token [the object representation?] earlier in his writings. This is not addressed here. Instead, the relationship between the two token sets is characterised by three possible alternatives. So, symbol tokens inherit their meaning content from the [causally] tokened object representations. Such mental symbols can be combined, permuted, etc., in the language of thought and, eventually, also be expressed in natural languages.

But how is this “causal or informational or nomological” relationship implemented (in the brain, presumably)? How is the first set of tokens transformed into the second set? Do we not have here exactly the problem of how meaning is attached to representations, which Daniel Stern’s homemade attempt in The Motherhood Constellation addressed on a small scale? Fodor’s edifice is obviously much more extensive, but it nevertheless runs into the same problems.

It is interesting to note that Fodor’s work is refuted from two sides. Both representatives of semantic theories, like Jylkkä (2011), and strict cognitive science philosophers, such as Cummins, show that his functionalist account entails forbidden amalgamations. On the one hand, you cannot do away with the inferential nature of symbols, concepts, and their verbal expressions. On the other hand, by assuming “symbolic representations” you destroy the idea of representation as it is used in cognitive science by overextending its meaning. It seems that both camps are justified in their arguments.

In order to accomplish a unitary chain from perception to expression, mediated by a central process, Fodor had to invent a number of bridging concepts. I felt awkward when encountering the terms “symbolic representation”, “language of thought” or LOT, and “tokens”. Reading commentaries and reviews did not help much. Consider the following summary of LOT:

“…cognitive processes are the causal sequencing of tokenings of symbols in the brain… LOT also claims that mental representations have a combinatorial syntax. A representational system has a combinatorial syntax just in case it employs a finite store of atomic representations that may be combined to form compound representations… Relatedly, LOT holds that symbols have a compositional semantics—the meaning of compound representations is a function of the meaning of the atomic symbols, together with the grammar.” (Schneider & Katz, 2011)
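To make sure I understand what this summary is claiming, here is a minimal toy sketch of a “combinatorial syntax with compositional semantics”. All the symbol names and the combination rule are my own hypothetical illustrations, not anything from Fodor or from Schneider and Katz; the point is only that a finite store of atomic symbols, a syntactic rule for combining them, and a meaning function defined over that syntax suffice to assign a meaning to every compound.

```python
# Toy illustration of the LOT claims quoted above (all names hypothetical):
# a finite store of atomic symbols, a combination rule (the syntax), and a
# meaning function that is compositional over that syntax.

# Finite store of atomic representations, each paired with a meaning.
ATOMIC_MEANINGS = {
    "DOG": "dog",
    "BROWN": "brown",
    "NOT": "not",
}

def combine(*symbols):
    """Syntax: a compound representation is a tuple of simpler ones."""
    return tuple(symbols)

def meaning(rep):
    """Compositional semantics: the meaning of a compound is a function of
    the meanings of its parts plus the mode of combination (here, the
    first element is applied to the rest)."""
    if isinstance(rep, str):          # atomic symbol: look up its meaning
        return ATOMIC_MEANINGS[rep]
    head, *rest = rep                 # compound: head applies to the rest
    return f"{ATOMIC_MEANINGS[head]}({', '.join(meaning(r) for r in rest)})"

brown_dog = combine("BROWN", "DOG")
print(meaning(brown_dog))                    # brown(dog)
print(meaning(combine("NOT", brown_dog)))    # not(brown(dog))
```

Note that nothing in the sketch says where the atomic meanings come from: the lookup table simply stipulates the symbol-world connection, which is exactly the point at which the “causal or informational or nomological” story is supposed to take over.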

Will I ever get out of such a conceptual mess? Or should I just leave it behind to rest in the peace of countless journal articles, hidden in the archives of publishing houses that charge $30 for access?

It seems that different strands of cognitive science philosophy work with quite different conceptions of representation. The computational approaches to sense perception (illustrated by David Marr’s early research) understand representations as the end product of some basic algorithmic procedures on sense data, like retinal images. Symbolic representations, in turn, are products of algorithmic procedures with “atomic symbols”, if I can make any sense of the above quotation.
 
But the same problem that can be recognised in Fodor and LePore’s formulation is present here. How do we get from the representations that are constructed (computed) out of sense data to the symbolic representations (“atomic symbols”)? We cannot, as Cummins and Roth argue in a recent article. There is a rift between the causal sequences that operate on “incoming forms” and the “symbol manipulating operations” performed by the “central processor”. Perceptual representations are not symbolic. In my next post, I will summarise this line of argument. Then I shall explore a bit more closely what Schneider and Katz mean by “atomic symbols”.

Thursday 13 September 2012

Resurrection of resemblance theory



Has the Aristotelian “resemblance theory” of representation been abandoned when we embark on a computational analysis of cognitive processes? This question came to my mind when reading Geir Kirkebøen’s excellent article “Descartes’ psychology of vision and cognitive science” (1998). I stumbled upon it after Fodor’s modularity book had sent me to read texts on computational neuroscience, and I came across David Marr’s influential book on vision (1982). It appeared posthumously; Marr died of leukaemia at the age of only 35. A reprint of the book has been issued quite recently, and there are several good reviews of Marr’s conceptualisations. Kirkebøen traces the historical antecedents of Marr’s view and demonstrates its similarity with Descartes’ theory of vision. The article, once again, demonstrates the immeasurable value of a historical approach to the theoretical issues of psychology. If you can get hold of it, do read it.

Kirkebøen argues that Marr’s computational understanding is a restatement of Descartes’ solution to the problem of how res extensa can become res cogitans. Descartes showed that extension (a “picture” on the retina) is transformed into mechanical movements in the optic nerve by a computational process that models two-dimensional objects, just as his analytic geometry modelled Euclidean forms. This seems indeed to be the end of the “resemblance theory” of representations, as Fodor pointed out.

However, the idea that our visual perception uses algorithmic procedures to generate three-dimensional perceptibles from two-dimensional sense data does not abolish the problem of the relationship between the source and its retinal imprint. Oron Shagrir’s recent (2010) commentary on Marr’s theory is helpful here. The following quotations are terminologically difficult for those of us who are not trained in the language of computational neuroscience, but I believe you can recognise the point:

“In the case of edge detection, Marr (1982, 68ff.) refers to the constraint of spatial localization, which means (in this context) that the things in the world that give rise to intensity changes are spatially localized… Another pertinent physical fact is that intensity changes in the [retinal] image result from “surface discontinuities or from reflectance or illumination boundaries.” (Shagrir, 2010, p. 488)
“Marr’s explanation appeals to similarity between the internal mapping relations and external relations between the features that are being represented. The similarity is not at  the level of physical properties. After all, the physiological properties of the brain are quite different from the physical and optical properties that make up our visual field. The similarity is at a more abstract level of mathematical properties.” (p. 489)
“Thus, Marr not only demonstrates that the internal mathematical function correlates with the contingent world that we live in. He also underscores the basis of this correlation, which is a similarity of mathematical structures. This mathematically based similarity, I maintain, is the key in addressing the appropriateness problem…” (p. 489)
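The first quotation is easier to grasp with a concrete, much-simplified stand-in. Marr’s actual primal-sketch algorithm detects edges as zero-crossings of a Laplacian-of-Gaussian filtered image; the one-dimensional sketch below (my own simplification, with hypothetical numbers) keeps only the core idea that sharp intensity changes in the “image” are taken to signal surface discontinuities or reflectance boundaries in the world.

```python
# A one-dimensional, simplified stand-in for the edge-detection idea in the
# quotations above: intensity changes in the "retinal image" are read as
# boundaries in the world. (Not Marr's actual algorithm, which uses
# zero-crossings of a Laplacian-of-Gaussian filter on 2-D images.)

def detect_edges(intensities, threshold=0.5):
    """Mark positions where intensity changes sharply between neighbours."""
    edges = []
    for i in range(len(intensities) - 1):
        change = intensities[i + 1] - intensities[i]
        if abs(change) > threshold:
            edges.append(i)       # an edge lies between samples i and i+1
    return edges

# A hypothetical "retinal" intensity profile: a bright surface, then a
# darker one behind a surface discontinuity.
profile = [0.9, 0.9, 0.9, 0.2, 0.2, 0.2]
print(detect_edges(profile))      # [2]: one boundary detected
```

The sketch also shows where Shagrir’s point bites: the algorithm correlates with boundaries in the world only because of the physical constraint that sharp intensity changes are in fact caused by such boundaries, a similarity that holds at the mathematical level, not at the level of physical properties.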
Shagrir does not refer to Kirkebøen’s paper and is perhaps not familiar with it. The second quotation, however, reproduces almost literally the Cartesian solution to the difference between the optic imprint and the physiological properties of the brain: the similarity is established by mathematical modelling. To me this is a restatement of the resemblance theory. It is still about “form without matter”, even if the form is no longer understood in pictorial terms. Shagrir also restates Aristotle's idea that vision is about distinguishing light from darkness.