The predictive processing (PP) theory of action, cognition, and perception is one of the most influential approaches to unifying research in cognitive science. It is difficult to say, however, how this theory could be tested empirically, or whether testing it is possible at all. In this paper, we argue that the principles, general theories, and particular models driven by PP can be only partially tested or falsified. In some cases, there is insufficient detail to make falsification possible; where details are present, the models, theories, and principles often seem implausible or outright false. Moreover, if one assumes that proper explanations should be mechanistic, several important lessons for PP can be drawn. As we will argue, current research practice remains far from the normative ideals of mechanistic explanation and of cognitive modeling in general.
Contrary to current rumors that there is something suspicious about the notion of mental representation, I am persuaded that the description of “intentional icons” and of “representations” first presented in my Language, Thought and Other Biological Categories (1984) captures a central and remarkably simple causal-explanatory principle that is involved in the workings of perception, cognition and language. So I am going to return to this description, highlighting its outlines to bring out its simplicity and also, I hope, the obviousness and innocuous nature of this principle. I will add a few words about “intensionality” and why it is irrelevant to the naturalization of mental representation.
Much of cognitive neuroscience construes cognitive capacities as involving representation in some way. Computational theories of vision, for example, typically posit structures that represent edges in the world. Neurons are often said to represent elements of their receptive fields. Despite the widespread use of representational talk in computational theorizing, there is surprisingly little consensus about how such claims are to be understood. Is representational talk to be taken literally? Is it just a useful fiction? In this talk, I sketch an account of the nature and function of representation in computational cognitive models that rejects both of these views. I call it a deflationary account.
Information theory is a formal theory of representation. All signals in an information-processing pipeline are, in a minimal sense, representations. Core cases of representation are those for which, furthermore, a description in information-theoretic terms is particularly illuminating: that is, those that undergo substantial source- or channel-coding.
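To make concrete what "substantial source-coding" involves, here is a minimal, self-contained sketch in Python (ours, purely illustrative; the function name huffman_code and the sample message are our own choices, not anything from the abstract). It builds a Huffman code, a textbook lossless source code that assigns short codewords to frequent symbols, and compares the length of the encoded signal against the Shannon entropy of the source, the lower bound on average codeword length:

```python
import heapq
import math
from collections import Counter

def huffman_code(message: str) -> dict[str, str]:
    """Build a Huffman code for the symbols in `message`: a classic
    lossless source code that assigns shorter codewords to more
    frequent symbols. Assumes at least two distinct symbols."""
    freqs = Counter(message)
    # Heap entries: (weight, tiebreak, [(symbol, partial codeword), ...]).
    # The tiebreak integer keeps tuple comparison away from the lists.
    heap = [(w, i, [(s, "")]) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        # Merge the two lightest subtrees, prefixing their codewords.
        w1, _, left = heapq.heappop(heap)
        w2, _, right = heapq.heappop(heap)
        merged = ([(s, "0" + c) for s, c in left]
                  + [(s, "1" + c) for s, c in right])
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return dict(heap[0][2])

message = "representation"          # illustrative source signal
code = huffman_code(message)
encoded = "".join(code[s] for s in message)

# Shannon entropy of the source: a lower bound, in bits per symbol,
# on the average codeword length of any lossless source code.
probs = [n / len(message) for n in Counter(message).values()]
entropy = -sum(p * math.log2(p) for p in probs)
print(f"code: {code}")
print(f"{len(encoded)} bits for {len(message)} symbols; "
      f"entropy = {entropy:.2f} bits/symbol")
```

As we read the abstract, the encoded signal here is the kind of core case it has in mind: the signal's shape is explained by the statistics of the source it encodes, which is exactly what an information-theoretic description brings out.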
Opponents of the new mechanistic account of scientific explanation argue that the new mechanists are committed to a ‘More Details Are Better’ claim: adding details about the mechanism always improves an explanation. Given this commitment, the mechanistic account cannot be descriptively adequate, since actual scientific explanations usually leave out details about the mechanism. In reply to this objection, defenders of the new mechanistic account have emphasized that only adding relevant mechanistic details improves an explanation, and that relevance is to be determined relative to the phenomenon-to-be-explained. Craver and Kaplan (2018) provide a thorough reply along these lines, specifying that the phenomena at issue are contrasts. In this paper, we discuss Craver and Kaplan’s reply. We argue that it must be modified to avoid three problems: what we will call the Odd Ontology Problem, the Multiplication of Mechanisms Problem, and the Ontic Completeness Problem. Even the modified reply, however, faces two challenges: first, it remains unclear how explanatory relevance is to be determined for contrastive explananda within the mechanistic framework; second, it remains to be shown how the new mechanistic account can avoid what we will call the ‘Vertical More Details Are Better’ objection. We will provide answers to both challenges.