Much of cognitive neuroscience construes cognitive capacities as involving representation in some way. Computational theories of vision, for example, typically posit structures that represent edges in the world. Neurons are often said to represent elements of their receptive fields. Despite the widespread use of representational talk in computational theorizing, there is surprisingly little consensus about how such claims are to be understood. Is representational talk to be taken literally? Is it just a useful fiction? In this talk I sketch an account of the nature and function of representation in computational cognitive models that rejects both of these views. I call it a deflationary account.
Information theory is a formal theory of representation. All signals in an information-processing pipeline are, in a minimal sense, representations. Core cases of representation are those for which, furthermore, a description in information-theoretic terms is particularly illuminating---that is, those that undergo substantial source- or channel-coding.
Opponents of the new mechanistic account of scientific explanation argue that the new mechanists are committed to a ‘More Details Are Better’ claim: adding details about the mechanism always improves an explanation. Given this commitment, the mechanistic account cannot be descriptively adequate, since actual scientific explanations usually leave out details about the mechanism. In reply to this objection, defenders of the new mechanistic account have highlighted that only adding relevant mechanistic details improves an explanation, and that relevance is to be determined relative to the phenomenon-to-be-explained. Craver and Kaplan (2018) provide a thorough reply along these lines, specifying that the phenomena at issue are contrasts. In this paper, we discuss Craver and Kaplan’s reply. We argue that it needs to be modified in order to avoid three problems: what we call the Odd Ontology Problem, the Multiplication of Mechanisms Problem, and the Ontic Completeness Problem. However, even this modification faces two challenges: first, it remains unclear how explanatory relevance is to be determined for contrastive explananda within the mechanistic framework; second, it remains to be shown how the new mechanistic account can avoid what we call the ‘Vertical More Details Are Better’ objection. We provide answers to both challenges.
We represent the world in a variety of ways: through percepts, concepts, propositional attitudes, words, numerals, recordings, musical scores, photographs, diagrams, mimetic paintings, etc. Some of these representations are mental. It is customary for philosophers to distinguish two main kinds of mental representation: perceptual representation (e.g., visual, auditory, tactile) and conceptual representation. This essay presupposes a version of this dichotomy and explores the way in which a further kind of representation – procedural representation – represents. It is argued that, in some important respects, procedural representations represent differently from both purely conceptual representations and purely perceptual representations. Although procedural representations, just like conceptual and perceptual representations, involve modes of presentation, their modes of presentation are distinctively practical, in a sense which I will clarify. It is argued that an understanding of this sort of practical representation has important consequences for the debate on the nature of know-how.
In recent years, neuroscience has begun to transform itself into a “big data” enterprise through the importation of computational and statistical techniques from machine learning and informatics. In addition to their technological applications (e.g., brain-computer interfaces and early diagnosis of neuropathology), these techniques promise to advance new solutions to longstanding theoretical quandaries. Here I critically assess whether these promises will pay off, focusing on the application of multivariate pattern analysis (MVPA) to the problem of reverse inference. I argue that MVPA does not inherently provide a new answer to classical worries about reverse inference, and that the method faces pervasive interpretive problems of its own. Further, the epistemic setting of MVPA and other “decoding” methods contributes to a potentially worrisome shift towards prediction and away from explanation in fundamental neuroscience.