Functional neuroimaging is sometimes criticized as having nothing of interest to offer those interested in psychology: it is concerned only with where in the brain things happen, not how they happen. Although this criticism has never been valid, novel analytical methods increasingly make clear that imaging can give us access to constructs of interest to psychology. In this paper I argue that neuroimaging can give us an important, if limited, window into the large-scale structure of neural representation. I describe Representational Similarity Analysis (RSA), a method increasingly used in neuroimaging studies, and lay out desiderata for representations in general. In that context I discuss what RSA can and cannot tell us about neural representation. I compare it to a different experimental paradigm that psychology has embraced as indicative of representation, and argue that RSA compares favorably.
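The abstract names RSA without spelling out the procedure. As a rough illustration (not taken from the paper), RSA compares the geometry of neural responses to that of a model by building representational dissimilarity matrices (RDMs) over experimental conditions and correlating their off-diagonal entries. A minimal sketch in plain NumPy, with toy data and illustrative function names:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between activity patterns for each pair of conditions.
    patterns: (n_conditions, n_features) array."""
    return 1.0 - np.corrcoef(patterns)

def compare_rdms(rdm_a, rdm_b):
    """Spearman rank correlation between the upper triangles of two RDMs
    (rank-based comparison is common in RSA; computed here by hand)."""
    iu = np.triu_indices_from(rdm_a, k=1)
    a, b = rdm_a[iu], rdm_b[iu]
    ra = np.argsort(np.argsort(a)).astype(float)  # ranks of each entry
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Toy data: 6 conditions x 50 voxels, and a "model" that predicts
# similar representational structure plus noise.
rng = np.random.default_rng(0)
neural = rng.standard_normal((6, 50))
model = neural + 0.1 * rng.standard_normal((6, 50))
similarity = compare_rdms(rdm(neural), rdm(model))
print(similarity)
```

Because only the rank order of pairwise dissimilarities is compared, the method abstracts away from the measurement units of any particular imaging modality, which is part of what makes it attractive for linking neural data to psychological models.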
Neuroscience has become increasingly reliant on multi-subject research in addition to studies of unusual single patients. This research has brought with it a challenge: how are data from different human brains to be combined? The dominant strategy for aggregating data across brains is what I call ‘the cartographic approach’, which involves mapping data from individuals to a spatial template. Here I characterize the cartographic approach and argue that one of its key steps, registration, should be carried out in a way that is sensitive to the target of investigation. Because registration aims to align homologous brain locations, but not all homologous locations can be simultaneously aligned, a multiplicity of registration methods is required to meet the needs of researchers investigating different phenomena. I call this position ‘registration pluralism’. Registration pluralism has potential implications for neuroscientific practice, three of which I discuss here. This work shows the importance of reflecting more carefully on data aggregation methods, especially in light of the substantial individual differences that exist between brains.
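Registration, as characterized here, maps data from individual brains onto a spatial template. The paper does not prescribe any particular algorithm; as one simple illustration of the general idea, a landmark-based affine registration fits a linear map that carries homologous landmark coordinates in an individual brain onto the template. A minimal sketch with toy coordinates and hypothetical function names:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src landmarks onto dst landmarks.
    src, dst: (n_points, 3) arrays of homologous landmark coordinates."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (4, 3) transform matrix
    return M

def apply_affine(M, pts):
    """Apply a fitted (4, 3) affine transform to (n_points, 3) coordinates."""
    return np.hstack([pts, np.ones((pts.shape[0], 1))]) @ M

# Toy example: an individual brain that is a scaled, shifted copy of the template.
template = np.array([[0., 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [5, 5, 5]])
individual = template * 1.1 + np.array([2.0, -1.0, 0.5])
M = fit_affine(individual, template)
aligned = apply_affine(M, individual)
print(np.allclose(aligned, template))
```

Real registration methods are far more elaborate (nonlinear warps, surface-based alignment), and the paper's point is precisely that no single choice aligns all homologous locations at once; this toy case only shows what "mapping to a template" means in the simplest setting.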
The predictive processing (PP) theory of action, cognition, and perception is one of the most influential approaches to unifying research in cognitive science. But it is difficult to say how this theory could be tested empirically, or whether testing it is possible at all. In this paper, we argue that principles, general theories, and particular models driven by PP can be only partially tested or falsified. In some cases, there is insufficient detail to make falsification possible, and when details are present, the models, theories, and principles often seem implausible or outright false. Moreover, if one assumes that proper explanations should be mechanistic, several important lessons for PP can be drawn. As we will argue, current research practice is far from the normative ideals for mechanistic explanations and cognitive modeling in general.
Contrary to current rumors that there is something suspicious about the notion of mental representation, I am persuaded that the description of “intentional icons” and of “representations” first presented in my Language, Thought and Other Biological Categories (1984) captures a central and also a remarkably simple causal-explanatory principle that is involved in the workings of perception, cognition and language. So I am going to return to this description, highlighting its outlines to bring out its simplicity and also, I hope, the obviousness and innocuous nature of this principle. I will add a few words about “intensionality” and why it is irrelevant to the naturalization of mental representation.
Much of cognitive neuroscience construes cognitive capacities as involving representation in some way. Computational theories of vision, for example, typically posit structures that represent edges in the world. Neurons are often said to represent elements of their receptive fields. Despite the widespread use of representational talk in computational theorizing, there is surprisingly little consensus about how such claims are to be understood. Is representational talk to be taken literally? Is it just a useful fiction? In this talk I sketch an account of the nature and function of representation in computational cognitive models that rejects both of these views. I call it a deflationary account.