Much of cognitive neuroscience construes cognitive capacities as involving representation in some way. Computational theories of vision, for example, typically posit structures that represent edges in the world, and neurons are often said to represent elements of their receptive fields. Despite the widespread use of representational talk in computational theorizing, there is surprisingly little consensus about how such claims are to be understood. Is representational talk to be taken literally? Is it merely a useful fiction? In this talk I sketch an account of the nature and function of representation in computational cognitive models that rejects both of these views. I call it a deflationary account.