I will present a case study from neurophysiology in this paper. The case study concerns a causal question about what drives the electrical potential (i.e., the membrane or action potential) of a specific neuron. This causal question is commonly asked in molecular neuroscience and has been discussed in detail by some philosophers of neuroscience (Craver, 2007). But my case study addresses the causal question from neurophysiology, which mainly focuses on investigating the electrical properties of neurons, not their chemical or genetic properties. In this paper, I aim to use this case study to point out various puzzles in neurophysiologists' causal investigative strategies. Having done that, I will adopt some existing philosophical tools and demonstrate how to evaluate the success of the relevant causal investigative strategies in the case study. To this end, the paper will proceed as follows. In section 2, I will present the details of the case study by organizing them into five components. In the course of analyzing these five components, I will point out the puzzles regarding the causal investigative strategies involved. In section 3, I will review some existing philosophical tools for evaluating the success of investigative strategies in the biological sciences, focusing specifically on Craver and Darden (2013) and Potochnik (2017). I aim to integrate tools from these philosophers in order to propose a more powerful toolbox for evaluating the success of causal investigative strategies in neurophysiology. In section 4, I will use the proposed toolbox to demonstrate how to evaluate the success of causal investigative strategies in the case study.
An adequate explication of miscomputation should do justice to the practices involved in the computational sciences. As relevant practices outside computer science have so far been overlooked, I begin to fill this gap by distinguishing different notions of miscomputation in computational psychiatry. I argue that a satisfactory explication of miscomputation in computational psychiatry should essentially appeal to semantic properties that characterise the interaction between a target computing system and its environment. Any account of physical computation that does not appeal to semantics for explicating miscomputation in psychiatric illness is inadequate.
Hohwy et al.’s (2008) ‘epistemological’ explanation of binocular rivalry is taken as a classic illustration of predictive coding’s ubiquity and explanatory power. I revisit the account and show that it cannot explain a core feature of binocular rivalry, namely, perceptual dominance in rewarded conditions. A more recent version of Bayesian model averaging, known as Variational Bayes, can account for the role of reward in rivalry by recasting it as a form of optimism bias. However, I argue that if we accept this modified account, we must revise our understanding of perception as a neutral, informational or ‘theoretical’ process in the mind.
Optogenetics makes possible the control of neural activity with light. In this paper, I explore how the development of this experimental tool has brought about methodological and theoretical advances in the neurobiological study of memory. I begin with Semon’s (1921) distinction between the engram and the ecphory, explaining how these concepts present a methodological challenge to investigating memory. Optogenetics provides a way to intervene on the engram without the ecphory, which in turn opens up new means for testing theories of memory error. I focus on a series of experiments in which optogenetics is used to study false memory and forgetting. I conclude with a discussion of the recent discovery of “silent engrams” (e.g., Roy, Muralidhar, Smith, & Tonegawa, 2017) using optogenetics and the ways in which these results create further opportunities and challenges for engram theory.