Hallucinations Under Psychedelics and in the Schizophrenia Spectrum

An Interdisciplinary and Multiscale Comparison

Pantelis Leptourgos; Martin Fortier-Davy; Robin Carhart-Harris; Philip R. Corlett; David Dupuis; Adam L. Halberstadt; Michael Kometer; Eva Kozakova; Frank Larøi; Tehseen N. Noorani; Katrin H. Preller; Flavie Waters; Yuliya Zaytseva; Renaud Jardri


Schizophr Bull. 2020;46(6):1396-1408. 

Computational Modeling

In the previous sections, we described psychedelic experiences and contrasted them with psychotic experiences in SCZs. We focused in particular on the neural mechanisms that may support those experiences, both at the level of synapses (pharmacology) and of networks (brain imaging). We then explored the first-person experience (phenomenology) and described how it can be shaped by the social and cultural milieu (anthropology). Despite such a multiscale approach, our endeavor would be incomplete without discussing the links between these levels of description. Moreover, another relevant question remains open: could hallucinations with different phenomenology and neurobiology be underlain by (partially) similar mechanisms? To address these questions, we turn to the burgeoning field of computational psychiatry[131] and discuss how information processing might hold the key to both answers.

Computational models conceive the brain as an information-processing system and provide normative accounts of those processes, which are then mapped onto existing neural structures.[131] We will focus on one particular class of computational models: Bayesian models.[132,133] The main idea behind this framework is that the brain learns generative models, that is, internal, hierarchical representations of the causal structure of the world.[134,135] When new inputs enter the system through the sensors, they are combined with prior information (accumulated knowledge, which might include expectations, memories, etc.) to generate predictions about the causes of the sensory input. In short, Bayesian models conceptualize the brain as an inference machine that tests multiple hypotheses about the state of the world, the body, or the brain itself and picks the most probable one. We will summarize Bayesian theories that situate the synaptic disconnections implicit in the neuropharmacology of psychedelics (and hallucinations) within the larger context of the abnormal functional and effective connectivity studies reviewed above. The basic premise links false (perceptual) inference to disconnection, or disintegration of the psyche (in Bleuler's sense): hallucinations are conceived as aberrant perceptual inference arising from abnormal belief updating, in which abnormal synaptic connectivity leads to false inference via inappropriate weighting of sensory evidence against prior beliefs. This inappropriate weighting, implemented through neuromodulation, could underwrite hallucinations in both SCZs and psychedelic states.
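The core inferential step described above, combining prior knowledge with sensory evidence and selecting the most probable hypothesis, can be sketched in a few lines of Python. This is a toy illustration only; the hypothesis names and probability values are hypothetical and are not drawn from the article.

```python
# Toy Bayesian inference over discrete hypotheses about the cause of
# an ambiguous sensory input (all numbers are made up for illustration).

def posterior(priors, likelihoods):
    """Combine prior beliefs with sensory evidence via Bayes' rule."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnorm.values())          # normalizing constant
    return {h: p / z for h, p in unnorm.items()}

# Two candidate causes of an ambiguous sound: "voice" vs "wind".
priors = {"voice": 0.2, "wind": 0.8}        # accumulated knowledge
likelihoods = {"voice": 0.7, "wind": 0.3}   # fit to the current input

post = posterior(priors, likelihoods)
best = max(post, key=post.get)              # most probable hypothesis
```

Note that although the sensory evidence here moderately favors "voice", the strong prior keeps "wind" the most probable cause, a small-scale version of inference being shaped by accumulated expectations.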

Inference can be implemented in various ways. According to predictive coding[136] (eg, Kalman filtering[137,138] or variational free-energy minimization[139]), new sensory inputs are constantly explained away by inhibitory feedback signals, that is, predictions, sent from higher-level to lower-level areas, which "modulate" sensory inputs according to the behavioral context (Figure 1a). When predictions cannot fully explain the input, a residual error signal (ie, prediction error [PE]) is sent up the hierarchy to update the dominant hypothesis (belief), thereby reducing surprise (or surprisal). Conversely, when predictions and inputs match, no PE is generated and the current model is sustained. It is worth noting that, under certain formulations, surprise can also be minimized by appropriate action (active sampling of the environment, ie, active inference[140]), which also explains exploratory behavior and the long-term minimization of PE.[141] Crucially, both predictions and inputs are weighted according to their reliability (parameter k in Figure 1; the Kalman gain), resulting in precision-weighted PEs. In one of the first articles to propose a computational account of psychedelics, Corlett and colleagues suggested that psychedelics act by increasing the weight of priors (thus decreasing k), so that inferences become mainly driven by expectations (Figure 1b).[142] The group also suggested a tentative neural mechanism for this prior overweighting, namely "excessive AMPA-receptor signaling, in the absence of NMDA-receptor impairment."
Importantly, it has been argued that the same mechanism might underlie hallucinations in SCZs,[143–145] with a recent study validating this theory and, additionally, providing evidence for overweighted priors in a group of nonclinical voice hearers.[146] Taken together, these theories and findings suggest that hallucinations might reflect the same underlying computational mechanism, regardless of the exhibited phenomenology or clinical context.

Figure 1.

Illustration of different Bayesian models of hallucinations. (a–c) The predictive coding framework. (d–f) The circular inference framework. X, hidden cause; S, sensory variable; x and s, predictions and sensory messages; s-x, prediction error; k, relative weight of inputs as compared to predictions (Kalman gain).

The idea that serotonergic agonists increase the weight of priors is not unanimously accepted. In a recent article, Carhart-Harris and Friston suggested that the opposite might also be true, namely a relaxation of priors that increases k (Figure 1c).[147] Their REBUS theory explains, among other things, the potential therapeutic effects of psychedelics (eg, in depressive disorders) as mediated by a relaxation of the pathological priors associated with those illnesses. Intriguingly, although the REBUS and strong-prior theories seem at first sight incompatible, this is not necessarily the case. In particular, priors can be both over- and under-weighted at different levels of the cortical hierarchy; for example, weak low-level priors (high k) might be compensated by stronger high-level priors (low k).[148]
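The compensation idea can be sketched with a two-level version of the same precision-weighted update, where each level has its own gain. The gain values and the simple chaining of updates are illustrative assumptions, not the model of ref. 148:

```python
def two_level_update(high_belief, sensory_input, k_low, k_high):
    """The low level is predicted by the high level; the high level
    is in turn updated by the filtered low-level belief."""
    low_belief = high_belief + k_low * (sensory_input - high_belief)
    high_belief = high_belief + k_high * (low_belief - high_belief)
    return low_belief, high_belief

# Relaxed low-level priors (high k) combined with strong high-level
# priors (low k): low-level activity closely tracks the input, while
# the high-level interpretation barely moves.
low, high = two_level_update(0.0, 1.0, k_low=0.9, k_high=0.1)
```

With these toy values, the low-level belief ends near the input (0.9) while the high-level belief stays near the prior (0.09), showing how under- and over-weighting can coexist across the hierarchy.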

Although predictive coding is a powerful inference scheme, it is not the only one. For example, one could replace inhibitory priors with excitatory priors, resulting in a closely related algorithm in which beliefs are updated not by error signals but by the sensory inputs per se (Belief Propagation [BP]; Figure 1d). For all its generality and simplicity, BP relies on recurrent, excitatory connections. Without well-tuned control mechanisms (eg, inhibitory control), these give rise to information loops, a form of "runaway excitation" in which beliefs are erroneously amplified and the feed-forward (input) and feedback (prediction) messages become aberrantly correlated (circular inference[149,150]). There are two types of loops: descending (overcounted priors; Figure 1e) and ascending (overcounted inputs; Figure 1f). Importantly, different loops result in different types of aberrant percepts: whereas ascending loops induce unimodal hallucinations (eg, AH in SCZs), descending loops give rise to multisensory phenomena (eg, synesthesia-like experiences; MMH induced by DMT).[151] Although the link between ascending loops and SCZs has already been empirically established,[152] the link between descending loops and psychedelics remains purely theoretical and still needs experimental support.
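The loop idea can be caricatured in log-odds space, where, in the absence of loops, the belief is simply the sum of the prior and the sensory evidence, and reverberation adds extra copies of one message or the other. This is a deliberately simplified toy, not the full circular-inference model of refs. 149–150, and the loop counts below are hypothetical:

```python
import math

def belief_probability(prior_logodds, input_logodds, n_desc=0, n_asc=0):
    """Toy circular inference: descending loops re-count the prior
    n_desc extra times, ascending loops re-count the sensory input
    n_asc extra times; the log-odds belief is then squashed into a
    probability with a sigmoid."""
    logodds = (1 + n_desc) * prior_logodds + (1 + n_asc) * input_logodds
    return 1 / (1 + math.exp(-logodds))

# Weak sensory evidence, neutral prior, no loops: near-uncertainty.
clean = belief_probability(0.0, 0.5)

# The same weak evidence with three ascending loops: the input is
# overcounted and the belief is erroneously amplified toward certainty.
looped = belief_probability(0.0, 0.5, n_asc=3)
```

Here weak evidence that should leave the system close to uncertainty (about 0.62) is inflated to high confidence (about 0.88) once it reverberates, which is the sense in which overcounted inputs could turn noise into a compelling unimodal percept.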