Falandays JB, Nguyen B, Spivey MJ. Is prediction nothing more than multi-scale pattern completion of the future? Brain Res 2021;1768:147578. PMID: 34284021. DOI: 10.1016/j.brainres.2021.147578.
[Received: 12/22/2020] [Revised: 05/28/2021] [Accepted: 06/29/2021]
Abstract
While the notion of the brain as a prediction machine has been extremely influential and productive in cognitive science, there are competing accounts of how best to model and understand the predictive capabilities of brains. One prominent framework is of a "Bayesian brain" that explicitly generates predictions and uses resultant errors to guide adaptation. We suggest that the prediction-generation component of this framework may involve little more than a pattern completion process. We first describe pattern completion in the domain of visual perception, highlighting its temporal extension, and show how this can entail a form of prediction in time. Next, we describe the forward momentum of entrained dynamical systems as a model for the emergence of predictive processing in non-predictive systems. Then, we apply this reasoning to the domain of language, where explicitly predictive models are perhaps most popular. Here, we demonstrate how a connectionist model, TRACE, exhibits hallmarks of predictive processing without any representations of predictions or errors. Finally, we present a novel neural network model, inspired by reservoir computing models, that is entirely unsupervised and memoryless, but nonetheless exhibits prediction-like behavior in its pursuit of homeostasis. These explorations demonstrate that brain-like systems can get prediction "for free," without the need to posit formal logical representations with Bayesian probabilities or an inference machine that holds them in working memory.
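The "forward momentum of entrained dynamical systems" can be made concrete with a toy simulation (ours, not the authors'; all parameter values are illustrative assumptions). A phase oscillator with its own natural tempo is pulled into synchrony with a periodic driver; when the driver is removed, the oscillator keeps advancing and continues to peak near where the driver's next beats would have fallen, exhibiting prediction-like behavior with no explicit representation of a prediction or an error:

```python
import math

def simulate(steps, drive_on_until, dt=0.01):
    """Entrained phase oscillator (Kuramoto-style, single unit).

    The driver runs at 1.0 Hz; the oscillator's own tempo is 0.9 Hz
    (illustrative values). While the drive is on, a sinusoidal coupling
    term pulls the oscillator into phase-lock with the driver. After the
    drive stops, the oscillator simply carries its momentum forward.
    """
    omega_drive = 2 * math.pi * 1.0   # driver frequency (rad/s)
    omega_self = 2 * math.pi * 0.9    # oscillator's natural frequency (rad/s)
    coupling = 5.0                    # strong enough to lock (> frequency gap)
    phase, drive_phase = 0.0, 0.0
    phases = []
    for t in range(steps):
        drive_phase += omega_drive * dt
        # Coupling is active only while the driver is present.
        pull = coupling * math.sin(drive_phase - phase) if t < drive_on_until else 0.0
        phase += (omega_self + pull) * dt
        phases.append(phase)
    return phases

history = simulate(steps=2000, drive_on_until=1500)
# While locked, the oscillator advances at the driver's tempo (2*pi per second);
# after the drive stops, it keeps moving forward at its own tempo.
locked_advance = history[1400] - history[1300]   # one second while driven
free_advance = history[1900] - history[1800]     # one second after drive off
```

Here the "prediction" of the next beat is nothing the system computes; it falls out of the state the driver has already shaped, which is the paper's point about entrainment.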