1
Biao D, Umoh K, Qiguang C, Xiaole W, Ting F, Yuqian Y, Jinchao Z, Fushui L. The Role of Mindfulness Therapy in the Treatment of Chronic Pain. Curr Pain Headache Rep 2024. PMID: 38951466. DOI: 10.1007/s11916-024-01284-w.
Abstract
PURPOSE OF REVIEW Mindfulness therapy is a widely used treatment for many diseases and has been shown to improve pain-related function. There is growing support for the use of psychotherapy in the treatment of chronic pain. While studies have shown a positive effect of mindfulness therapy, psychosocial factors must be considered, as a small number of studies still question its effectiveness. RECENT FINDINGS Current studies indicate that mindfulness therapy engages cognitive factors related to chronic pain, both in how pain-related cognitions are produced and in how they are controlled. Psychological and neurobiological studies were reviewed to provide a deeper understanding of these components, which include thought inhibition, attention deficit, pain catastrophizing, and self-efficacy. Mindfulness therapy has the potential to normalize psychological and neural function and to increase connectivity within and between brain networks related to stress perception, cognition, and emotion; however, further research is needed to fully understand its effects. By exploring the relationship between mindfulness therapy and chronic pain, this review provides a new avenue for future research in psychotherapy for patients with chronic pain.
Affiliation(s)
- Deng Biao
- School of Clinical Medicine, Jiangxi University of Traditional Chinese Medicine, Nanchang, China
- KuyikAbasi Umoh
- School of Clinical Medicine, Jiangxi University of Traditional Chinese Medicine, Nanchang, China
- Cao Qiguang
- Department of Acupotomy and Chiropractic, Affiliated Hospital of Jiangxi University of Traditional Chinese Medicine, Nanchang, China
- Wang Xiaole
- Department of Acupotomy and Chiropractic, Affiliated Hospital of Jiangxi University of Traditional Chinese Medicine, Nanchang, China
- Fang Ting
- Department of Acupotomy and Chiropractic, Affiliated Hospital of Jiangxi University of Traditional Chinese Medicine, Nanchang, China
- Yang Yuqian
- School of Clinical Medicine, Jiangxi University of Traditional Chinese Medicine, Nanchang, China
- Zhu Jinchao
- School of Clinical Medicine, Jiangxi University of Traditional Chinese Medicine, Nanchang, China
- Liu Fushui
- Department of Acupotomy and Chiropractic, Affiliated Hospital of Jiangxi University of Traditional Chinese Medicine, Nanchang, China
2
Piantadosi ST. The algorithmic origins of counting. Child Dev 2023; 94:1472-1490. PMID: 37984061. DOI: 10.1111/cdev.14031.
Abstract
The study of how children learn numbers has yielded one of the most productive research programs in cognitive development, spanning empirical and computational methods, as well as nativist and empiricist philosophies. This paper provides a tutorial on how to think computationally about learning models in a domain like number, where learners take finite data and go far beyond what they directly observe or perceive. To illustrate, this paper then outlines a model which acquires a counting procedure using observations of sets and words, extending the proposal of Piantadosi et al. (2012). This new version of the model responds to several critiques of the original work and outlines an approach which is likely appropriate for acquiring further aspects of mathematics.
3
Quilty-Dunn J, Porot N, Mandelbaum E. The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences. Behav Brain Sci 2022; 46:e261. PMID: 36471543. DOI: 10.1017/S0140525X22002849.
Abstract
Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language-of-thought (LoT). We outline six core properties of LoTs: (i) discrete constituents; (ii) role-filler independence; (iii) predicate-argument structure; (iv) logical operators; (v) inferential promiscuity; and (vi) abstract content. These properties cluster together throughout cognitive science. Bayesian computational modeling, compositional features of object perception, complex infant and animal reasoning, and automatic, intuitive cognition in adults all implicate LoT-like structures. Instead of regarding LoT as a relic of the previous century, researchers in cognitive science and philosophy-of-mind must take seriously the explanatory breadth of LoT-based architectures. We grant that the mind may harbor many formats and architectures, including iconic and associative structures as well as deep-neural-network-like architectures. However, as computational/representational approaches to the mind continue to advance, classical compositional symbolic structures - that is, LoTs - only prove more flexible and well-supported over time.
Affiliation(s)
- Jake Quilty-Dunn
- Department of Philosophy and Philosophy-Neuroscience-Psychology Program, Washington University in St. Louis, St. Louis, MO, USA. sites.google.com/site/jakequiltydunn/
- Nicolas Porot
- Africa Institute for Research in Economics and Social Sciences, Mohammed VI Polytechnic University, Rabat, Morocco. nicolasporot.com
- Eric Mandelbaum
- Departments of Philosophy and Psychology, The Graduate Center & Baruch College, CUNY, New York, NY, USA. ericmandelbaum.com
4
Abstract
It is popular in psychology to hypothesize that representations of exact number are innately determined-in particular, that biology has endowed humans with a system for manipulating quantities which forms the primary representational substrate for our numerical and mathematical concepts. While this perspective has been important for advancing empirical work in animal and child cognition, here we examine six natural predictions of strong numerical nativism from a multidisciplinary perspective, and find each to be at odds with evidence from anthropology and developmental science. In particular, the history of number reveals characteristics that are inconsistent with biological determinism of numerical concepts, including a lack of number systems across some human groups and remarkable variability in the form of numerical systems that do emerge. Instead, this literature highlights the importance of economic and social factors in constructing fundamentally new cognitive systems to achieve culturally specific goals.
5
Verbal interference paradigms: A systematic review investigating the role of language in cognition. Psychon Bull Rev 2022; 30:464-488. PMID: 35996045. DOI: 10.3758/s13423-022-02144-7.
Abstract
This paper presents a systematic review of the empirical literature that uses dual-task interference methods for investigating the on-line involvement of language in various cognitive tasks. In these studies, participants perform some primary task X putatively recruiting linguistic resources while also engaging in a secondary, concurrent task. If performance on the primary task decreases under interference, there is evidence for language involvement in the primary task. We assessed studies (N = 101) reporting at least one experiment with verbal interference and at least one control task (either primary or secondary). We excluded papers with an explicitly clinical, neurological, or developmental focus. The primary tasks identified include categorization, memory, mental arithmetic, motor control, reasoning (verbal and visuospatial), task switching, theory of mind, visual change, and visuospatial integration and wayfinding. Overall, the present review found that covert language is likely to play a facilitative role in memory and categorization when items to be remembered or categorized have readily available labels, when inner speech can act as a form of behavioral self-cuing (inhibitory control, task set reminders, verbal strategy), and when inner speech is plausibly useful as "workspace," for example, for mental arithmetic. There is less evidence for the role of covert language in cross-modal integration, reasoning relying on a high degree of visual detail or items low on nameability, and theory of mind. We discuss potential pitfalls and suggestions for streamlining and improving the methodology.
6
Algorithms of adaptation in inductive inference. Cogn Psychol 2022; 137:101506. PMID: 35872374. DOI: 10.1016/j.cogpsych.2022.101506.
Abstract
We investigate the idea that human concept inference utilizes local adaptive search within a compositional mental theory space. To explore this, we study human judgments in a challenging task that involves actively gathering evidence about a symbolic rule governing the behavior of a simulated environment. Participants learn by performing mini-experiments before making generalizations and explicit guesses about a hidden rule. They then collect additional evidence themselves (Experiment 1) or observe evidence gathered by someone else (Experiment 2) before revising their own generalizations and guesses. In each case, we focus on the relationship between participants' initial and revised guesses about the hidden rule concept. We find an order effect whereby revised guesses are anchored to idiosyncratic elements of the earlier guess. To explain this pattern, we develop a family of process accounts that combine program induction ideas with local (MCMC-like) adaptation mechanisms. A particularly local variant of this adaptive account captures participants' hypothesis revisions better than a range of alternative explanations. We take this as suggestive that people deal with the inherent complexity of concept inference partly through use of local adaptive search in a latent compositional theory space.
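The anchoring effect described above can be caricatured with a minimal greedy local-search sketch (our own illustration with assumed details: the threshold-rule space, the data, and the acceptance scheme are invented for exposition and are not the authors' model). Because each proposal perturbs the current guess by one step, the revised hypothesis stays close to where search began.

```python
import random

random.seed(0)  # reproducible illustration

def accuracy(threshold, data):
    """Fraction of (size, label) observations explained by the rule
    'positive iff size > threshold'."""
    return sum((size > threshold) == label for size, label in data) / len(data)

def revise(threshold, data, steps=500):
    """Local search: propose one-step perturbations of the current guess
    and accept only improvements, so revisions stay anchored to it."""
    best, best_score = threshold, accuracy(threshold, data)
    for _ in range(steps):
        proposal = best + random.choice([-1, 1])  # local move only
        score = accuracy(proposal, data)
        if score > best_score:
            best, best_score = proposal, score
    return best

# Hidden rule: "size > 5". Starting from an initial guess of 9, local
# one-step revisions walk the threshold down to the data-supported value.
data = [(size, size > 5) for size in range(11)]
revised = revise(9, data)
assert revised == 5
```

A fuller MCMC treatment would also accept some score-decreasing proposals with small probability; the greedy variant above keeps the sketch deterministic while preserving the locality that produces anchoring.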
7
Abstract
A major goal of linguistics and cognitive science is to understand what class of learning systems can acquire natural language. Until recently, the computational requirements of language have been used to argue that learning is impossible without a highly constrained hypothesis space. Here, we describe a learning system that is maximally unconstrained, operating over the space of all computations, and is able to acquire many of the key structures present in natural language from positive evidence alone. We demonstrate this by providing the same learning model with data from 74 distinct formal languages which have been argued to capture key features of language, have been studied in experimental work, or come from an interesting complexity class. The model is able to successfully induce the latent system generating the observed strings from small amounts of evidence in almost all cases, including for regular (e.g., aⁿ, [Formula: see text], and [Formula: see text]), context-free (e.g., [Formula: see text], and [Formula: see text]), and context-sensitive (e.g., [Formula: see text], and xx) languages, as well as for many languages studied in learning experiments. These results show that relatively small amounts of positive evidence can support learning of rich classes of generative computations over structures. The model provides an idealized learning setup upon which additional cognitive constraints and biases can be formalized.
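For orientation, the language classes named in the abstract can be illustrated with tiny recognizers (our own sketch; these are standard textbook examples of each class, not necessarily the exact languages elided as "[Formula: see text]"):

```python
import re

def is_ab_n(s):
    """Regular language (ab)^n: zero or more repetitions of 'ab'."""
    return re.fullmatch(r"(ab)*", s) is not None

def is_an_bn(s):
    """Context-free language a^n b^n: n a's followed by n b's."""
    n = len(s) // 2
    return s == "a" * n + "b" * n

def is_an_bn_cn(s):
    """Context-sensitive language a^n b^n c^n."""
    n = len(s) // 3
    return s == "a" * n + "b" * n + "c" * n

assert is_ab_n("ababab") and not is_ab_n("aabb")
assert is_an_bn("aabb") and not is_an_bn("abab")
assert is_an_bn_cn("aabbcc") and not is_an_bn_cn("abcabc")
```

Each class strictly contains the previous one in expressive power, which is why a single learner covering all three from positive evidence alone is a strong result.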
8
Mental compression of spatial sequences in human working memory using numerical and geometrical primitives. Neuron 2021; 109:2627-2639.e4. PMID: 34228961. DOI: 10.1016/j.neuron.2021.06.009.
Abstract
How does the human brain store sequences of spatial locations? We propose that each sequence is internally compressed using an abstract, language-like code that captures its numerical and geometrical regularities. We exposed participants to spatial sequences of fixed length but variable regularity while their brain activity was recorded using magneto-encephalography. Using multivariate decoders, each successive location could be decoded from brain signals, and upcoming locations were anticipated prior to their actual onset. Crucially, sequences with lower complexity, defined as the minimal description length provided by the formal language, led to lower error rates and to increased anticipations. Furthermore, neural codes specific to the numerical and geometrical primitives of the postulated language could be detected, both in isolation and within the sequences. These results suggest that the human brain detects sequence regularities at multiple nested levels and uses them to compress long sequences in working memory.
9
Piantadosi ST. The computational origin of representation. Minds Mach (Dordr) 2021; 31:1-58. PMID: 34305318. PMCID: PMC8300595. DOI: 10.1007/s11023-020-09540-9.
Abstract
Each of our theories of mental representation provides some insight into how the mind works. However, these insights often seem incompatible, as the debates between symbolic, dynamical, emergentist, sub-symbolic, and grounded approaches to cognition attest. Mental representations-whatever they are-must share many features with each of our theories of representation, and yet there are few hypotheses about how a synthesis could be possible. Here, I develop a theory of the underpinnings of symbolic cognition that shows how sub-symbolic dynamics may give rise to higher-level cognitive representations of structures, systems of knowledge, and algorithmic processes. This theory implements a version of conceptual role semantics by positing an internal universal representation language in which learners may create mental models to capture dynamics they observe in the world. The theory formalizes one account of how truly novel conceptual content may arise, allowing us to explain how even elementary logical and computational operations may be learned from a more primitive basis. I provide an implementation that learns to represent a variety of structures, including logic, number, kinship trees, regular languages, context-free languages, domains of theories like magnetism, dominance hierarchies, list structures, quantification, and computational primitives like repetition, reversal, and recursion. This account is based on simple discrete dynamical processes that could be implemented in a variety of different physical or biological systems. In particular, I describe how the required dynamics can be directly implemented in a connectionist framework. The resulting theory provides an "assembly language" for cognition, where high-level theories of symbolic computation can be implemented in simple dynamics that themselves could be encoded in biologically plausible systems.
10
Planton S, van Kerkoerle T, Abbih L, Maheu M, Meyniel F, Sigman M, Wang L, Figueira S, Romano S, Dehaene S. A theory of memory for binary sequences: Evidence for a mental compression algorithm in humans. PLoS Comput Biol 2021; 17:e1008598. PMID: 33465081. PMCID: PMC7845997. DOI: 10.1371/journal.pcbi.1008598.
Abstract
Working memory capacity can be improved by recoding the memorized information in a condensed form. Here, we tested the theory that human adults encode binary sequences of stimuli in memory using an abstract internal language and a recursive compression algorithm. The theory predicts that the psychological complexity of a given sequence should be proportional to the length of its shortest description in the proposed language, which can capture any nested pattern of repetitions and alternations using a limited number of instructions. Five experiments examine the capacity of the theory to predict human adults' memory for a variety of auditory and visual sequences. We probed memory using a sequence violation paradigm in which participants attempted to detect occasional violations in an otherwise fixed sequence. Both subjective complexity ratings and objective violation detection performance were well predicted by our theoretical measure of complexity, which simply reflects a weighted sum of the number of elementary instructions and digits in the shortest formula that captures the sequence in our language. A simpler transition probability model, tested as a single predictor, accounted for significant variance in the data; however, goodness-of-fit improved significantly when the language-based complexity measure was added to the statistical model, and the variance explained by the transition probability model largely decreased. Model comparison also showed that shortest description length in a recursive language provides a better fit than six alternative previously proposed models of sequence encoding. The data support the hypothesis that, beyond the extraction of statistical knowledge, human sequence coding relies on an internal compression using language-like nested structures.
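The description-length idea can be caricatured with a toy greedy compressor (our own sketch: the paper's language is richer and recursive, and the two "instructions" and unit costs below are assumptions, not the authors' formula). Regular sequences compress into few instructions and thus score low complexity.

```python
def description_length(seq):
    """Cost (instructions + digits) of a greedy description of seq in a
    toy two-primitive language: 'repeat k times' and 'alternate k times'."""
    cost = 0
    i = 0
    n = len(seq)
    while i < n:
        # longest run of one repeated symbol starting at i
        run = 1
        while i + run < n and seq[i + run] == seq[i]:
            run += 1
        # longest strict alternation starting at i
        alt = 1
        while i + alt < n and seq[i + alt] != seq[i + alt - 1]:
            alt += 1
        span = max(run, alt)
        cost += 2          # one instruction plus its length digit
        i += span
    return cost

regular   = "ABABABABABAB"   # pure alternation: a single instruction
irregular = "AABABBABAABB"   # few regularities: many instructions
assert description_length(regular) < description_length(irregular)
```

Under the theory, the `regular` sequence should be rated subjectively simpler and its violations detected more reliably than those of `irregular`, which is the qualitative pattern the five experiments report.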
Affiliation(s)
- Samuel Planton
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
- Timo van Kerkoerle
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
- Leïla Abbih
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
- Maxime Maheu
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
- Université de Paris, Paris, France
- Florent Meyniel
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
- Mariano Sigman
- Laboratorio de Neurociencia, Universidad Torcuato Di Tella, Buenos Aires, Argentina
- CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina
- Facultad de Lenguas y Educación, Universidad Nebrija, Madrid, Spain
- Liping Wang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Santiago Figueira
- CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina
- Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación, Buenos Aires, Argentina
- Sergio Romano
- CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina
- Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Departamento de Computación, Buenos Aires, Argentina
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, CEA, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
- Collège de France, Paris, France
11
Rule JS, Tenenbaum JB, Piantadosi ST. The Child as Hacker. Trends Cogn Sci 2020; 24:900-915. PMID: 33012688. PMCID: PMC7673661. DOI: 10.1016/j.tics.2020.07.005.
Abstract
The scope of human learning and development poses a radical challenge for cognitive science. We propose that developmental theories can address this challenge by adopting perspectives from computer science. Many of our best models treat learning as analogous to computer programming because symbolic programs provide the most compelling account of sophisticated mental representations. We specifically propose that children's learning is analogous to a particular style of programming called hacking, making code better along many dimensions through an open-ended set of goals and activities. By contrast to existing theories, which depend primarily on local search and simple metrics, this view highlights the many features of good mental representations and the multiple complementary processes children use to create them.
Affiliation(s)
- Joshua S Rule
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Joshua B Tenenbaum
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Steven T Piantadosi
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
12
Tano P, Romano S, Sigman M, Salles A, Figueira S. Towards a more flexible language of thought: Bayesian grammar updates after each concept exposure. Phys Rev E 2020; 101:042128. PMID: 32422757. DOI: 10.1103/PhysRevE.101.042128.
Abstract
Recent approaches to human concept learning have successfully combined the power of symbolic, infinitely productive rule systems with statistical learning to explain our ability to learn new concepts from just a few examples. The aim of most of these studies is to reveal the underlying language structuring these representations and providing a general substrate for thought. However, a model of thought that is fixed once trained is at odds with the extensive literature showing how experience shapes concept learning. Here, we ask about the plasticity of these symbolic descriptive languages. We report a concept learning experiment demonstrating that humans can very rapidly change the repertoire of symbols they use to identify concepts, by compiling frequently used expressions into new symbols of the language. The pattern of concept learning times is accurately described by a Bayesian agent that rationally updates the probability of compiling a new expression according to how useful it has been for compressing concepts so far. By portraying the language of thought as a flexible system of rules, we also highlight the difficulty of pinning it down empirically.
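The compilation idea admits a simple description-length caricature (our own sketch, not the paper's Bayesian model; the unit token costs are assumptions): define a new one-token symbol for a subexpression once the tokens it saves across its uses outweigh the one-time cost of storing its definition.

```python
def should_compile(subexpr_len, uses):
    """Compile a subexpression into a new one-token symbol when the tokens
    saved across all uses exceed the one-time cost of its definition
    (a crude description-length criterion)."""
    saved = uses * (subexpr_len - 1)   # each use shrinks to a single token
    define_cost = subexpr_len + 1      # store the definition once
    return saved > define_cost

# A 4-token subexpression pays for itself after two uses:
assert not should_compile(4, 1)   # saves 3 tokens, definition costs 5
assert should_compile(4, 2)       # saves 6 tokens, definition costs 5
```

The Bayesian agent in the paper replaces this hard threshold with a graded posterior over compiling, updated after every concept exposure, but the underlying trade-off is the same: compression earned versus definition paid.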
Affiliation(s)
- Pablo Tano
- Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Argentina
- Sergio Romano
- Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Argentina
- CONICET-Universidad de Buenos Aires, Instituto de Ciencias de la Computación, Argentina
- Mariano Sigman
- Laboratorio de Neurociencia, Universidad Torcuato Di Tella, Buenos Aires, Argentina
- CONICET (Consejo Nacional de Investigaciones Científicas y Técnicas), Argentina
- Facultad de Lenguas y Educación, Universidad Nebrija, Madrid, Spain
- Alejo Salles
- CONICET-Universidad de Buenos Aires, Instituto de Cálculo, Argentina
- Santiago Figueira
- Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Argentina
- CONICET-Universidad de Buenos Aires, Instituto de Ciencias de la Computación, Argentina