Symposia
The following unique symposia are likely to have broad influence in shaping future research. Don't miss these sessions during the meeting.
Symposium I: How Can Cognitive Psychologists Ease the Spread of Misinformation and Boost the Spread of Accurate Information in Education? Organizer: Anne M. Cleary, Colorado State University, USA. Date/Time: Friday, November 21, 2025, 10:00 AM - 12:00 PM MST
Research in cognitive psychology on what does versus does not benefit learning often struggles to impact practices in educational settings. Practices not backed by science sometimes spread more
successfully among educators than practices backed by science. Examples include the overselling of learning-styles inventories and the underselling of spaced practice in education. Turning to the science
on misinformation spread is one potential route to combatting the overselling of non-useful practices in education. At the same time, cognitive psychologists need to be careful not to oversell
specific implementations of laboratory findings that have not been thoroughly tested “in the wild” of classroom settings. In an effort to inform the achievement of the appropriate balance regarding
what exactly to sell to educators and how, this symposium brings together speakers from diverse research backgrounds ranging from research on the science of learning to the study of misinformation
spread to studying ways of carrying out real-world classroom implementations using rigorous experimental methodologies.
- Why Fuzzy Trace Theory is Necessary and Not Just Nice in Understanding and Resisting Misinformation—Valerie F. Reyna, Cornell University, USA
We discuss new research on
misinformation and a theoretical framework that challenges current approaches. Fuzzy-trace theory (FTT) integrates prior approaches, such as System 1 and 2, but goes beyond them to explain
belief, spread, and resistance to misinformation. FTT distinguishes gist representations—what makes sense—from verbatim representations—what was said. FTT’s mechanisms generalize across disciplines,
including education, relevant as both a source of misinformation and a remedy for it. Research from our lab and from others: (a) clarifies when education is protective; (b) explains how humans
and artificial intelligence (AI) spread misinformation, undermining trust; (c) documents the differing mental representations—verbatim and gist—that people take away from the same message;
and (d) demonstrates that gist powers successful AI-driven interventions to resist misinformation that are durable and transfer to superficially dissimilar content. Thus, evidence indicates
that messages that promote gist-based processing are shared, remembered, believed, and acted on differently than those that do not. Interventions are effective when they change this gist,
by changing what makes sense.
- Growth Mindset: Separating Myth from Reality—Alexander P. Burgoyne, HumRRO & Georgia Institute of Technology, USA
For the past several decades, there has been a great
deal of interest in growth mindset and its potential impact on academic achievement. Unfortunately, it appears that many of the presumed benefits of growth mindset have been overstated. In
this talk, I summarize recent developments in growth mindset research and attempt to provide an explanation for its widespread appeal. I review research on growth mindset’s antecedents and
consequences, as well as meta-analytic evidence on the effect of growth mindset interventions on academic achievement. In addition, I describe several distinct process models that have been
offered for growth mindset’s hypothesized effects and analyze these models through the lens of theory falsification. Finally, I address the elephant in the room: Why has growth mindset been
so influential in education and among lay audiences? And what, if anything, can be done to increase the validity of public perception of popular constructs in psychological science?
- Educational Psychology Information and Misinformation: Does Methodology Matter?—Daniel H. Robinson, University of Texas at Arlington, USA
Over the past 40+ years, the field
of educational psychology has experienced (1) a continued decline in the proportion of empirical articles that include intervention and experimental studies, (2) an increase in the proportion
that include observational studies, and (3) an increase in the proportion of the latter that include recommendations for practice. What are the effects of such trends in education? Robinson
and Levin (2019) recently provided many examples of educational quackery where popular and heavily promoted interventions/innovations/solutions were later discovered to be bogus and lacking
empirical support (e.g., learning styles). At the same time, there are several interventions that lack promotion but enjoy empirical support (e.g., retrieval practice). Are these two categories
of interventions, bunk science (BS) and real science (RS), related to the methodology used in the studies that evaluate them? I will present the results of a study where we compare BS and
RS interventions and whether their support relies on experimental or observational studies.
- Boundaries, Generalizability, and Translating Science to Educational Practice—Benjamin A. Motz, Indiana University Bloomington, USA
Cognitive psychologists are right to
critique educational practices that lack evidence. However, our own tendency toward universal recommendations can undermine our credibility. The challenge of translating cognitive psychology
to education lies not just in what we recommend, but also in our specification of when, where, how, and for whom those recommendations apply. Contextual details matter because teachers find
purely theoretical recommendations difficult to implement — for example, knowing that spacing improves learning doesn't help teachers structure it in practice. Additionally, implementation
details are often highly consequential: spacing can reduce perceived fluency, retrieval practice benefits depend on effort level, and prequestions can become counterproductive when overly
challenging. By articulating boundary conditions like these, we strengthen rather than undermine our recommendations. Our field's contributions to education will be most valuable when we
outline not just what works, but the specific contextual conditions under which interventions thrive — turning potential misinformation into actionable and reliable guidance for educators.
- Smartwatch Learning Reinforcements in the Wild: Using the Terracotta Tool to Rigorously Examine a Real-World Classroom Practice Implementation—Anne M. Cleary, Colorado State
University, USA
A challenge in communicating cognitive psychology to educators concerns what successful
implementations of laboratory findings should look like in real-world classroom settings. Studying real-world implementations of lab findings amid the noise and lack of control of the real
world is tricky. Using the Terracotta Canvas tool, we carried out a rigorous experimental evaluation of a real-world classroom implementation of prior lab findings on smartwatches as learning
reinforcers.
We loaned smartwatches to students in a 100-level science of learning class for the semester. Students subscribed to a three-day-per-week prompting schedule and were
randomly assigned to prompting conditions via the Terracotta tool. A cross-over design was used across topic areas to ensure that all students had equal opportunity to receive all three prompt
conditions (retrieval practice prompts with feedback, restudy prompts, and the no prompt control condition) through counterbalancing across topic areas. Performance was measured by exam topic
area cluster as a function of which topics fell into which prompting conditions for each student. Topics assigned to spaced retrieval practice prompts with feedback received the highest exam
subset scores.
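The counterbalanced cross-over design described above can be sketched as follows. The condition and topic-cluster names are hypothetical stand-ins, and the Latin-square rotation is one simple way to realize such a design, not necessarily the exact scheme implemented in Terracotta.

```python
# Sketch of a counterbalanced cross-over assignment: every student experiences
# all three prompt conditions, rotated across topic-area clusters so that each
# condition is paired with each cluster equally often across students.
# Condition and cluster names are illustrative, not taken from the study.
CONDITIONS = ["retrieval_practice_with_feedback", "restudy", "no_prompt"]
TOPIC_CLUSTERS = ["cluster_A", "cluster_B", "cluster_C"]

def assign_conditions(student_index):
    """Latin-square rotation: student i starts the condition cycle at offset i % 3."""
    offset = student_index % len(CONDITIONS)
    rotated = CONDITIONS[offset:] + CONDITIONS[:offset]
    return dict(zip(TOPIC_CLUSTERS, rotated))

# Across any block of three consecutive students, each topic cluster
# receives each of the three conditions exactly once.
for i in range(3):
    print(i, assign_conditions(i))
```

Because every student still receives all three conditions, between-student differences in overall ability are balanced out when exam performance is scored by topic-area cluster.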
Symposium II: Artificial Intelligence and Human Memory: Advancing Theoretical and Practical Insights Organizer: Travis Seale-Carlisle, University of Aberdeen, UK. Date/Time: Friday, November 21, 2025, 3:45-5:45 PM US MST
Artificial intelligence (AI) is quickly transforming basic and applied research on human memory. This symposium brings together experts to explore AI’s role in refining theoretical models of recognition
memory as well as refining policies related to the collection of eyewitness memory evidence used in criminal proceedings. Ian Dobbins begins with a discussion of signal detection theory and uses
machine-learning classifiers to test the predictions of this foundational model. Jesse Grabman uses machine-learning techniques to isolate the unique diagnostic value of verbal and numeric confidence
expressed in an identification decision. Travis Seale-Carlisle compares several “glass-box” and “black-box” models used to predict the accuracy of an identification decision. Nydia Ayala uses
machine-learning techniques to assess whether simultaneous lineups are superior or inferior to sequential lineups and assess the combined diagnostic value of verbal confidence, numeric confidence,
and response time. Rachel Greenspan discusses whether lineups with AI-generated faces can mitigate the cross-race effect and serve as suitable replacements for lineups with real people. Lastly,
Chad Dodson discusses how investigators can use an AI application to improve their ability to distinguish correct from incorrect eyewitness identification decisions.
- Natural language machine learners explain a misprediction of the signal detection recognition model—Ian Dobbins, Washington University in St Louis, USA
Signal detection
theory predicts that recognition confidence during an initial test (T1) should predict recognition of the same materials during a later test (T2). Supporting this, T1 hit confidence positively
predicted success and certainty during T2, in three studies. However, the model also predicts T1 correct rejection (CR) confidence should negatively predict future recognition because increasing
CR confidence reflects decreasing memory. Instead, a positive relation was found in the same studies. During the final study, natural language justifications were collected for T2 hits
that were previously T1 CRs, and bag-of-words and large language (LL) classifiers easily distinguished recognition justifications for previously high versus low confidence CRs. For former
high confidence CRs, subjects more often reported remembering rejection decisions, particularly for items that were personally distinctive yet failed to evoke remembrance of study. Hence,
the signal detection model failed because distinctiveness heuristics support high confidence rejections while also yielding highly memorable test experiences.
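The signal detection prediction at issue can be sketched in a toy simulation: if CR confidence tracks distance below the old/new criterion, then items rejected with high confidence have the weakest memory strength and should be recognized least often on a later test. All distributions and parameters below are invented for illustration; the talk's point is that real data contradicted this predicted direction.

```python
import random

# Toy simulation of the signal detection prediction described above:
# correct-rejection (CR) confidence at T1 reflects how far an item's strength
# falls below the decision criterion, so high-confidence CRs should be *less*
# recognizable at T2. Strength distributions and the exposure boost are
# illustrative assumptions, not fitted values.
random.seed(1)

def simulate_cr_prediction(n=20000, criterion=0.0, boost=1.0):
    high_conf_t2, low_conf_t2 = [], []
    for _ in range(n):
        strength = random.gauss(0.0, 1.0)            # item strength at T1
        if strength >= criterion:                    # endorsed, so not a CR
            continue
        confidence = criterion - strength            # farther below = more confident
        # Re-testing after T1 exposure adds a strength boost plus noise.
        t2_strength = strength + boost + random.gauss(0.0, 0.5)
        recognized = t2_strength >= criterion
        (high_conf_t2 if confidence > 1.0 else low_conf_t2).append(recognized)
    hi = sum(high_conf_t2) / len(high_conf_t2)
    low = sum(low_conf_t2) / len(low_conf_t2)
    return hi, low

hi, low = simulate_cr_prediction()
# Under the model, high-confidence CRs yield the lower T2 hit rate.
print(f"T2 hit rate: high-confidence CRs {hi:.2f} < low-confidence CRs {low:.2f}")
```

The empirical finding reported in the talk runs in the opposite direction, which is what motivates the distinctiveness-heuristic explanation.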
- Why Do Eyewitnesses' Verbal Statements Contain Added Diagnostic Value?—Jesse Grabman, New Mexico State University, USA
When eyewitnesses make lineup identifications, police
in many countries are instructed to collect confidence statements. While previous research has focused on numeric confidence (e.g., “80% sure”), recent interest has shifted to verbal statements
(e.g., “I’m very certain”). In previous work, we have shown that verbal and numeric confidence provide non-overlapping diagnostic information (Seale-Carlisle et al., 2022). But the cognitive
mechanisms behind this remain unclear. In this presentation, we examine three potential accounts: a single-process account (granularity), an evidence accumulation account (order effects),
and a metacognitive noise account (multiple ratings). We conclude that a single-process account is insufficient, as the grain size of the numeric scale has little effect on the diagnostic
value of verbal confidence (Seale-Carlisle et al., 2024). Contradicting the accumulation account, we find that scale order has little influence on the diagnostic value of confidence ratings.
We suggest that a metacognitive noise account is the most consistent with the evidence so far, though we propose additional studies to test another possible account (recollection sensitivity).
- A comparison of black-box and glass-box artificial intelligence applications for eyewitness identification tasks—Travis Seale-Carlisle, University of Aberdeen, Scotland, UK
This study compares the performance of black-box and glass-box artificial intelligence (AI) models in predicting the accuracy of eyewitness identifications. I use a range of predictors
such as verbal and numeric confidence, response time, and face recognition ability to evaluate a variety of interpretable bag-of-words classifiers (glass-box models) alongside deep-learning
large language models (black-box models). While black-box models are known for their powerful predictive capabilities, they often lack transparency, which limits their utility in high-stakes
legal settings. In contrast, glass-box models offer interpretability, allowing users to understand and scrutinize predictions. Results showed that both model types performed similarly in
predictive accuracy. Notably, a LASSO logistic regression model—a glass-box model—slightly outperformed the more complex black-box alternatives. This finding is significant for applications
within the criminal-legal system, where transparency and accountability are critical. Interpretable models like LASSO not only maintain competitive performance but also align better with
the ethical and practical demands of stakeholders such as police, lawyers, judges, and policymakers.
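The interpretability argument can be made concrete with a minimal sketch: a plain logistic regression fit by gradient descent on synthetic confidence and response-time features, whose fitted weights can be read off directly, which is the defining property of a glass-box model. This is a generic illustration with made-up data, not the authors' LASSO pipeline.

```python
import math
import random

# Glass-box illustration: a logistic regression whose coefficients are
# directly inspectable, unlike a black-box deep model. Features (numeric
# confidence, response time) mirror the predictors named above, but the
# data are synthetic stand-ins.
random.seed(0)

def simulate_identifications(n=500):
    """Synthetic data: accurate IDs tend to carry high confidence and fast responses."""
    rows = []
    for _ in range(n):
        accurate = random.random() < 0.5
        confidence = random.gauss(0.7 if accurate else 0.4, 0.15)
        resp_time = random.gauss(5.0 if accurate else 9.0, 2.0)
        rows.append(([1.0, confidence, resp_time], 1 if accurate else 0))
    return rows

def fit_logistic(data, lr=0.05, epochs=200):
    """Batch gradient descent on the log-loss; returns inspectable weights."""
    w = [0.0, 0.0, 0.0]
    n = len(data)
    for _ in range(epochs):
        grad = [0.0, 0.0, 0.0]
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for j in range(3):
                grad[j] += (p - y) * x[j]
        for j in range(3):
            w[j] -= lr * grad[j] / n
    return w

data = simulate_identifications()
w = fit_logistic(data)
# The sign of each weight is directly interpretable: higher confidence should
# predict accuracy, slower responses should predict error.
print({"intercept": w[0], "confidence": w[1], "response_time": w[2]})
```

A LASSO model adds an L1 penalty that shrinks uninformative weights to exactly zero, which is what makes it attractive when stakeholders need to audit which predictors drive a decision.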
- Beyond the confidence-accuracy relation: Confidence, decision speed, and language distinguish accurate from inaccurate eyewitnesses—Nydia Ayala, Washington and Lee University,
USA
It is well established that confidence and decision speed are useful for sorting accurate from inaccurate suspect identifications. Recent research demonstrates that AI models can use
witness language to distinguish accurate from inaccurate suspect identifications made from simultaneous lineups and that the underlying scores associated with these classifications have incremental
validity over and above confidence and decision speed (Seale-Carlisle et al., 2022). In a large-sample experiment (N = 6,230) we examined whether these findings generalized to rejection decisions
and to sequential lineups. We found evidence of incremental validity. Confidence, decision speed, and language scores distinguished between accurate and inaccurate suspect identifications
and between accurate and inaccurate rejections on both simultaneous and sequential lineups. Whereas accurate decisions were associated with absolute language, inaccurate decisions were associated
with relative language. This work advances theory on why sequential presentation impairs discriminability. Relative to simultaneous lineups, sequential lineups do not increase use of absolute
judgment strategies but likely make it more difficult to determine the strongest match to memory.
- Is There a Cross-Race Effect for AI-Generated Faces?—Rachel Greenspan, University of Mississippi, USA
Generative artificial intelligence (AI) can create highly realistic looking images of people that are often indistinguishable from photos of real people. We sought to explore whether the cross-race effect, a robust finding in which people are more accurate in recognizing same-race compared to cross-race faces, replicates with AI-generated photos. In Study 1, we developed and validated a new set of stimulus materials with both real and AI-generated faces of Black and white adults. In Study 2, we tested whether recognition memory differs based on participant race (Black, white), target race (Black, white), and photo type (real, AI). Results showed a typical cross-race effect with better recognition memory for same-race compared to cross-race faces. This pattern occurred for both real and AI-generated faces. While hit rates were similar for both real and AI-generated faces, false alarm rates were not. Specifically, the false alarm rate (for both Black and white participants) for AI-generated Black faces was substantially higher than the false alarm rate for all other groups.
- A.I. assistance improves people’s ability to distinguish correct from incorrect eyewitness lineup identifications—Chad Dodson, University of Virginia, USA
Mistaken eyewitness
identification is one of the leading causes of false convictions. Improving law enforcement’s ability to identify correct identifications could have profound implications for criminal justice.
I will describe research that shows that AI-assistance can improve people’s ability to distinguish between accurate and inaccurate eyewitness lineup identifications. Participants (Experiment
1: N = 1,092, Experiment 2: N = 1,809) saw an eyewitness’s lineup identification, accompanied by the eyewitness’s verbal confidence statement (e.g., “I’m pretty sure”) and either a featural
(“I remember his eyes”), recognition (“I remember him”), or familiarity (“He looks familiar”) justification. They then judged the accuracy of the eyewitness’s identification. AI-assistance
(vs. no assistance) improved people’s ability to distinguish between correct identifications and misidentifications, but only when they evaluated lineup identifications based on recognition
or featural justifications. Discrimination of identifications based on familiarity justifications showed little improvement with AI-assistance. This project is a first step in evaluating
human-algorithm interactions before widespread use of AI-assistance by law enforcement.
Symposium III: Attention Control in the Wild Organizer: Alexander P. Burgoyne, HumRRO & Georgia Institute of Technology, USA. Date/Time: Saturday, November 22, 2025, 10:00 AM-12:00 PM US MST
This symposium explores attention control—the ability to maintain focus on goal-relevant information while resisting distraction and interference—across diverse real-world contexts. Our multidisciplinary
panel bridges laboratory science with applied research to examine how attention control operates "in the wild" and predicts meaningful outcomes.
Dr. Kimberly Fenn presents groundbreaking research on sleep deprivation's selective impact on attention control. Her team's work with 133 participants demonstrates that one night of sleep deprivation
severely impairs performance on sustained attention tasks requiring extended vigilance, while tasks emphasizing interference control show more modest effects. These findings reveal critical insights into attention
control's selective vulnerability to sleep loss.
Dr. Joe Coyne discusses his research with pilots and air traffic controllers, demonstrating that individual differences in attention control predict training outcomes even after extensive practice
on attention control tests. These results highlight the nature of individual differences in attention control abilities and their persistence despite practice.
Dr. Tim Dunn presents findings on Marines' cognitive resilience during cold-water immersion stress. His research reveals that individual differences in attention control persist even when manual
responses are not required, and that cold stress uniquely affects attention beyond physiological responses associated with skin cooling. Additionally, the Expeditionary Cognitive Science Group
will share their work on attention control in undersea robotics operations, where they found strong correlations between attention control scores and frequency of threat detection errors among
operators (N=50) during unmanned underwater vehicle training exercises—demonstrating how attention control impacts critical task performance in complex technological systems.
Dr. Gene Brewer, Dr. Matt Robison, Phil Peper, and Holly O'Rourke present their innovative 75-day "in-the-wild" assessment merging nomothetic and idiographic approaches to understanding attention
control. Their study of a small cohort (N=4) combines daily health metrics from Fitbit wearables, socioemotional surveys, and cognitive assessments to generate individualized cognitive profiles.
Their findings reveal significant within-person variability in attention control, executive functioning, and working memory, highlighting the value of idiographic analysis in extending attention
control research beyond laboratory settings.
Dr. Jason Tsukahara explores how mindfulness meditation interventions affect attentional processes, offering insights into potential remediation strategies for factors that negatively affect attention
control, such as occupational stress.
Dr. Alex Burgoyne integrates these perspectives from his position at HumRRO, examining implications for non-academic settings and real-world applications.
Together, these diverse approaches illuminate attention control's critical role across contexts and advance our understanding of individual differences in cognitive performance under challenging
conditions.
- The Selective Impact of Sleep Deprivation on Attention Control—Kimberly M. Fenn, Michigan State University, USA
This
presentation examines how sleep loss affects attention control—the ability to maintain focus while resisting distraction. In a large sample, we experimentally manipulated sleep loss; half
of our participants were sleep-deprived for 24 hours and the other half served as rested controls. All participants completed cognitive tests during evening baseline sessions and again the
following morning. Results reveal substantial performance deficits in sleep-deprived individuals during tasks requiring sustained attention, but less robust deficits in tasks assessing inhibitory
control. This work demonstrates a selective pattern of impairment: tasks demanding sustained attention show severe deterioration, while those emphasizing interference control exhibit more
modest effects. This differential impact provides critical insight into attention control mechanisms and their vulnerability to sleep loss. Our team is leveraging these findings to develop
interventions for enhancing attention and mitigating sleep-related cognitive deficits, with implications for contexts where optimal cognitive performance is essential despite challenging
sleep conditions.
- The Impact of Mindfulness Training on Sustained Attention in High-Demand Occupations—Jason Tsukahara, University of Miami, USA
The ability to control attention is critical for success in high-stakes environments, such as military operations.
Yet, attentional control is vulnerable to decline during prolonged high-stress intervals. Emerging evidence suggests mindfulness training improves attentional performance and may mitigate
these vulnerabilities. For mindfulness training to succeed in these settings, it must be delivered with considerations of feasibility, scalability, and context relevance. Our research shows
that a short-form mindfulness-based attention training program (MBAT) can be effectively implemented with these factors in mind and can help reduce attentional vulnerabilities. This presentation
discusses findings from recent projects examining MBAT's impact on attention control and sustained attention in high-demand occupations. Participants either completed a structured MBAT program
or served as no-training controls. Attention performance was measured using tasks assessing attention control and sustained attention, with data collected before and after training.
Our findings suggest mindfulness training can protect against declines in attentional control by promoting sustained attention.
- Effectiveness of attention control as a predictor of performance in Navy Air Traffic Control school—Joe Coyne, Naval Research Laboratory,
USA
This study examined whether attention control could predict training outcomes in the US Navy’s Air Traffic Control (ATC) school. Students qualify for the school and are selected using
one of two different composite scores from the Armed Services Vocational Aptitude Battery (ASVAB). The composites to qualify for the rating are entirely based on crystallized intelligence
(i.e., math, verbal, and mechanical performance). Data were collected from 303 sailors prior to the start of the ATC school. Participants completed three short visual attention tasks and
one short auditory attention task. The measures of attention control predicted grades, number of setbacks, and graduation in the ATC school. Further, they predicted training outcomes above
and beyond the variance predicted by the ASVAB. Interestingly, while visual attention correlated with grades and setbacks, the auditory attention measure was the only one that predicted graduation.
The results demonstrate the value of measures of attention in predicting training outcomes in cognitively demanding military careers.
- Extreme Cold Stress Has Unique Effects on Vigilance Beyond Cold Hands—Timothy Dunn, Naval Health Research Center, USA
The effects of cold on cognition have been studied for 50 years through various exposure types and tasks. Cold has well-known effects on physiology and motor responses, particularly
impairing hand dexterity as blood flow diverts from extremities to maintain core temperature. Most vigilance studies during cold exposure have required manual responses (e.g., button presses),
creating a potential confound when interpreting cold's unique effects on vigilance. Our previous work demonstrated significant individual differences in psychomotor vigilance task (PVT) performance
during 10-minute cold-water immersion (CWI) in near-freezing water, with lower hand temperatures predicting worse performance. The current study investigated whether these vigilance decrements
persist when using an auditory stimulus/response PVT, eliminating manual response requirements. Results showed that individual differences in response stability and flexibility persisted
even without manual responses. Surprisingly, these differences weren't associated with hand temperature as previously observed. This suggests cold stress uniquely affects vigilance beyond
hand dexterity impairment, indicating complex mechanisms underlying cold-related attentional decrements.
- Attention Control Predicts Error Rates in Undersea Robotics Operations—Brandon Schrom, Naval Health Research Center, USA
Interaction with complex technological systems to achieve task objectives, especially in operational contexts, requires attention control—the ability to focus on goal-relevant information
while resisting distraction from external events (e.g., a loud noise) and internal thoughts (e.g., thinking about yesterday) (Burgoyne & Engle, 2020). Using three-minute “Squared” tasks
(Burgoyne et al., 2023), we assessed attention control in operators (N=50) during multi-day unmanned underwater vehicle training exercises. Instructors independently recorded type (mission
planning, vehicle operation, safety, and threat detection visual search) and frequency of errors. Preliminary analyses revealed strong correlations between “squared” attention control scores
and frequency of threat detection errors, with higher attention control relating to fewer errors. These findings suggest attention control impacts critical task performance in undersea robotics
operations.
- Merging Nomothetic and Idiographic Approaches to Better Understand Variation in Attention Control—Gene Brewer, University of California, Riverside, USA
Understanding attention control variation requires examining both general trends and individual differences in real-world contexts. We conducted a 75-day assessment of four participants,
investigating relationships between cognitive performance, health, and socioemotional factors. Daily evaluations combined Fitbit metrics, surveys, and cognitive tasks measuring attention
control (psychomotor vigilance), executive functioning (flanker task), and working memory (change localization). Using Group Iterative Multiple Model Estimation, we created individualized
cognitive profiles capturing daily fluctuations in these abilities and overall well-being. Results revealed significant within-person variability and unique performance patterns, highlighting
the value of idiographic analysis in cognitive psychology. By extending research beyond laboratory settings, this study provides rich insights that can inform personalized strategies for
enhancing everyday cognitive performance. Our approach bridges nomothetic and idiographic methods to better understand how attention control operates in natural environments, offering a more
complete picture of cognitive functioning in daily life.
Symposium IV: Understanding Spontaneous Thoughts: Methodological Approach, Clinical Relevance, and their Functional and Neural Signatures Organizer: Julia Kam, University of Calgary, Canada. Date/Time: Saturday, November 22, 2025, 1:30-3:30 PM US MST
Spontaneous thought is a key aspect of the human experience. These thoughts occupy up to 50% of our waking hours and have a widespread impact on our everyday life, ranging from improving creativity
to disrupting task performance to modulating our affective well-being. In the current zeitgeist dominated by a scientific focus on externally oriented processes, the study of spontaneous cognition
faces a unique challenge: how do we study this “invisible” phenomenon and its associated functional and neural correlates? In this symposium, speakers will present state-of-the-art research on
spontaneous thought designed to tackle this challenge, focusing on uncovering the psychological and neural mechanisms underlying spontaneous thought. Our topics include establishing the varieties
of everyday thoughts and their clinical relevance (Dr. Jessica Andrews Hanna), electrophysiological correlates of naturally occurring thought patterns (Dr. Julia Kam), assessing the dynamics
of thoughts using a new methodological approach (Dr. Caitlin Mills), and uncovering the functional and neural correlates of spontaneity and automaticity in thought (Dr. Kalina Christoff Hadjiilieva). Together,
our symposium will highlight different experimental and analytic approaches that aim to set the foundation for a more comprehensive understanding of the psychological and neural basis of spontaneous
thoughts moving forward.
- Electrophysiological Correlates of Naturally Occurring Thought Patterns—Julia Kam, University of Calgary, Canada
Humans experience a continuous stream of thoughts throughout
the day. These thoughts differ based on the context in which they occur and the experiences we have, yet little is known about the electrophysiological signatures of these thought patterns.
To address this gap, we examined the electrophysiological correlates of thought patterns in internally (thought focus condition) and externally (video focus condition) oriented contexts, during
which participants were asked to report on various dimensions of thoughts. We identified four thought patterns, which were differentially associated with the experimental conditions and EEG spectral
measures. For example, present external thought was more closely linked to the external condition and increased central alpha and posterior high beta. Internally-oriented verbal thoughts
were more strongly associated with the internal condition as well as increased centro-parietal low beta. Together, our results indicate that naturally occurring thought patterns that differentially
emerge during internally versus externally oriented naturalistic contexts are associated with unique electrophysiological patterns. They highlight the importance of considering context when
assessing ongoing thoughts.
- Assessing Thought Dynamics Using the Thinking Grid—Caitlin Mills, University of Minnesota, USA
The study of thoughts has largely been focused on task-centric experiences
such as stimulus-independent thought, measured on one-dimensional scales, which often fail to capture the full spectrum of our thought dynamics. We propose a two-dimensional tool called the
Thinking Grid (TG), which measures thought along two axes (executive control and salience), and we validated the measure across three studies. Participants in Study 1 read curated
vignettes in which a proxy experienced different thought types. Participants correctly identified the type of thought and reported it on the grid. In Study 2, participants watched a video
in VR in two constraint conditions; the TG was better able to capture differences in thought across these conditions compared to unidimensional questions. Finally, in Study 3, participants
were induced to experience positive or negative valence. Both positive and negative valence mood inductions increased salience whereas neutral valence increased focus. Together, these results
suggest that participants can classify their streams of consciousness on the grid, TG can capture thought dynamics that elude typical unidimensional scales, and TG has predictive validity
when looking at its relation to valence.
- Varieties of Everyday Thoughts across the Lifespan—Jessica Andrews-Hanna, University of Arizona, USA
Although the last several decades have brought major advancements in
our understanding of internally-guided cognition, much of our knowledge about humans’ inner mental life is derived from experimentally-prompted studies conducted in laboratory environments.
In contrast, the literature has yet to paint a full picture of what people think about in everyday life, as well as when such thoughts occur, and how such thoughts vary between
individuals. In this talk, I will provide an overview of our laboratory’s recent efforts to quantify the varieties of everyday thought across the adult lifespan, and the clinical relevance
of such thoughts. By presenting data from 4000+ adults derived from our lab’s recent Ecological Momentary Assessment app, I will review findings that both corroborate and contradict those
observed in traditional laboratory-based studies. I will emphasize the practical and predictive utility of measuring thoughts in real-world contexts and conclude by raising challenges to
some of the literature’s most widely-accepted findings emerging from laboratory-based research.
- Spontaneity and Automaticity in Thought in relation to Precision-Defined Default Network Subsystems—Kalina Christoff Hadjiilieva, University of British Columbia, Canada
Recent precision-fMRI
findings reveal two distinct default networks (DNs). How do these DNs relate to spontaneity and automaticity in thought, as distinguished within the Dynamic Framework of Thought? Our findings
with highly experienced meditators suggest that subcortical and cortical interactions may be a distinctive feature of spontaneous thoughts. We observed a spread of activation from subcortical
DN-A structures to cortical DN regions, during the seconds preceding subjective reports of spontaneously arising thoughts in highly experienced meditators. Less experienced meditators report
fewer spontaneously arising thoughts and do not reliably recruit subcortical DN-A regions prior to spontaneous thought reports, although they show robust recruitment of cortical DN regions.
Our precision fMRI data reveal differential recruitment across DN subsystems, with these subsystems interacting more strongly during automatically constrained thought than during spontaneous
thought. Overall, the distinction between automatically constrained and spontaneous thought may explain distinct features of brain network organization and contribute to improving understanding
of clinically-relevant alterations of thought such as rumination and anxiety.
Questions?
Contact Member Services. Office Hours: Monday through Friday, 8:30 a.m. to 5:00 p.m. CT (U.S. Central Time)