Robert J. Robbins is a biologist, an educator, a science administrator, a publisher, an information technologist, and an IT leader and manager who specializes in advancing biomedical knowledge and supporting education through the application of information technology.
RJR: Recommended Bibliography
Created: 04 Dec 2025 at 01:40
Brain-Computer Interface
Wikipedia: A brain–computer interface (BCI), sometimes called a neural control interface (NCI), mind–machine interface (MMI), direct neural interface (DNI), or brain–machine interface (BMI), is a direct communication pathway between an enhanced or wired brain and an external device. BCIs are often directed at researching, mapping, assisting, augmenting, or repairing human cognitive or sensory-motor functions. Research on BCIs began in the 1970s at the University of California, Los Angeles (UCLA) under a grant from the National Science Foundation, followed by a contract from DARPA. The papers published after this research also mark the first appearance of the expression brain–computer interface in scientific literature.

BCI-effected sensory input: Due to the cortical plasticity of the brain, signals from implanted prostheses can, after adaptation, be handled by the brain like natural sensor or effector channels. Following years of animal experimentation, the first neuroprosthetic devices implanted in humans appeared in the mid-1990s.

BCI-effected motor output: When artificial intelligence is used to decode neural activity, then send that decoded information to some kind of effector device, BCIs have the potential to restore communication to people who have lost the ability to move or speak. To date, the focus has largely been on motor skills such as reaching or grasping. However, in May 2021 a study showed that an AI/BCI system could be used to translate thoughts about handwriting into the output of legible characters at a usable rate (90 characters per minute with 94% accuracy).
Created with PubMed® Query: (bci OR (brain-computer OR brain-machine OR mind-machine OR neural-control interface) NOT 26799652[PMID] ) NOT pmcbook NOT ispreviousversion
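The query above can be reproduced programmatically against NCBI's public E-utilities ESearch endpoint. A minimal sketch follows; the endpoint and the `db`/`term`/`retmode` parameters are NCBI's documented API, while the `retmax` value is an illustrative choice:

```python
from urllib.parse import urlencode

# NCBI E-utilities ESearch endpoint (public API).
EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

# The query string used to build this bibliography, verbatim.
QUERY = ("(bci OR (brain-computer OR brain-machine OR mind-machine "
         "OR neural-control interface) NOT 26799652[PMID] ) "
         "NOT pmcbook NOT ispreviousversion")

def build_esearch_url(term: str, retmax: int = 100) -> str:
    """Build an ESearch URL that returns matching PMIDs as JSON."""
    params = urlencode({"db": "pubmed", "term": term,
                        "retmode": "json", "retmax": retmax})
    return f"{EUTILS_ESEARCH}?{params}"

url = build_esearch_url(QUERY)
# Fetching `url` (e.g. with urllib.request.urlopen) yields a JSON body whose
# esearchresult.idlist field contains the PMIDs collected below.
```
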
Citations: The Papers (from PubMed®)
RevDate: 2025-12-03
Microglial phagoptosis in development, health, and disease.
Neurobiology of disease pii:S0969-9961(25)00428-0 [Epub ahead of print].
Microglial phagoptosis, defined as the phagocytosis of a viable cell by microglia that ultimately causes the death of the engulfed cell, has emerged as a pivotal process in sculpting neural circuits within the central nervous system (CNS). Essential for neurodevelopmental circuit refinement and ongoing tissue homeostasis, this process relies on dynamic molecular cues that direct microglia to specific cellular substrates. Physiologically, phagoptosis contributes to neural circuit refinement and cell number regulation during development; however, its dysregulation can drive neurodevelopmental and neurodegenerative disorders via aberrant cell removal. Recent advances have elucidated the distinct signaling pathways involved in target recognition and engulfment, revealing the dual roles of microglial phagoptosis in both CNS health and disease. Deeper mechanistic insight into this process offers new therapeutic opportunities for conditions characterized by defective or excessive cell clearance. This review summarizes current progress, highlights unresolved challenges, and discusses future perspectives on targeting microglial phagoptosis for intervention in CNS disorders.
Additional Links: PMID-41338361
@article {pmid41338361,
year = {2025},
author = {Li, Y and Chen, S and Liu, YJ},
title = {Microglial phagoptosis in development, health, and disease.},
journal = {Neurobiology of disease},
volume = {},
number = {},
pages = {107211},
doi = {10.1016/j.nbd.2025.107211},
pmid = {41338361},
issn = {1095-953X},
abstract = {Microglial phagoptosis, defined as the phagocytosis of a viable cell by microglia that ultimately causes the death of the engulfed cell, has emerged as a pivotal process in sculpting neural circuits within the central nervous system (CNS). Essential for neurodevelopmental circuit refinement and ongoing tissue homeostasis, this process relies on dynamic molecular cues that direct microglia to specific cellular substrates. Physiologically, phagoptosis contributes to neural circuit refinement and cell number regulation during development; however, its dysregulation can drive neurodevelopmental and neurodegenerative disorders via aberrant cell removal. Recent advances have elucidated the distinct signaling pathways involved in target recognition and engulfment, revealing the dual roles of microglial phagoptosis in both CNS health and disease. Deeper mechanistic insight into this process offers new therapeutic opportunities for conditions characterized by defective or excessive cell clearance. This review summarizes current progress, highlights unresolved challenges, and discusses future perspectives on targeting microglial phagoptosis for intervention in CNS disorders.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Developing Lightweight Models with Data Optimization for Attending Speaker Identity from EEG without Spatial Information.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Spatial auditory attention decoding (Sp-AAD) holds great promise for brain-computer interfaces (BCIs). However, studies have shown that the high performance of Sp-AAD relies heavily on eye gaze artifacts rather than actual auditory attention features. For this reason, this study focuses on verifying whether EEG signals contain sufficient discriminative features for attending target speaker identity without eye gaze artifacts. In this study, we proposed an EEG-Mixup data optimization method to suppress trial-specific features in EEG data by adjusting the data distribution and generating soft labels through linear interpolation. In addition, a lightweight EEG-MLP model containing only 2.5k parameters was designed, which showed significant advantages over the latest SOTA model (DenseNet-3D) in cross-trial scenarios. It is shown that the model's generalization ability can be significantly improved by optimizing the data without increasing the data volume; meanwhile, the lightweight model demonstrates higher computational efficiency and inference speed in specific tasks. This study provides important theoretical and practical references for future optimization applications of BCI systems. Clinical Relevance: This study demonstrates the potential of lightweight EEG-based methods for attending target speaker identity without relying on eye gaze artifacts, providing a foundation for future auditory brain-computer interface systems.
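The soft-label interpolation described in this abstract follows the general mixup recipe. The authors' exact EEG-Mixup procedure is not specified here, so the following is only an illustrative sketch of standard mixup applied to (trial, label) pairs; the array shapes, the one-hot labels, and the Beta parameter alpha are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two trials and their one-hot labels with a Beta-sampled weight,
    yielding an interpolated trial and a soft label."""
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2   # interpolated EEG trial
    y = lam * y1 + (1.0 - lam) * y2   # soft label in [0, 1]
    return x, y

# Two synthetic EEG trials: 64 channels x 128 samples, binary one-hot labels.
xa, xb = rng.standard_normal((64, 128)), rng.standard_normal((64, 128))
ya, yb = np.array([1.0, 0.0]), np.array([0.0, 1.0])
xm, ym = mixup(xa, ya, xb, yb)
# ym is a two-element soft label whose entries sum to 1.
```
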
Additional Links: PMID-41337436
@article {pmid41337436,
year = {2025},
author = {Ding, Y and Wang, L and Wang, X and Chen, F},
title = {Developing Lightweight Models with Data Optimization for Attending Speaker Identity from EEG without Spatial Information.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11253106},
pmid = {41337436},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; Humans ; *Brain-Computer Interfaces ; Algorithms ; *Attention/physiology ; Signal Processing, Computer-Assisted ; Male ; Adult ; Artifacts ; },
abstract = {Spatial auditory attention decoding (Sp-AAD) holds great promise for brain-computer interfaces (BCIs). However, studies have shown that the high performance of Sp-AAD relies heavily on eye gaze artifacts rather than actual auditory attention features. For this reason, this study focuses on verifying whether EEG signals contain sufficient discriminative features for attending target speaker identity without eye gaze artifacts. In this study, we proposed an EEG-Mixup data optimization method to suppress trial-specific features in EEG data by adjusting the data distribution and generating soft labels through linear interpolation. In addition, a lightweight EEG-MLP model containing only 2.5k parameters was designed, which showed significant advantages over the latest SOTA model (DenseNet-3D) in cross-trial scenarios. It is shown that the model's generalization ability can be significantly improved by optimizing the data without increasing the data volume; meanwhile, the lightweight model demonstrates higher computational efficiency and inference speed in specific tasks. This study provides important theoretical and practical references for future optimization applications of BCI systems.Clinical Relevance- This study demonstrates the potential of lightweight EEG-based methods for attending target speaker identity without relying on eye gaze artifacts, providing a foundation for future auditory brain-computer interface systems.},
}
MeSH Terms:
*Electroencephalography/methods
Humans
*Brain-Computer Interfaces
Algorithms
*Attention/physiology
Signal Processing, Computer-Assisted
Male
Adult
Artifacts
RevDate: 2025-12-03
CmpDate: 2025-12-03
Tri-Model Integration: Advancing Breast Cancer Immunohistochemical Image Generation through Multi-Method Fusion.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Immunohistochemical (IHC) staining is a crucial technique for diagnosing and formulating treatment plans for breast cancer, particularly by evaluating the expression of biomarkers like human epidermal growth factor receptor-2. However, the high cost and complexity of IHC staining procedures have driven research toward generating IHC-stained images directly from more readily available Hematoxylin and Eosin-stained images using image-to-image (I2I) translation methods. In this work, we propose a novel approach that combines the predictive capabilities of three state-of-the-art I2I models to enhance the quality and reliability of synthetic IHC images. Specifically, we designed a Convolutional Neural Network that takes a four-dimensional input comprising the outputs of three distinct models (each contributing an RGB three-dimensional IHC prediction) and produces a final consensus image through a fusion mechanism. This ensemble method leverages the strengths of each model, leading to more robust and accurate IHC image generation. Extensive experiments on the BCI dataset demonstrate that our approach outperforms existing single-model methods, achieving superior Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) metrics. All of our code is available at: https://github.com/arshamhaq/BCI-fusion. Clinical Relevance: Improving the quality of synthetic IHC images can potentially reduce costs and streamline the diagnostic process, ultimately benefiting patient outcomes.
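Of the two metrics reported here, PSNR has a simple closed form (SSIM is more involved and is typically taken from an image library such as scikit-image). A minimal sketch, assuming 8-bit images:

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally shaped images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((4, 4), dtype=np.uint8)
b = np.ones((4, 4), dtype=np.uint8)   # off by exactly 1 everywhere -> MSE = 1
print(round(psnr(a, b), 2))           # -> 48.13 (dB, for 8-bit images)
```
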
Additional Links: PMID-41337381
@article {pmid41337381,
year = {2025},
author = {Haqiqat, A and Karimi, N and Mirmahboub, B and Sobhaninia, Z and Shirani, S and Samavi, S},
title = {Tri-Model Integration: Advancing Breast Cancer Immunohistochemical Image Generation through Multi-Method Fusion.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11252716},
pmid = {41337381},
issn = {2694-0604},
mesh = {Humans ; *Breast Neoplasms/diagnostic imaging/metabolism/diagnosis ; Female ; *Immunohistochemistry/methods ; *Image Processing, Computer-Assisted/methods ; Neural Networks, Computer ; Algorithms ; Reproducibility of Results ; },
abstract = {Immunohistochemical (IHC) staining is a crucial technique for diagnosing and formulating treatment plans for breast cancer, particularly by evaluating the expression of biomarkers like human epidermal growth factor receptor-2. However, the high cost and complexity of IHC staining procedures have driven research toward generating IHC-stained images directly from more readily available Hematoxylin and Eosin-stained images using image-to-image (I2I) translation methods. In this work, we propose a novel approach that combines the predictive capabilities of three state-of-the-art I2I models to enhance the quality and reliability of synthetic IHC images. Specifically, we designed a Convolutional Neural Network that takes as input a four-dimensional input comprising the outputs of three distinct models (each contributing an IHC prediction, which is an RGB three-dimensional output for each) and produces a final consensus image through a fusion mechanism. This ensemble method leverages the strengths of each model, leading to more robust and accurate IHC image generation. Extensive experiments on the BCI dataset demonstrate that our approach outperforms existing single-model methods, achieving superior Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) metrics. All of our code is available at: https://github.com/arshamhaq/BCI-fusion.Clinical RelevanceImproving the quality of synthetic IHC images can potentially reduce costs and streamline the diagnostic process, ultimately benefiting patient outcomes.},
}
MeSH Terms:
Humans
*Breast Neoplasms/diagnostic imaging/metabolism/diagnosis
Female
*Immunohistochemistry/methods
*Image Processing, Computer-Assisted/methods
Neural Networks, Computer
Algorithms
Reproducibility of Results
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Brain Switch for SSVEP-Based BCI Speller Using an RNN-Based Detection Approach.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Steady-state visual evoked potentials (SSVEP)-based brain-computer interface (BCI) systems are commonly used as spellers because they have a high information transfer rate and high accuracy relative to other BCI paradigms. Asynchronous BCI systems allow users to input commands whenever they wish to use them, which may make these systems more realistic and practical than synchronous systems. However, asynchronous BCIs, known as the Brain Switch, require robust mechanisms to detect users' intentions accurately while maintaining classification performance. This highlights the need for a BCI system that distinguishes users' intentions reliably. SSVEP paradigms often show variability in their frequency designs. In this study, we propose a two-stage asynchronous BCI system that combines a robust brain switch model that uses autocorrelation and Long Short-Term Memory (LSTM) for detection and an EEGNet-based classifier. Our proposed system was evaluated using a 40-class SSVEP dataset involving 40 subjects. It achieved an impressive detection performance with a sensitivity (SEN) of 98.24 ± 2.21% and specificity (SPC) of 82.28 ± 11.63% for even 1-second epochs. Further, the system attained a classification accuracy (ACC) of 77.05 ± 14.95%. This model demonstrates significant potential to help develop more realistic and practical asynchronous BCI systems.
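The brain-switch stage pairs autocorrelation features with an LSTM. The authors' feature definition is not given here, but the core idea (a periodic SSVEP response produces autocorrelation peaks at lags matching the stimulus period) can be sketched as follows; the sampling rate, stimulus frequency, and minimum-lag cutoff are illustrative assumptions:

```python
import numpy as np

def dominant_period(x, fs, min_lag=5):
    """Return the lag (samples) of the strongest autocorrelation peak and
    the corresponding frequency estimate for a (quasi-)periodic signal."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    ac /= ac[0]                                          # normalize: r(0) = 1
    # Skip tiny lags (trivially high correlation) and search up to half length.
    lag = min_lag + int(np.argmax(ac[min_lag:len(x) // 2]))
    return lag, fs / lag

fs = 250.0                               # sampling rate in Hz (assumed)
t = np.arange(500) / fs
x = np.sin(2 * np.pi * 10.0 * t)         # 10 Hz SSVEP-like oscillation
lag, f_est = dominant_period(x, fs)
# lag == 25 samples, f_est == 10.0 Hz
```
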
Additional Links: PMID-41337376
@article {pmid41337376,
year = {2025},
author = {Kim, H and Ahn, M and Jun, SC},
title = {A Brain Switch for SSVEP-Based BCI Speller Using an RNN-Based Detection Approach.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11252734},
pmid = {41337376},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; *Evoked Potentials, Visual/physiology ; Electroencephalography/methods ; *Brain/physiology ; Algorithms ; Signal Processing, Computer-Assisted ; Male ; Adult ; *Neural Networks, Computer ; Female ; },
abstract = {Steady-state visual evoked potentials (SSVEP)-based brain-computer interface (BCI) systems are used commonly as spellers because they have high information transfer rate and high accuracy relative to other BCI paradigms. Asynchronous BCI systems allow users to input commands whenever they wish to use them, which may make these systems more realistic and practical than synchronous systems. In contrast, asynchronous BCIs, known as the Brain Switch, require robust mechanisms to detect users' intentions accurately while maintaining classification performance. This highlights the need for a BCI system that distinguishes users' intentions reliably. SSVEP paradigms often show variability in their frequency designs. In this study, we propose a two-stage asynchronous BCI system that combines a robust brain switch model that uses autocorrelation and Long Short-Term Memory (LSTM)) for detection and an EEGNet-based classifier. Our proposed system was evaluated using a 40-class SSVEP dataset involving 40 subjects. It achieved an impressive detection performance with a sensitivity (SEN) of 98.24 ± 2.21% and specificity (SPC) of 82.28 ± 11.63% for even 1-second epochs. Further, the system attained a classification accuracy (ACC) of 77.05 ± 14.95%. This model demonstrates significant potential to help develop more realistic and practical asynchronous BCI systems.},
}
MeSH Terms:
Humans
*Brain-Computer Interfaces
*Evoked Potentials, Visual/physiology
Electroencephalography/methods
*Brain/physiology
Algorithms
Signal Processing, Computer-Assisted
Male
Adult
*Neural Networks, Computer
Female
RevDate: 2025-12-03
CmpDate: 2025-12-03
Neural Dynamics in Imagined Speech: A Spatiotemporal Analysis Based on EEG Source Localization and Functional Connectivity.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Communication is a crucial part of daily life. However, patients with speech disorders may have difficulty communicating with the outside world and, in severe cases, may even completely lose the ability to speak. Imagined speech is an intrinsic speech activity that does not explicitly move any vocal organs, which has emerged as a promising avenue for brain-computer interface (BCI) research. In this study, we developed a novel experimental paradigm tailored to imagined speech tasks based on Chinese characters and collected participants' high-temporal-resolution electroencephalogram (EEG) data. Using dynamic statistical parametric mapping (dSPM), we delineated the spatial distribution of neural activation, while functional connectivity was quantified through phase-locking value (PLV) analysis to capture the temporal interplay between distinct brain regions. We introduced a novel spatiotemporal feature representation, termed information flow (IF). By segmenting the imagined speech process into 10 continuous temporal windows, we systematically analyzed the evolution of global and local information flow dynamics. The results revealed distinct spatiotemporal patterns of neural activation and functional connectivity, underscoring the coordinated interaction of critical brain regions involved in the process of imagined speech, which help to elucidate the spatiotemporal dynamics of imagined speech and provide valuable insights into its underlying neural mechanisms. This work provides a foundation for advancing speech BCI applications and contributes to understanding the cognitive and neural bases of imagined speech in Chinese.
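The PLV measure used here has a standard definition: the magnitude of the mean unit phasor of the instantaneous phase difference between two signals. A minimal sketch using an FFT-based analytic signal (equivalent to scipy.signal.hilbert); the synthetic sinusoids stand in for EEG source time courses:

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (same result as scipy.signal.hilbert)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def plv(x, y):
    """Phase-locking value: |mean(exp(i * phase difference))|, in [0, 1]."""
    dphi = np.angle(analytic_signal(x)) - np.angle(analytic_signal(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

fs = 250.0
t = np.arange(500) / fs
locked = plv(np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 10 * t + 0.8))
# A constant phase offset gives a PLV near 1; independent noise gives a
# value near 0.
```
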
Additional Links: PMID-41337322
@article {pmid41337322,
year = {2025},
author = {Zhao, R and Zhang, S and Bai, Y and Ni, G},
title = {Neural Dynamics in Imagined Speech: A Spatiotemporal Analysis Based on EEG Source Localization and Functional Connectivity.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11254701},
pmid = {41337322},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Speech/physiology ; Male ; *Imagination/physiology ; Brain-Computer Interfaces ; Female ; Adult ; Spatio-Temporal Analysis ; *Brain/physiology ; Brain Mapping/methods ; Young Adult ; },
abstract = {Communication is a crucial part of daily life. However, patients with speech disorders may have difficulty communicating with the outside world and, in severe cases, may even completely lose the ability to speak. Imagined speech is an intrinsic speech activity that does not explicitly move any vocal organs, which has emerged as a promising avenue for brain-computer interface (BCI) research. In this study, we developed a novel experimental paradigm tailored to imagined speech tasks based on Chinese characters and collected participants' high-temporal-resolution electroencephalogram (EEG) data. Using dynamic statistical parametric mapping (dSPM), we delineated the spatial distribution of neural activation, while functional connectivity was quantified through phase-locking value (PLV) analysis to capture the temporal interplay between distinct brain regions. We introduced a novel spatiotemporal feature representation, termed information flow (IF), by segmenting the imagined speech process into 10 continuous temporal windows, we systematically analyzed the evolution of global and local information flow dynamics. The results revealed distinct spatiotemporal patterns of neural activation and functional connectivity, underscoring the coordinated interaction of critical brain regions involved in the process of imagined speech, which help to elucidate the spatiotemporal dynamics of imagined speech and provide valuable insights into its underlying neural mechanisms. This work provides a foundation for advancing speech BCI applications and contributes to understanding the cognitive and neural bases of imagined speech in Chinese.},
}
MeSH Terms:
Humans
*Electroencephalography/methods
*Speech/physiology
Male
*Imagination/physiology
Brain-Computer Interfaces
Female
Adult
Spatio-Temporal Analysis
*Brain/physiology
Brain Mapping/methods
Young Adult
RevDate: 2025-12-03
CmpDate: 2025-12-03
Foresee: A Modular and Open Framework to Explore Integrated Processing on Brain-Computer Interfaces.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Brain-computer interfaces (BCIs) with processing integrated on the device enable fast and autonomous closed-loop interaction with the brain. While such BCIs are rapidly gaining traction, they are also difficult to design due to the tight and conflicting power and performance needs of on-device processing. Meeting these specifications often requires the BCI processors to be co-designed with applications and algorithms, with processor designers and computational neuroscientists working closely to converge on the target hardware platform. But, this process has traditionally been cumbersome and ad hoc, due to the lack of systematic design space exploration frameworks. In response, we present Foresee, a new framework for fast exploration of BCI processors. Foresee offers a unified and modular interface for iteratively co-optimizing BCI processors with their algorithms, without sacrificing accuracy, speed, or ease of use. Foresee is publicly available, and comes with a library of hardware blocks for common signal processing functions that the community could contribute and build on. We demonstrate Foresee's utility and capability by analyzing on-device processing for two seizure detection methods from prior work, and validating our analysis on real hardware. We expect Foresee to be vital in designing next-generation BCIs.
Additional Links: PMID-41337318
@article {pmid41337318,
year = {2025},
author = {Yadav, A and Garcia, FC and Gonzalez, A and Trevisan, BE and Xu, A and Ugur, M and Bhattacharjee, A and Pothukuchi, RP},
title = {Foresee: A Modular and Open Framework to Explore Integrated Processing on Brain-Computer Interfaces.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254710},
pmid = {41337318},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; Algorithms ; *Signal Processing, Computer-Assisted ; Electroencephalography ; },
abstract = {Brain-computer interfaces (BCIs) with processing integrated on the device enable fast and autonomous closed-loop interaction with the brain. While such BCIs are rapidly gaining traction, they are also difficult to design due to the tight and conflicting power and performance needs of on-device processing. Meeting these specifications often requires the BCI processors to be co-designed with applications and algorithms, with processor designers and computational neuroscientists working closely to converge on the target hardware platform. But, this process has traditionally been cumbersome and ad hoc, due to the lack of systematic design space exploration frameworks. In response, we present Foresee, a new framework for fast exploration of BCI processors. Foresee offers a unified and modular interface for iteratively co-optimizing BCI processors with their algorithms, without sacrificing accuracy, speed, or ease of use. Foresee is publicly available, and comes with a library of hardware blocks for common signal processing functions that the community could contribute and build on. We demonstrate Foresee's utility and capability by analyzing on-device processing for two seizure detection methods from prior work, and validating our analysis on real hardware. We expect Foresee to be vital in designing next-generation BCIs.},
}
MeSH Terms:
*Brain-Computer Interfaces
Humans
Algorithms
*Signal Processing, Computer-Assisted
Electroencephalography
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Window Analysis for the Decoding of Premovement and Movement Intentions in Freewill EEG.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Decoding movement-related intentions from electroencephalogram (EEG) is important for developing real-time brain machine interfaces (BMIs). While most studies focus on cue-based tasks in EEG-based BMIs, freewill reaching and grasping tasks allow subjects to initiate movements of their own will, making them relevant to practical EEG-based BMIs. However, the investigation of EEG window size for decoding freewill movements remains unexplored. This study systematically analyzes the effect of different window sizes on decoding EEG premovement (prior to the movement onset) and movement (after movement initiation) intentions in freewill reaching and grasping tasks. We used 49 EEG recordings from 23 subjects, and EEG windows of 0.1-1s in 0.1s increments were analyzed within the range of -3 to 3s relative to the movement onset at 0. Decoding was performed using regularized linear support vector machine (LSVM) and regularized linear discriminant analysis (RLDA), and performance was evaluated in terms of accuracy. Larger window sizes consistently outperformed smaller ones, with peak accuracy occurring between 0-1s relative to the movement onset. LSVM outperformed RLDA across all 10 window sizes, with peak accuracy ranging from 86.98% with 0.1s window to 90.94% with 1s window. Using LSVM, the earliest peak accuracy (90.03%) was achieved with a 0.7s window starting at 0.35s after the movement onset. Notably, a 0.5s window provided a peak accuracy of 89.5%, which is not statistically significant compared to the 0.7s window (p = 0.05). The start point of the 0.5s window was 0.5s after the onset. With LSVM, considering the trade-off between decoding accuracy and latency, the 0.5s window offers the optimal choice for decoding movement intention in freewill EEG. Clinical relevance: Identifying the optimal window size to decode movement-related intentions in freewill EEG can help improve strategies to develop real-time BMIs for individuals with motor impairments.
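The windowing scheme described (0.1-1 s windows placed within -3 to 3 s around movement onset) amounts to a simple slicing helper over onset-aligned epochs. A sketch, where the channel count and sampling rate are illustrative choices rather than the study's actual preprocessing:

```python
import numpy as np

def extract_window(epoch, fs, start_s, width_s):
    """Slice one analysis window from an epoch spanning -3..3 s around onset.

    epoch:   (channels, samples) array with movement onset at t = 0,
             i.e. sample index 3 * fs.
    start_s: window start relative to onset, in seconds (may be negative).
    width_s: window length in seconds.
    """
    onset = int(3 * fs)                    # epoch begins at -3 s
    i0 = onset + int(round(start_s * fs))
    i1 = i0 + int(round(width_s * fs))
    return epoch[:, i0:i1]

fs = 100                                   # sampling rate in Hz (assumed)
epoch = np.random.default_rng(0).standard_normal((32, 6 * fs))  # 32 ch, -3..3 s
w = extract_window(epoch, fs, start_s=0.5, width_s=0.5)
# w.shape == (32, 50): the 0.5 s window starting 0.5 s after onset.
```
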
Additional Links: PMID-41337309
@article {pmid41337309,
year = {2025},
author = {Thapa, BR and Bae, J},
title = {A Window Analysis for the Decoding of Premovement and Movement Intentions in Freewill EEG.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11253481},
pmid = {41337309},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; Movement/physiology ; Male ; Female ; Support Vector Machine ; *Brain-Computer Interfaces ; Adult ; *Intention ; Young Adult ; Signal Processing, Computer-Assisted ; },
abstract = {Decoding movement-related intentions from electroencephalogram (EEG) is important for developing real-time brain machine interfaces (BMIs). While most studies focus on cue-based tasks in EEG-based BMIs, freewill reaching and grasping tasks allow subjects to initiate movements of their own will, making them relevant to practical EEG-based BMIs. However, the investigation of EEG window size for decoding freewill movements remains unexplored. This study systematically analyzes the effect of different window sizes on decoding EEG premovement (prior to the movement onset) and movement (after movement initiation) intentions in freewill reaching and grasping tasks. We used 49 EEG recordings from 23 subjects, and EEG windows of 0.1-1s in 0.1s increments were analyzed within the range of -3 to 3s relative to the movement onset at 0. Decoding was performed using regularized linear support vector machine (LSVM) and regularized linear discriminant analysis (RLDA), and performance was evaluated in terms of accuracy. Larger window sizes consistently outperformed smaller ones, with peak accuracy occurring between 0-1s relative to the movement onset. LSVM outperformed RLDA across all 10 window sizes, with peak accuracy ranging from 86.98% with 0.1s window to 90.94% with 1s window. Using LSVM, the earliest peak accuracy (90.03%) was achieved with a 0.7s window starting at 0.35s after the movement onset. Notably, a 0.5s window provided a peak accuracy of 89.5% which is not statistically significant compared to the 0.7s window (p = 0.05). The start point of the 0.5s window was 0.5s after the onset. With LSVM, considering the trade-off between decoding accuracy and latency, the 0.5s window offers the optimal choice for decoding movement intention in freewill EEG.Clinical relevance- Identifying the optimal window size to decode movement-related intentions in freewill EEG can help improve strategies to develop real-time BMIs for individuals with motor impairments.},
}
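As a concrete illustration of the windowing described in the abstract above, the sketch below extracts a decoding window relative to movement onset from a 6 s epoch (-3 to +3 s). This is a minimal sketch, not the authors' code; the sampling rate and the `extract_window` helper are illustrative assumptions.

```python
def extract_window(epoch, fs, start_s, dur_s, onset_s=3.0):
    """Return the samples of `epoch` covering [start_s, start_s + dur_s)
    relative to movement onset; the epoch is assumed to start at -onset_s."""
    i0 = int(round((onset_s + start_s) * fs))
    i1 = i0 + int(round(dur_s * fs))
    if i0 < 0 or i1 > len(epoch):
        raise ValueError("window falls outside the epoch")
    return epoch[i0:i1]

fs = 100                     # Hz; illustrative, not the study's rate
epoch = list(range(6 * fs))  # one 6 s epoch spanning -3..+3 s around onset

# The paper's recommended trade-off: a 0.5 s window starting 0.5 s after onset.
w = extract_window(epoch, fs, start_s=0.5, dur_s=0.5)
print(len(w), w[0])  # 50 samples, starting at sample index 350
```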
RevDate: 2025-12-03
CmpDate: 2025-12-03
Classifying Awareness with a Lightweight CNN in an Olfactory Oddball Passive BCI.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Olfaction, or the sense of smell, presents a promising avenue for enhancing brain-computer interface (BCI) usability and enabling passive cognitive state monitoring. In reactive BCI paradigms, odor cues can be associated with specific commands, facilitating more intuitive interaction. Furthermore, passive BCI applications can leverage olfactory stimuli to monitor cognitive processes. Despite this potential, challenges remain, notably the requirement for precise odor delivery mechanisms and robust algorithms capable of detecting and interpreting associated brain activity. This work proposes a novel approach, combining electroencephalography (EEG) and electrobulbogram (EBG) within an olfactory modality oddball paradigm, for predicting user awareness levels. A pilot study is presented, demonstrating improved user awareness classification performance with a newly developed multiclass, lightweight convolutional neural network (CNN) for this passive olfactory BCI modality, surpassing previously reported results. Clinical relevance - This research demonstrates the feasibility of inferring user awareness levels from concurrently acquired electroencephalographic (EEG) and electrobulbogram (EBG) neurophysiological data.
Additional Links: PMID-41337275
@article {pmid41337275,
year = {2025},
author = {Rutkowski, TM and Kasprzak, H and Otake-Matsuura, M and Komendzinski, T},
title = {Classifying Awareness with a Lightweight CNN in an Olfactory Oddball Passive BCI.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11253457},
pmid = {41337275},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; Electroencephalography ; *Neural Networks, Computer ; *Awareness/physiology ; *Smell/physiology ; Algorithms ; Male ; Signal Processing, Computer-Assisted ; Adult ; Female ; },
abstract = {Olfaction, or the sense of smell, presents a promising avenue for enhancing brain-computer interface (BCI) usability and enabling passive cognitive state monitoring. In reactive BCI paradigms, odor cues can be associated with specific commands, facilitating more intuitive interaction. Furthermore, passive BCI applications can leverage olfactory stimuli to monitor cognitive processes. Despite this potential, challenges remain, notably the requirement for precise odor delivery mechanisms and robust algorithms capable of detecting and interpreting associated brain activity. This work proposes a novel approach, combining electroencephalography (EEG) and electrobulbogram (EBG) within an olfactory modality oddball paradigm, for predicting user awareness levels. A pilot study is presented, demonstrating improved user awareness classification performance with a newly developed multiclass, lightweight convolutional neural network (CNN) for this passive olfactory BCI modality, surpassing previously reported results. Clinical relevance - This research demonstrates the feasibility of inferring user awareness levels from concurrently acquired electroencephalographic (EEG) and electrobulbogram (EBG) neurophysiological data.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Proof-of-Concept Spike Based Neuromorphic Brain-Computer Interface.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Closed-loop brain-computer interfaces (BCIs) hold promise for restoring function after neurological damage by dynamically processing neural signals and delivering targeted brain stimulation. To achieve clinically meaningful outcomes, such systems must operate with high spatiotemporal precision. This work aims to demonstrate a proof-of-concept neuromorphic BCI that processes neural spike events in near real time, without requiring any preprocessing beyond signal filtering and spike detection. Methods - We developed a system that acquires neural signals and streams spike events into a spiking neural network (SNN) running on SpiNNaker neuromorphic hardware. We evaluated the system's performance using both in vivo recordings from mouse visual cortex and simulated neural waveforms. We measured the roundtrip latency, defined as the time from spike detection to an output spike generated by the SNN. Results - Under baseline conditions with no hidden SNN layers, mean roundtrip latency was 4.69 ms (±1.70 ms). Adding hidden layers increased latency by approximately 3.65 ms per layer, reflecting the computational overhead of deeper networks. The system successfully detected and processed spikes in near real time, demonstrating that neuromorphic hardware can manage spike-based input at speeds suitable for closed-loop intervention. Discussion - These findings indicate that neuromorphic SNNs can rapidly process neural signals, providing a foundation for closed-loop BCIs capable of bypassing damaged neural pathways. Future efforts will involve implementing stimulation protocols and functional SNNs. Such developments may ultimately facilitate more effective, flexible, and power-efficient neuroprosthetic devices.
Additional Links: PMID-41337269
@article {pmid41337269,
year = {2025},
author = {Dijkema, EB and Pennartz, CMA and Olcese, U},
title = {A Proof-of-Concept Spike Based Neuromorphic Brain-Computer Interface.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253485},
pmid = {41337269},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Animals ; Mice ; Signal Processing, Computer-Assisted ; *Neural Networks, Computer ; *Action Potentials/physiology ; Visual Cortex/physiology ; },
abstract = {Closed-loop brain-computer interfaces (BCIs) hold promise for restoring function after neurological damage by dynamically processing neural signals and delivering targeted brain stimulation. To achieve clinically meaningful outcomes, such systems must operate with high spatiotemporal precision. This work aims to demonstrate a proof-of-concept neuromorphic BCI that processes neural spike events in near real time, without requiring any preprocessing beyond signal filtering and spike detection. Methods - We developed a system that acquires neural signals and streams spike events into a spiking neural network (SNN) running on SpiNNaker neuromorphic hardware. We evaluated the system's performance using both in vivo recordings from mouse visual cortex and simulated neural waveforms. We measured the roundtrip latency, defined as the time from spike detection to an output spike generated by the SNN. Results - Under baseline conditions with no hidden SNN layers, mean roundtrip latency was 4.69 ms (±1.70 ms). Adding hidden layers increased latency by approximately 3.65 ms per layer, reflecting the computational overhead of deeper networks. The system successfully detected and processed spikes in near real time, demonstrating that neuromorphic hardware can manage spike-based input at speeds suitable for closed-loop intervention. Discussion - These findings indicate that neuromorphic SNNs can rapidly process neural signals, providing a foundation for closed-loop BCIs capable of bypassing damaged neural pathways. Future efforts will involve implementing stimulation protocols and functional SNNs. Such developments may ultimately facilitate more effective, flexible, and power-efficient neuroprosthetic devices.},
}
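The reported figures (4.69 ms baseline, roughly 3.65 ms per hidden layer) imply a simple linear latency model. The toy sketch below is purely illustrative arithmetic derived from those quoted numbers, not the authors' measurement harness:

```python
BASE_MS = 4.69       # mean roundtrip latency with no hidden layers (reported)
PER_LAYER_MS = 3.65  # approximate added latency per hidden SNN layer (reported)

def expected_latency_ms(n_hidden_layers):
    """Linear latency model implied by the figures quoted above."""
    return BASE_MS + PER_LAYER_MS * n_hidden_layers

for n in range(3):
    print(n, round(expected_latency_ms(n), 2))  # 0 4.69 / 1 8.34 / 2 11.99
```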
RevDate: 2025-12-03
CmpDate: 2025-12-03
Shielded Relay Coil design to Optimize WPT and SAR for Distributed Wireless Brain Implants.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
This paper presents a shielded relay antenna to simultaneously enhance Wireless Power Transfer (WPT) and reduce Specific Absorption Rate (SAR) for a network of distributed brain microimplants. Through strategic placement of conductive features, eddy currents are created to oppose high magnetic fields. This design advantageously equalizes and increases the field strength over the cortical surface area. This work has the potential to address the WPT/SAR co-optimization challenges for biomedical implants in general. When applied to the target 2 × 2 cm² wireless brain-machine interface (BMI) system operating at 915 MHz, HFSS simulations show it provides 1.2 dB WPT enhancement and a 29% SAR reduction.
Additional Links: PMID-41337259
@article {pmid41337259,
year = {2025},
author = {Daling, MH and Alonzo, J and Lee, J and Lee, AH and Durfee, D and Larson, L and Nurmikko, A and Leung, VW},
title = {Shielded Relay Coil design to Optimize WPT and SAR for Distributed Wireless Brain Implants.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11253961},
pmid = {41337259},
issn = {2694-0604},
mesh = {*Wireless Technology/instrumentation ; Humans ; *Brain-Computer Interfaces ; Equipment Design ; *Brain/physiology ; *Prostheses and Implants ; },
abstract = {This paper presents a shielded relay antenna to simultaneously enhance Wireless Power Transfer (WPT) and reduce Specific Absorption Rate (SAR) for a network of distributed brain microimplants. Through strategic placement of conductive features, eddy currents are created to oppose high magnetic fields. This design advantageously equalizes and increases the field strength over the cortical surface area. This work has the potential to address the WPT/SAR co-optimization challenges for biomedical implants in general. When applied to the target 2 × 2 cm² wireless brain-machine interface (BMI) system operating at 915 MHz, HFSS simulations show it provides 1.2 dB WPT enhancement and a 29% SAR reduction.},
}
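For readers less accustomed to decibel figures, the reported 1.2 dB WPT enhancement corresponds to roughly a 32% increase in delivered power. A quick conversion sketch (standard dB arithmetic, not code from the paper):

```python
def db_to_power_ratio(db):
    """Convert a decibel figure to a linear power ratio (10^(dB/10))."""
    return 10 ** (db / 10)

gain = db_to_power_ratio(1.2)
print(round((gain - 1) * 100, 1))  # ~31.8 (% more delivered power)
```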
RevDate: 2025-12-03
CmpDate: 2025-12-03
Wireless Communication Protocol for backscatter-based Neural Implants.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
This work presents a novel protocol for bidirectional wireless communication with neural implants that contributes to the growing field of closed-loop brain-computer interfaces (BCIs). BCIs are an emerging technology for studying and treating neurological disorders, such as spinal cord injuries. Furthermore, BCIs rely heavily on neural implants as a crucial element, because these implants hold the potential to restore the functionality of paralyzed limbs. The proposed protocol presents an open configuration to enable neural implants to communicate wirelessly with an external reader. Because computation to extract movement intention is performed externally, computing power is nearly unlimited and the energy consumption of the implant is reduced drastically. To validate the proposed protocol, the downlink (reader to implant) was implemented on a software-defined radio running the GNU-Radio toolkit with custom communication blocks. The uplink (implant to reader) was implemented on an FPGA. Finally, to validate the movement intention decoding, pre-recorded neural data was backscattered from an FPGA-based implant and the decoding was executed successfully.
Additional Links: PMID-41337212
@article {pmid41337212,
year = {2025},
author = {Arjona, L and Rosenthal, J and Azkarate, M},
title = {Wireless Communication Protocol for backscatter-based Neural Implants.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253936},
pmid = {41337212},
issn = {2694-0604},
mesh = {*Wireless Technology/instrumentation ; Humans ; *Brain-Computer Interfaces ; *Prostheses and Implants ; },
abstract = {This work presents a novel protocol for bidirectional wireless communication with neural implants that contributes to the growing field of closed-loop brain-computer interfaces (BCIs). BCIs are an emerging technology for studying and treating neurological disorders, such as spinal cord injuries. Furthermore, BCIs rely heavily on neural implants as a crucial element, because these implants hold the potential to restore the functionality of paralyzed limbs. The proposed protocol presents an open configuration to enable neural implants to communicate wirelessly with an external reader. Because computation to extract movement intention is performed externally, computing power is nearly unlimited and the energy consumption of the implant is reduced drastically. To validate the proposed protocol, the downlink (reader to implant) was implemented on a software-defined radio running the GNU-Radio toolkit with custom communication blocks. The uplink (implant to reader) was implemented on an FPGA. Finally, to validate the movement intention decoding, pre-recorded neural data was backscattered from an FPGA-based implant and the decoding was executed successfully.},
}
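To make the uplink idea tangible, the sketch below frames neural samples with an implant id, a length byte, and a checksum, the sort of structure a reader/implant link needs. The field layout here is invented for illustration and is not the protocol proposed in the paper:

```python
def make_frame(implant_id, samples):
    """Pack one uplink frame: [id, length, samples..., checksum].
    `samples` are byte-sized values (0-255); hypothetical layout."""
    payload = bytes([implant_id, len(samples)]) + bytes(samples)
    checksum = sum(payload) % 256
    return payload + bytes([checksum])

def parse_frame(frame):
    """Verify the checksum and recover (implant_id, samples)."""
    payload, checksum = frame[:-1], frame[-1]
    if sum(payload) % 256 != checksum:
        raise ValueError("corrupted frame")
    implant_id, n = payload[0], payload[1]
    return implant_id, list(payload[2:2 + n])

frame = make_frame(7, [12, 0, 255, 3])
print(parse_frame(frame))  # (7, [12, 0, 255, 3])
```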
RevDate: 2025-12-03
CmpDate: 2025-12-03
Modification of cortical activation pattern after long-term BCI training and its impact on decoding model performances.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
In brain-computer interfaces (BCIs), variability usually appears in brain signals from one session to another. This inter-session variability is of major importance for two reasons. On the one hand, it poses an issue for a model learned on a previous session, which does not always perform correctly on new sessions. On the other hand, it can also be a marker of long-term adaptation in the brain of patients, which may reflect learning or even rehabilitation. This study investigates the phenomenon of physiological drift in BCIs, focusing on the evolution of brain activity over sessions. In order to do so, we analyzed the spatial patterns of synchronization and desynchronization in a wide range of frequencies. A linear regression model was proposed to quantify drift and residual variability. In this article, we study the inter-session variability both physiologically and from the point of view of the decoder performance and compute the correlation between them to examine their coherence. This study provides valuable insights into the physiological drift and its impact on BCI performance, contributing to the development of more stable and reliable BCI systems for rehabilitation medicine. Clinical Relevance - The long-term modifications in activation patterns after BCI training studied in this article are additional evidence of the potential for rehabilitation using BCI.
Additional Links: PMID-41337189
@article {pmid41337189,
year = {2025},
author = {Bleuze, A and Martel, F and Aksenova, T and Struber, L},
title = {Modification of cortical activation pattern after long-term BCI training and its impact on decoding model performances.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253801},
pmid = {41337189},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Electroencephalography/methods ; *Models, Neurological ; Male ; *Cerebral Cortex/physiology ; },
abstract = {In brain-computer interfaces (BCIs), variability usually appears in brain signals from one session to another. This inter-session variability is of major importance for two reasons. On the one hand, it poses an issue for a model learned on a previous session, which does not always perform correctly on new sessions. On the other hand, it can also be a marker of long-term adaptation in the brain of patients, which may reflect learning or even rehabilitation. This study investigates the phenomenon of physiological drift in BCIs, focusing on the evolution of brain activity over sessions. In order to do so, we analyzed the spatial patterns of synchronization and desynchronization in a wide range of frequencies. A linear regression model was proposed to quantify drift and residual variability. In this article, we study the inter-session variability both physiologically and from the point of view of the decoder performance and compute the correlation between them to examine their coherence. This study provides valuable insights into the physiological drift and its impact on BCI performance, contributing to the development of more stable and reliable BCI systems for rehabilitation medicine. Clinical Relevance - The long-term modifications in activation patterns after BCI training studied in this article are additional evidence of the potential for rehabilitation using BCI.},
}
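The per-feature linear regression the abstract mentions can be sketched directly: fit activity against session index, read the slope as drift and the residual spread as residual variability. The data and function name below are illustrative, not the study's:

```python
import statistics

def linear_drift(values):
    """Fit value ~ a*session + b by least squares; return the slope
    (drift per session) and the residual standard deviation."""
    n = len(values)
    xs = list(range(n))
    xbar = statistics.fmean(xs)
    ybar = statistics.fmean(values)
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, values))
    a = sxy / sxx
    b = ybar - a * xbar
    residuals = [y - (a * x + b) for x, y in zip(xs, values)]
    return a, statistics.pstdev(residuals)

# e.g. a desynchronization feature drifting upward over 6 sessions (toy data)
slope, resid = linear_drift([0.10, 0.12, 0.15, 0.14, 0.18, 0.20])
print(round(slope, 4), round(resid, 4))
```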
RevDate: 2025-12-03
CmpDate: 2025-12-03
EIMNet: An EEG and iEEG-Fused Interactive Modality Network for Accurate Memory State Prediction during Working Memory Task.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Recent advancements in Brain-Computer Interface (BCI) research have increasingly highlighted the significance of multimodal integration for effectively extracting task-discriminative features. In the context of a working memory (WM) task, we introduce EIMNet, a cross-modality fusion model inspired by the phase-amplitude coupling phenomenon. By enabling interaction between electroencephalography (EEG) and intracranial electroencephalography (iEEG), EIMNet enhances the representation of task-related features, improving the prediction of memory-related effects. Our ablation experiments demonstrate that EIMNet enhances decoding performance, with factors such as interaction factor selection, frequency band splitting, and data augmentation playing vital roles. We demonstrate the effectiveness of EIMNet in improving decoding accuracy by integrating EEG and iEEG for a working memory task, with promising applications in memory- and attention-related cognitive research.
Additional Links: PMID-41337178
@article {pmid41337178,
year = {2025},
author = {Wang, M and Wang, J and Zhao, J and Yao, L and Wang, Y},
title = {EIMNet: An EEG and iEEG-Fused Interactive Modality Network for Accurate Memory State Prediction during Working Memory Task.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11253846},
pmid = {41337178},
issn = {2694-0604},
mesh = {Humans ; *Memory, Short-Term/physiology ; *Electroencephalography/methods ; *Brain-Computer Interfaces ; Algorithms ; Signal Processing, Computer-Assisted ; Male ; },
abstract = {Recent advancements in Brain-Computer Interface (BCI) research have increasingly highlighted the significance of multimodal integration for effectively extracting task-discriminative features. In the context of a working memory (WM) task, we introduce EIMNet, a cross-modality fusion model inspired by the phase-amplitude coupling phenomenon. By enabling interaction between electroencephalography (EEG) and intracranial electroencephalography (iEEG), EIMNet enhances the representation of task-related features, improving the prediction of memory-related effects. Our ablation experiments demonstrate that EIMNet enhances decoding performance, with factors such as interaction factor selection, frequency band splitting, and data augmentation playing vital roles. We demonstrate the effectiveness of EIMNet in improving decoding accuracy by integrating EEG and iEEG for a working memory task, with promising applications in memory- and attention-related cognitive research.},
}
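Since EIMNet is said to be inspired by phase-amplitude coupling (PAC), a toy PAC measure may help: amplitude-weight the unit phase vectors and take the mean-vector length. Real PAC analysis extracts phase and amplitude by band-pass filtering and a Hilbert transform; here both are constructed analytically, purely for illustration:

```python
import cmath
import math

def mean_vector_pac(phases, amplitudes):
    """Mean-vector length of amplitude-weighted phases (0 = no coupling)."""
    m = sum(a * cmath.exp(1j * p) for p, a in zip(phases, amplitudes))
    return abs(m) / sum(amplitudes)

n = 1000
phases = [2 * math.pi * k / n for k in range(n)]      # one slow-rhythm cycle
coupled = [1.0 + 0.5 * math.cos(p) for p in phases]   # amplitude follows phase
flat = [1.0] * n                                      # no coupling

print(mean_vector_pac(phases, coupled) > mean_vector_pac(phases, flat))  # True
```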
RevDate: 2025-12-03
CmpDate: 2025-12-03
Enhancing EEG-Based Emotion Classification by Refining the Spatial Precision of Brain Activity.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Advancements in neuroscience and deep learning have significantly enhanced bio-signal-based emotion recognition, a critical component in Brain-Machine Interface (BMI) applications for healthcare, human-computer interaction, and human-AI assistant communication. Former studies have proposed Manual Mapping electrode matrices, employing Convolutional Neural Networks (CNNs) to recognize spatial EEG activities. However, this Manual Mapping of EEG electrodes onto matrix grids limits spatial precision and introduces inefficiencies. This study proposes automated channel mapping methods of Orthographic Projection and Stereographic Projection to address these challenges, using Differential Entropy and Power Spectral Density with Linear Dynamical Systems as features. A 3-branch multiscale CNN was trained on an open-source dataset, employing a 5-fold cross-classification approach. Experimental results demonstrate that higher-resolution grids (16×16, 24×24) with automated projections significantly outperform Manual Mappings, achieving up to a 4.06% improvement in classification accuracy (p < 0.05). This result indicates that enhancing the spatial precision of EEG data improves emotion classification, establishing automated spatial mapping as an advancement in EEG-based emotion recognition. Clinical Relevance - Advancement in emotion classification accuracy can facilitate more reliable diagnostic tools and personalized therapeutic interventions for mental health disorders, such as depression and anxiety.
Additional Links: PMID-41337165
@article {pmid41337165,
year = {2025},
author = {Xu, Y and Otsuka, S and Nakagawa, S},
title = {Enhancing EEG-Based Emotion Classification by Refining the Spatial Precision of Brain Activity.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11253823},
pmid = {41337165},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Emotions/physiology/classification ; *Brain/physiology ; Signal Processing, Computer-Assisted ; Neural Networks, Computer ; Brain-Computer Interfaces ; Algorithms ; },
abstract = {Advancements in neuroscience and deep learning have significantly enhanced bio-signal-based emotion recognition, a critical component in Brain-Machine Interface (BMI) applications for healthcare, human-computer interaction, and human-AI assistant communication. Former studies have proposed Manual Mapping electrode matrices, employing Convolutional Neural Networks (CNNs) to recognize spatial EEG activities. However, this Manual Mapping of EEG electrodes onto matrix grids limits spatial precision and introduces inefficiencies. This study proposes automated channel mapping methods of Orthographic Projection and Stereographic Projection to address these challenges, using Differential Entropy and Power Spectral Density with Linear Dynamical Systems as features. A 3-branch multiscale CNN was trained on an open-source dataset, employing a 5-fold cross-classification approach. Experimental results demonstrate that higher-resolution grids (16×16, 24×24) with automated projections significantly outperform Manual Mappings, achieving up to a 4.06% improvement in classification accuracy (p < 0.05). This result indicates that enhancing the spatial precision of EEG data improves emotion classification, establishing automated spatial mapping as an advancement in EEG-based emotion recognition. Clinical Relevance - Advancement in emotion classification accuracy can facilitate more reliable diagnostic tools and personalized therapeutic interventions for mental health disorders, such as depression and anxiety.},
}
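One way to realize the automated channel mapping described above is to project 3-D electrode positions stereographically and quantize the result onto a grid. The coordinates, grid size, and helper names below are illustrative assumptions, not the paper's implementation:

```python
def stereographic(x, y, z):
    """Project a point on the unit sphere from the south pole onto the
    z = 0 plane (undefined at z = -1)."""
    return x / (1 + z), y / (1 + z)

def to_grid(u, v, size=16, extent=1.0):
    """Quantize projected coordinates in [-extent, extent] to grid cells."""
    col = min(size - 1, max(0, int((u + extent) / (2 * extent) * size)))
    row = min(size - 1, max(0, int((v + extent) / (2 * extent) * size)))
    return row, col

# Cz sits at the vertex (0, 0, 1): it projects to the centre of the grid.
u, v = stereographic(0.0, 0.0, 1.0)
print(to_grid(u, v))  # (8, 8), the centre cell of a 16x16 grid
```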
RevDate: 2025-12-03
CmpDate: 2025-12-03
Adaptively Pruned Spiking Neural Networks for Energy-Efficient Intracortical Neural Decoding.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Intracortical brain-machine interfaces demand low-latency, energy-efficient solutions for neural decoding. Spiking Neural Networks (SNNs) deployed on neuromorphic hardware have demonstrated remarkable efficiency in neural decoding by leveraging sparse binary activations and efficient spatiotemporal processing. However, reducing the computational cost of SNNs remains a critical challenge for developing ultra-efficient intracortical neural implants. In this work, we introduce a novel adaptive pruning algorithm specifically designed for SNNs with high activation sparsity, targeting intracortical neural decoding. Our method dynamically adjusts pruning decisions and employs a rollback mechanism to selectively eliminate redundant synaptic connections without compromising decoding accuracy. Experimental evaluation on the NeuroBench Non-Human Primate (NHP) Motor Prediction benchmark shows that our pruned network achieves performance comparable to dense networks, with a maximum tenfold improvement in efficiency. Moreover, hardware simulation on the neuromorphic processor reveals that the pruned network operates at sub-μW power levels, underscoring its potential for energy-constrained neural implants. These results underscore the promise of our approach for advancing energy-efficient intracortical brain-machine interfaces with low-overhead on-device intelligence.
Additional Links: PMID-41337115
Publisher:
PubMed:
Citation:
show bibtex listing
hide bibtex listing
@article {pmid41337115,
year = {2025},
author = {Rivelli, F and Popov, M and Kouzinopoulos, CS and Tang, G},
title = {Adaptively Pruned Spiking Neural Networks for Energy-Efficient Intracortical Neural Decoding.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254088},
pmid = {41337115},
issn = {2694-0604},
mesh = {Animals ; *Brain-Computer Interfaces ; Algorithms ; *Neural Networks, Computer ; Humans ; *Neurons/physiology ; *Action Potentials/physiology ; *Nerve Net/physiology ; },
abstract = {Intracortical brain-machine interfaces demand low-latency, energy-efficient solutions for neural decoding. Spiking Neural Networks (SNNs) deployed on neuromorphic hardware have demonstrated remarkable efficiency in neural decoding by leveraging sparse binary activations and efficient spatiotemporal processing. However, reducing the computational cost of SNNs remains a critical challenge for developing ultra-efficient intracortical neural implants. In this work, we introduce a novel adaptive pruning algorithm specifically designed for SNNs with high activation sparsity, targeting intracortical neural decoding. Our method dynamically adjusts pruning decisions and employs a rollback mechanism to selectively eliminate redundant synaptic connections without compromising decoding accuracy. Experimental evaluation on the NeuroBench Non-Human Primate (NHP) Motor Prediction benchmark shows that our pruned network achieves performance comparable to dense networks, with a maximum tenfold improvement in efficiency. Moreover, hardware simulation on the neuromorphic processor reveals that the pruned network operates at sub-μW power levels, underscoring its potential for energy-constrained neural implants. These results underscore the promise of our approach for advancing energy-efficient intracortical brain-machine interfaces with low-overhead on-device intelligence.},
}
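The prune-then-check loop with rollback can be sketched in a few lines: remove the smallest-magnitude synapses in growing fractions and roll back once accuracy drops past a tolerance. The toy accuracy proxy below stands in for a real evaluation and is not the paper's algorithm:

```python
def prune_with_rollback(weights, evaluate, tolerance=0.01, step=0.1):
    """Zero out the smallest-magnitude weights in increasing fractions,
    rolling back to the last configuration whose accuracy stays within
    `tolerance` of the dense baseline."""
    baseline = evaluate(weights)
    best = list(weights)
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    frac = step
    while frac <= 1.0:
        pruned = list(weights)
        for i in order[: int(len(weights) * frac)]:
            pruned[i] = 0.0
        if evaluate(pruned) < baseline - tolerance:
            break          # too much damage: keep the last good `best`
        best = pruned
        frac += step
    return best

# Toy evaluation: accuracy proxy = fraction of total weight magnitude retained.
w = [0.9, -0.05, 0.4, 0.01, -0.8, 0.02]
acc = lambda ws: sum(abs(x) for x in ws) / sum(abs(x) for x in w)
sparse = prune_with_rollback(w, acc, tolerance=0.05)
print(sparse)  # [0.9, 0.0, 0.4, 0.0, -0.8, 0.0]
```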
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Multi-Band Self-Attention Network for Motor Imagery Classification.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Brain-computer interface (BCI) systems create a novel communication method between humans and machines by translating human thoughts into actionable commands to control external devices. Motor imagery (MI) electroencephalogram (EEG) signals have significant applicability in various medical and non-medical industries, including stroke rehabilitation, wheelchair control, and drone operation. However, the practical application of EEG remains limited by the decoding performance and generalization ability of MI signals. This study introduces a multi-branch self-attention network for motor imagery (MI) signal classification. Each branch independently processes EEG signals decomposed into distinct frequency bands through convolutional neural networks (CNNs) and multi-head self-attention (MHA) mechanisms, enabling the extraction of both fundamental and discriminative spatial-temporal features. To further capture dynamic temporal dependencies, long short-term memory (LSTM) networks are integrated. We systematically evaluate three signal decompositions - ensemble empirical mode decomposition (EEMD), wavelet packet decomposition (WPD), and brain rhythm-based decomposition - to optimize feature representation. Extensive experiments on the BCI Competition IV 2a dataset demonstrate state-of-the-art performance, with subject-dependent and subject-independent accuracies of 84.04% and 71.67%, respectively. Comparative analyses against benchmark models (EEGNet, EEGTCNet, ShallowConvNet, etc.) validate the superiority of our approach in classification accuracy and generalization capability. Clinical relevance - This study investigates the methods for decoding motor imagery EEG signals and establishes the positive role of each module in classification. The improvement in accuracy can lead to better outcomes in medical applications such as controlling prosthetics, wheelchairs, and stroke rehabilitation.
Additional Links: PMID-41337108
Citation:
@article {pmid41337108,
year = {2025},
author = {Song, Q and Kang, G},
title = {A Multi-Band Self-Attention Network for Motor Imagery Classification.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254113},
pmid = {41337108},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Brain-Computer Interfaces ; *Neural Networks, Computer ; Signal Processing, Computer-Assisted ; *Imagination/physiology ; Algorithms ; },
abstract = {Brain-computer interface (BCI) systems create a novel communication method between humans and machines by translating human thoughts into actionable commands to control external devices. Motor imagery (MI) electroencephalogram (EEG) signals have significant applicability in various medical and non-medical industries, including stroke rehabilitation, wheelchair control, and drone operation. However, the practical application of EEG remains limited by the decoding performance and generalization ability of MI signals. This study introduces a multi-branch self-attention network for MI signal classification. Each branch independently processes EEG signals decomposed into distinct frequency bands through convolutional neural networks (CNNs) and multi-head self-attention (MHA) mechanisms, enabling the extraction of both fundamental and discriminative spatial-temporal features. To further capture dynamic temporal dependencies, long short-term memory (LSTM) networks are integrated. We systematically evaluate three signal decompositions to optimize the feature representation: ensemble empirical mode decomposition (EEMD), wavelet packet decomposition (WPD), and brain rhythm-based decomposition. Extensive experiments on the BCI Competition IV 2a dataset demonstrate state-of-the-art performance, with subject-dependent and subject-independent accuracies of 84.04% and 71.67%, respectively. Comparative analyses against benchmark models (EEGNet, EEGTCNet, ShallowConvNet, etc.) validate the superiority of our approach in classification accuracy and generalization capability. Clinical relevance: This study investigates methods for decoding motor imagery EEG signals and establishes the positive role of each module in classification. The improvement in accuracy can lead to better outcomes in medical applications such as controlling prosthetics, wheelchairs, and stroke rehabilitation.},
}
MeSH Terms:
Humans
*Electroencephalography/methods
*Brain-Computer Interfaces
*Neural Networks, Computer
Signal Processing, Computer-Assisted
*Imagination/physiology
Algorithms
RevDate: 2025-12-03
CmpDate: 2025-12-03
Motor-Sensory Coupled Learning for Motor Imagery Decoding.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Brain-Computer Interface (BCI) technology has significant potential for advancing stroke rehabilitation by promoting motor recovery through the decoding of motor intentions from electroencephalogram (EEG) signals. However, the practical application of BCI in rehabilitation faces several challenges, particularly in decoding accuracy. This limitation often stems from an overemphasis on motor imagery signals, while sensory components, which are crucial for effective motor function recovery, are frequently overlooked. In this paper, we propose a novel framework to enhance BCI performance by integrating both sensory and motor modalities through a motor-sensory coupled learning approach. The model leverages EEG data induced by both motor imagery (MI) and tactile sensation (TS), using adversarial training to capture the coupled features of these two domains. By incorporating reliable sensory signals, the proposed approach aims to improve the robustness and accuracy of motor imagery decoding, offering particular benefits for stroke patients with impaired motor rhythms. Experimental results from BCI-naive subjects show a significant improvement in classification accuracy compared to traditional motor imagery-only models, suggesting that this approach holds promise as a potential solution for stroke rehabilitation. These findings indicate that integrating sensory signals into BCI systems could lead to more effective rehabilitation strategies, paving the way for the development of more robust and adaptive BCI technologies in the future.
Additional Links: PMID-41337106
Citation:
@article {pmid41337106,
year = {2025},
author = {Zhong, Y and Wen, H and Assam, M and Yao, L and Wang, Y},
title = {Motor-Sensory Coupled Learning for Motor Imagery Decoding.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11254055},
pmid = {41337106},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; Electroencephalography ; *Imagination/physiology ; Stroke Rehabilitation ; Signal Processing, Computer-Assisted ; *Learning ; Male ; },
abstract = {Brain-Computer Interface (BCI) technology has significant potential for advancing stroke rehabilitation by promoting motor recovery through the decoding of motor intentions from electroencephalogram (EEG) signals. However, the practical application of BCI in rehabilitation faces several challenges, particularly in decoding accuracy. This limitation often stems from an overemphasis on motor imagery signals, while sensory components, which are crucial for effective motor function recovery, are frequently overlooked. In this paper, we propose a novel framework to enhance BCI performance by integrating both sensory and motor modalities through a motor-sensory coupled learning approach. The model leverages EEG data induced by both motor imagery (MI) and tactile sensation (TS), using adversarial training to capture the coupled features of these two domains. By incorporating reliable sensory signals, the proposed approach aims to improve the robustness and accuracy of motor imagery decoding, offering particular benefits for stroke patients with impaired motor rhythms. Experimental results from BCI-naive subjects show a significant improvement in classification accuracy compared to traditional motor imagery-only models, suggesting that this approach holds promise as a potential solution for stroke rehabilitation. These findings indicate that integrating sensory signals into BCI systems could lead to more effective rehabilitation strategies, paving the way for the development of more robust and adaptive BCI technologies in the future.},
}
MeSH Terms:
Humans
*Brain-Computer Interfaces
Electroencephalography
*Imagination/physiology
Stroke Rehabilitation
Signal Processing, Computer-Assisted
*Learning
Male
RevDate: 2025-12-03
CmpDate: 2025-12-03
Inhibitory Effects of Individualized Transcranial Alternating Current Stimulation on Motor Imagery and Interhemispheric Symmetry: Implications for Stroke Rehabilitation.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Transcranial alternating current stimulation (tACS) holds potential in stroke rehabilitation, but its effects when delivered at an individual's peak motor imagery (MI) frequency remain unclear. This study investigated the impact of tACS delivered at subject-specific peak MI frequencies on MI performance, quantified as classification accuracy, and on interhemispheric symmetry, measured via the brain symmetry index (BSI). Using a brain-computer-brain closed-loop system, each subject's peak MI performance frequency was first identified during the pre-stimulation phase, after which tACS was delivered at this frequency. Our findings show that active individualized tACS decreased MI performance and increased the BSI, suggesting inhibitory effects on motor-related neural processes. Clinical relevance: The observed inhibitory effects of tACS highlight its potential for targeted neuromodulation in stroke recovery. Future research should explore how these inhibitory effects can be harnessed therapeutically and investigate stimulation parameters that could optimize outcomes for functional recovery. The demonstrated ability of tACS to modulate brain activity, evidenced by the increased BSI, underscores its promise as a neuromodulatory tool in clinical applications.
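A brain symmetry index like the one used above can be illustrated as the normalised spectral-power asymmetry between homologous left/right channels. The `bsi` helper, the channel pairing, and the frequency range below are assumptions for illustration, not the study's exact formulation.

```python
import numpy as np

def bsi(left, right, fs, fmin=1.0, fmax=25.0):
    """Mean normalised spectral power asymmetry between two channels, in [0, 1]."""
    freqs = np.fft.rfftfreq(left.shape[-1], d=1.0 / fs)
    keep = (freqs >= fmin) & (freqs <= fmax)
    pl = np.abs(np.fft.rfft(left))[..., keep] ** 2
    pr = np.abs(np.fft.rfft(right))[..., keep] ** 2
    return float(np.mean(np.abs((pr - pl) / (pr + pl + 1e-12))))

fs = 256
t = np.arange(fs) / fs
alpha = np.sin(2 * np.pi * 10 * t)
sym = bsi(alpha, alpha, fs)          # identical hemispheres -> index near 0
asym = bsi(alpha, 0.2 * alpha, fs)   # suppressed hemisphere -> larger index
```

A higher index indicates greater interhemispheric asymmetry, which is why an increased BSI can signal altered motor-related activity.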
Additional Links: PMID-41337085
Citation:
@article {pmid41337085,
year = {2025},
author = {Ong, JX and Premchand, B and Lim, RY and Chew, E and Jiang, M and Tang, N and Ang, KK},
title = {Inhibitory Effects of Individualized Transcranial Alternating Current Stimulation on Motor Imagery and Interhemispheric Symmetry: Implications for Stroke Rehabilitation.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11254631},
pmid = {41337085},
issn = {2694-0604},
mesh = {Humans ; *Stroke Rehabilitation ; *Transcranial Direct Current Stimulation/methods ; Male ; Female ; Stroke/physiopathology ; *Imagination/physiology ; Adult ; Brain-Computer Interfaces ; },
abstract = {Transcranial alternating current stimulation (tACS) holds potential in stroke rehabilitation, but its effects when delivered at an individual's peak motor imagery (MI) frequency remain unclear. This study investigated the impact of tACS delivered at subject-specific peak MI frequencies on MI performance, quantified as classification accuracy, and on interhemispheric symmetry, measured via the brain symmetry index (BSI). Using a brain-computer-brain closed-loop system, each subject's peak MI performance frequency was first identified during the pre-stimulation phase, after which tACS was delivered at this frequency. Our findings show that active individualized tACS decreased MI performance and increased the BSI, suggesting inhibitory effects on motor-related neural processes. Clinical relevance: The observed inhibitory effects of tACS highlight its potential for targeted neuromodulation in stroke recovery. Future research should explore how these inhibitory effects can be harnessed therapeutically and investigate stimulation parameters that could optimize outcomes for functional recovery. The demonstrated ability of tACS to modulate brain activity, evidenced by the increased BSI, underscores its promise as a neuromodulatory tool in clinical applications.},
}
MeSH Terms:
Humans
*Stroke Rehabilitation
*Transcranial Direct Current Stimulation/methods
Male
Female
Stroke/physiopathology
*Imagination/physiology
Adult
Brain-Computer Interfaces
RevDate: 2025-12-03
CmpDate: 2025-12-03
Decoding of Individual Fingers Attempted Movement from Epidural ECoG in a Patient with Tetraplegia.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Brain-computer interfaces (BCIs) enable direct communication between the brain and external devices. This technology holds significant potential for restoring motor function in individuals with severe neurological impairments. Among these functions, restoring the fine hand motor control that allows grasping and object manipulation is a priority for improving patients' quality of life. Decoding finger movements is crucial for the precise control of hand neuroprosthetics. In this article, we analyzed the neural activity of a tetraplegic patient implanted with two WIMAGINE ECoG recording devices over the sensorimotor cortex of both hemispheres. ECoG was recorded over three sessions while the patient attempted to move individual fingers of the right hand. The attempted finger movements were decoded using a hidden Markov model integrating a Recursive Sample Weighted N-Way Partial Least Squares algorithm that addresses class imbalance. In the offline study, we obtained a balanced accuracy of 0.6603 ± 0.0087 on average for decoding the activation of five individual fingers. Our results show that decoding individual attempted finger movements is possible from ECoG, paving the way for fine movement restoration using BCIs. Clinical relevance: Efficient decoding of attempted individual finger movements using chronically implanted ECoG recording devices in a tetraplegic patient suggests the feasibility of a hand neuroprosthesis aimed at fine hand motor restoration in impaired individuals.
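Balanced accuracy, the metric reported above, is the mean of per-class recalls, which keeps the majority class from inflating the score when classes are imbalanced. A generic sketch, not the authors' implementation:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    classes = np.unique(y_true)
    return float(np.mean([np.mean(y_pred[y_true == c] == c) for c in classes]))

y_true = np.array([0, 0, 0, 0, 1, 1])  # imbalanced: four of class 0, two of class 1
y_pred = np.array([0, 0, 0, 0, 1, 0])
# plain accuracy = 5/6, balanced accuracy = (4/4 + 1/2) / 2 = 0.75
```

With five finger classes, chance-level balanced accuracy is 0.2, which puts the reported 0.66 in context.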
Additional Links: PMID-41337074
Citation:
@article {pmid41337074,
year = {2025},
author = {Carvallo, A and Struber, L and Costecalde, T and Souriau, R and Charvet, G and Aksenova, T},
title = {Decoding of Individual Fingers Attempted Movement from Epidural ECoG in a Patient with Tetraplegia.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254592},
pmid = {41337074},
issn = {2694-0604},
mesh = {Humans ; *Quadriplegia/physiopathology ; *Brain-Computer Interfaces ; *Fingers/physiopathology/physiology ; Movement/physiology ; Algorithms ; *Electrocorticography/methods ; Male ; },
abstract = {Brain-computer interfaces (BCIs) enable direct communication between the brain and external devices. This technology holds significant potential for restoring motor function in individuals with severe neurological impairments. Among these functions, restoring the fine hand motor control that allows grasping and object manipulation is a priority for improving patients' quality of life. Decoding finger movements is crucial for the precise control of hand neuroprosthetics. In this article, we analyzed the neural activity of a tetraplegic patient implanted with two WIMAGINE ECoG recording devices over the sensorimotor cortex of both hemispheres. ECoG was recorded over three sessions while the patient attempted to move individual fingers of the right hand. The attempted finger movements were decoded using a hidden Markov model integrating a Recursive Sample Weighted N-Way Partial Least Squares algorithm that addresses class imbalance. In the offline study, we obtained a balanced accuracy of 0.6603 ± 0.0087 on average for decoding the activation of five individual fingers. Our results show that decoding individual attempted finger movements is possible from ECoG, paving the way for fine movement restoration using BCIs. Clinical relevance: Efficient decoding of attempted individual finger movements using chronically implanted ECoG recording devices in a tetraplegic patient suggests the feasibility of a hand neuroprosthesis aimed at fine hand motor restoration in impaired individuals.},
}
MeSH Terms:
Humans
*Quadriplegia/physiopathology
*Brain-Computer Interfaces
*Fingers/physiopathology/physiology
Movement/physiology
Algorithms
*Electrocorticography/methods
Male
RevDate: 2025-12-03
CmpDate: 2025-12-03
Identifying the Nature of Grip Force Signals in EEG & fNIRS with Multi-Modal Graph Fusion Network.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Brain-computer interfaces can assist motor rehabilitation for people with severe paralysis by directly decoding their brain signals into movement intentions and executing them with external devices, bypassing the impaired neural pathways. Restoring natural and smooth daily movements is crucial, and continuous force control is one of the most important kinaesthetic functions. However, the complexity of continuous force decoding and the scarcity of relevant public datasets greatly challenge this field. How the brain coordinates motor commands and sensory feedback during force control behaviour also remains an open question. This work investigated these questions through a novel experimental setup that isolates motor intention from sensory feedback and combines both components flexibly for hand grip. We applied functional electrical stimulation to induce passive gripping and collected grip force together with multi-modal brain signals. Significant differences in neural patterns were found in the EEG time-frequency representation by comparing brain responses across task conditions, including voluntary movement, motor imagery, and passive perception. Additionally, we present a multi-modal graph fusion model combining EEG and fNIRS for continuous bimanual grip force decoding. These contributions benefit the development of neural interfaces for rehabilitation and assistive devices that involve force manipulation or operate in isometric schemes.
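The EEG time-frequency comparison described above typically rests on a short-time Fourier transform. A minimal sketch follows; the window length, hop size, and `stft_power` helper are chosen for illustration and are not the paper's analysis pipeline.

```python
import numpy as np

def stft_power(signal, fs, win=64, hop=32):
    """Hann-windowed short-time power spectra: (frames, freq bins)."""
    frames = [np.abs(np.fft.rfft(signal[s:s + win] * np.hanning(win))) ** 2
              for s in range(0, len(signal) - win + 1, hop)]
    return np.array(frames), np.fft.rfftfreq(win, d=1.0 / fs)

fs = 128
t = np.arange(fs) / fs
power, freqs = stft_power(np.sin(2 * np.pi * 16 * t), fs)
# Power concentrates near 16 Hz in every window.
```

Comparing such per-condition power maps (voluntary movement vs. motor imagery vs. passive perception) is what reveals the neural pattern differences the abstract reports.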
Additional Links: PMID-41337062
Citation:
@article {pmid41337062,
year = {2025},
author = {Zhu, Z and Han, J and Zhang, Z and Wannawas, N and Faisal, AA},
title = {Identifying the Nature of Grip Force Signals in EEG & fNIRS with Multi-Modal Graph Fusion Network.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254624},
pmid = {41337062},
issn = {2694-0604},
mesh = {Humans ; *Hand Strength/physiology ; *Electroencephalography/methods ; Brain-Computer Interfaces ; Spectroscopy, Near-Infrared/methods ; Male ; Adult ; Signal Processing, Computer-Assisted ; Female ; },
abstract = {Brain-computer interfaces can assist motor rehabilitation for people with severe paralysis by directly decoding their brain signals into movement intentions and executing them with external devices, bypassing the impaired neural pathways. Restoring natural and smooth daily movements is crucial, and continuous force control is one of the most important kinaesthetic functions. However, the complexity of continuous force decoding and the scarcity of relevant public datasets greatly challenge this field. How the brain coordinates motor commands and sensory feedback during force control behaviour also remains an open question. This work investigated these questions through a novel experimental setup that isolates motor intention from sensory feedback and combines both components flexibly for hand grip. We applied functional electrical stimulation to induce passive gripping and collected grip force together with multi-modal brain signals. Significant differences in neural patterns were found in the EEG time-frequency representation by comparing brain responses across task conditions, including voluntary movement, motor imagery, and passive perception. Additionally, we present a multi-modal graph fusion model combining EEG and fNIRS for continuous bimanual grip force decoding. These contributions benefit the development of neural interfaces for rehabilitation and assistive devices that involve force manipulation or operate in isometric schemes.},
}
MeSH Terms:
Humans
*Hand Strength/physiology
*Electroencephalography/methods
Brain-Computer Interfaces
Spectroscopy, Near-Infrared/methods
Male
Adult
Signal Processing, Computer-Assisted
Female
RevDate: 2025-12-03
CmpDate: 2025-12-03
Multipolar Hybrid Stimulation for Visual Prostheses: Enhancing Resolution and Specificity.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Advancements in neural stimulation techniques are essential for improving the precision and efficiency of brain-machine interfaces, particularly in visual cortical prostheses. These prostheses aim to restore vision by stimulating the visual cortex, but current methods face challenges such as limited spatial resolution, high power consumption, and non-specific activation. This work proposes a multipolar hybrid stimulation approach that combines electrical and optical neuromodulation to mitigate these limitations. Unlike traditional monopolar and bipolar methods, which require numerous electrodes or suffer from crosstalk and timing issues, the proposed system employs polarity switching and selective electrode control, enabling customizable electric fields alongside optogenetics for precise neural targeting and enhanced resolution. By utilizing subthreshold electrical and optogenetic stimulation, this approach improves spatial selectivity, minimizes crosstalk, and reduces power consumption. The conceptual design for neural tissue stimulation is presented, with ongoing efforts focused on integrating this system into a microelectronic chip. By addressing key limitations in current prosthetic systems, this work contributes to the development of more efficient and scalable solutions for visual restoration.
Additional Links: PMID-41337056
Citation:
@article {pmid41337056,
year = {2025},
author = {Abdo, EA and Yakovlev, A and Degenaar, P},
title = {Multipolar Hybrid Stimulation for Visual Prostheses: Enhancing Resolution and Specificity.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254606},
pmid = {41337056},
issn = {2694-0604},
mesh = {*Visual Prosthesis ; Humans ; *Electric Stimulation/methods ; Optogenetics ; Brain-Computer Interfaces ; *Visual Cortex/physiology ; Animals ; },
abstract = {Advancements in neural stimulation techniques are essential for improving the precision and efficiency of brain-machine interfaces, particularly in visual cortical prostheses. These prostheses aim to restore vision by stimulating the visual cortex, but current methods face challenges such as limited spatial resolution, high power consumption, and non-specific activation. This work proposes a multipolar hybrid stimulation approach that combines electrical and optical neuromodulation to mitigate these limitations. Unlike traditional monopolar and bipolar methods, which require numerous electrodes or suffer from crosstalk and timing issues, the proposed system employs polarity switching and selective electrode control, enabling customizable electric fields alongside optogenetics for precise neural targeting and enhanced resolution. By utilizing subthreshold electrical and optogenetic stimulation, this approach improves spatial selectivity, minimizes crosstalk, and reduces power consumption. The conceptual design for neural tissue stimulation is presented, with ongoing efforts focused on integrating this system into a microelectronic chip. By addressing key limitations in current prosthetic systems, this work contributes to the development of more efficient and scalable solutions for visual restoration.},
}
MeSH Terms:
*Visual Prosthesis
Humans
*Electric Stimulation/methods
Optogenetics
Brain-Computer Interfaces
*Visual Cortex/physiology
Animals
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Neuromorphic Approach for Brain-Machine Interface Using Spiking Neural Networks.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Brain-machine interfaces (BMIs) have emerged as a promising technology for restoring motor function in paralyzed individuals through direct neural control of prosthetic devices. While conventional decoding algorithms have achieved considerable success, they often overlook the fundamental biological properties of neural information processing. This paper presents a novel approach using Spiking Neural Networks (SNNs), a neuromorphic computing paradigm that closely mimics biological neural dynamics through event-driven processing and spike-timing-dependent plasticity. An SNN-based decoder was implemented for offline decoding of intracortical neural recordings from the primary motor cortex (M1) and dorsal premotor cortex (PMd) into continuous 2D cursor movements in a macaque monkey. This approach leverages the temporal processing capabilities of SNNs to capture the complex, time-varying nature of neural representations, potentially enabling more naturalistic and adaptive BMI control.
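The basic unit of such spiking networks is commonly a leaky integrate-and-fire (LIF) neuron. A minimal sketch with illustrative time constants (the paper's actual neuron model and parameters may differ):

```python
def lif(currents, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: returns a 0/1 spike train."""
    v, spikes = 0.0, []
    for i in currents:
        v += dt * (-v / tau + i)   # leaky membrane integration
        if v >= v_thresh:          # threshold crossing emits a spike
            spikes.append(1)
            v = v_reset            # hard reset after the spike
        else:
            spikes.append(0)
    return spikes

spikes = lif([0.3] * 20)  # constant drive yields a regular spike train
```

The event-driven character comes from the fact that downstream computation is triggered only by these discrete spikes rather than by dense activations.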
Additional Links: PMID-41337025
Citation:
@article {pmid41337025,
year = {2025},
author = {Liu, G and Yan, Y and He, S and Cai, J and Cheok, AD and Qi Wu, E and Song, A},
title = {A Neuromorphic Approach for Brain-Machine Interface Using Spiking Neural Networks.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11254255},
pmid = {41337025},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Animals ; *Neural Networks, Computer ; Algorithms ; *Action Potentials/physiology ; Motor Cortex/physiology ; Macaca mulatta ; Humans ; },
abstract = {Brain-machine interfaces (BMIs) have emerged as a promising technology for restoring motor function in paralyzed individuals through direct neural control of prosthetic devices. While conventional decoding algorithms have achieved considerable success, they often overlook the fundamental biological properties of neural information processing. This paper presents a novel approach using Spiking Neural Networks (SNNs), a neuromorphic computing paradigm that closely mimics biological neural dynamics through event-driven processing and spike-timing-dependent plasticity. An SNN-based decoder was implemented for offline decoding of intracortical neural recordings from the primary motor cortex (M1) and dorsal premotor cortex (PMd) into continuous 2D cursor movements in a macaque monkey. This approach leverages the temporal processing capabilities of SNNs to capture the complex, time-varying nature of neural representations, potentially enabling more naturalistic and adaptive BMI control.},
}
MeSH Terms:
*Brain-Computer Interfaces
Animals
*Neural Networks, Computer
Algorithms
*Action Potentials/physiology
Motor Cortex/physiology
Macaca mulatta
Humans
RevDate: 2025-12-03
CmpDate: 2025-12-03
Dual-layer hand gestures decoding with wireless epidural brain-computer interface in a tetraplegia.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Spinal cord injury disrupts the neural connections between the brain and limbs, resulting in tetraplegia. Brain-computer interfaces (BCIs) hold promise for enabling voluntary limb movements in tetraplegic patients, yet achieving fine motor control of the hand remains a challenge. Invasive BCIs based on intracortical electrode arrays have demonstrated real-time multi-gesture decoding; however, their long-term safety is a major barrier to clinical application. In this study, a tetraplegic patient was implanted with our recently developed wireless minimally invasive BCI, which records epidural field potentials from eight electrodes over the sensorimotor cortex to decode continuous hand movement intentions. Natural hand movements can be decomposed into two layers: high-level movement states and low-level finger kinematics. Accordingly, we propose a dual-layer decoding algorithm for multi-gesture BCI decoding. The upper layer infers the movement state using a hidden Markov model, while the lower layer decodes finger motion variables through a mixture of experts and filters them with a state-specific linear system. This approach enables the real-time decoding of six hand gestures, outperforming classical decoders and recurrent neural networks. The proposed dual-layer framework achieves multi-gesture decoding solely from epidural signals, paving the way for flexible and robust BCI control of hand movement.
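The upper layer's hidden-Markov-model state inference can be sketched with a standard Viterbi pass. The two-state rest/move model and all probabilities below are invented for illustration; the paper's actual state space and likelihood model are not specified here.

```python
import numpy as np

def viterbi(log_lik, log_trans, log_init):
    """Most likely state path given per-step log-likelihoods (T x S)."""
    T, S = log_lik.shape
    dp = np.empty((T, S))
    back = np.zeros((T, S), dtype=int)
    dp[0] = log_init + log_lik[0]
    for t in range(1, T):
        scores = dp[t - 1][:, None] + log_trans   # scores[from_state, to_state]
        back[t] = np.argmax(scores, axis=0)       # best predecessor per state
        dp[t] = scores[back[t], np.arange(S)] + log_lik[t]
    path = [int(np.argmax(dp[-1]))]               # backtrack from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Invented two-state model: 0 = rest, 1 = move, with sticky transitions.
log_trans = np.log([[0.9, 0.1], [0.1, 0.9]])
log_init = np.log([0.5, 0.5])
log_lik = np.log([[0.9, 0.1]] * 3 + [[0.1, 0.9]] * 2)  # obs favour rest, then move
path = viterbi(log_lik, log_trans, log_init)
```

In the dual-layer scheme, the inferred state would then select which expert and which state-specific linear system decode the continuous finger kinematics.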
Additional Links: PMID-41337011
Citation:
@article {pmid41337011,
year = {2025},
author = {Yao, R and Du, Z and Liang, F and Li, W and Hong, B},
title = {Dual-layer hand gestures decoding with wireless epidural brain-computer interface in a tetraplegia.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11254206},
pmid = {41337011},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; *Quadriplegia/physiopathology ; *Gestures ; *Hand/physiopathology/physiology ; *Wireless Technology ; Algorithms ; Electroencephalography ; },
abstract = {Spinal cord injury disrupts the neural connections between the brain and limbs, resulting in tetraplegia. Brain-computer interfaces (BCIs) hold promise for enabling voluntary limb movements in tetraplegic patients, yet achieving fine motor control of the hand remains a challenge. Invasive BCIs based on intracortical electrode arrays have demonstrated real-time multi-gesture decoding; however, their long-term safety is a major barrier to clinical application. In this study, a tetraplegic patient was implanted with our recently developed wireless minimally invasive BCI, which records epidural field potentials from eight electrodes over the sensorimotor cortex to decode continuous hand movement intentions. Natural hand movements can be decomposed into two layers: high-level movement states and low-level finger kinematics. Accordingly, we propose a dual-layer decoding algorithm for multi-gesture BCI decoding. The upper layer infers the movement state using a hidden Markov model, while the lower layer decodes finger motion variables through a mixture of experts and filters them with a state-specific linear system. This approach enables the real-time decoding of six hand gestures, outperforming classical decoders and recurrent neural networks. The proposed dual-layer framework achieves multi-gesture decoding solely from epidural signals, paving the way for flexible and robust BCI control of hand movement.},
}
MeSH Terms:
Humans
*Brain-Computer Interfaces
*Quadriplegia/physiopathology
*Gestures
*Hand/physiopathology/physiology
*Wireless Technology
Algorithms
Electroencephalography
RevDate: 2025-12-03
CmpDate: 2025-12-03
MI-LTN: A Neurosymbolic Framework for Enhanced EEG Feature Extraction and Model Interpretability in MI-BCI.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Brain-Computer Interface (BCI) is a cutting-edge technology that facilitates human-computer interaction. Motor imagery electroencephalogram (MI-EEG) decoding has emerged as a significant direction in BCI research. Despite remarkable advances in deep learning for EEG signal decoding in recent years, two major challenges persist: the comprehensive representation and extraction of features, and the lack of interpretability. To address these issues, we propose a novel neurosymbolic framework termed MI-LTN (Motor Imagery Logic Tensor Network), which incorporates logical constraints into model training using the Logic Tensor Network (LTN) and employs Shapley values to evaluate and adjust the importance of channels. Our experimental results show that MI-LTN achieves classification accuracies of 86.00% and 88.84% on the BCI IV 2a and BCI IV 2b datasets, respectively. These results demonstrate the great potential of LTN in MI-EEG decoding.
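Shapley-value channel attribution, as used above, assigns each channel its average marginal contribution to a model score. A toy exact-enumeration sketch over a hypothetical three-channel value function (the channel names and the `acc` function are invented, not the authors' model):

```python
from itertools import combinations
from math import factorial

def shapley(players, value):
    """Exact Shapley values: average marginal contribution of each player."""
    n = len(players)
    phi = {}
    for p in players:
        rest = [q for q in players if q != p]
        total = 0.0
        for k in range(n):
            for coal in combinations(rest, k):
                # Weight of a coalition of size k not containing p.
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += w * (value(set(coal) | {p}) - value(set(coal)))
        phi[p] = total
    return phi

# Hypothetical value function: accuracy gained by including each channel.
acc = lambda chans: 0.5 + 0.3 * ("C3" in chans) + 0.1 * ("C4" in chans)
phi = shapley(["C3", "C4", "Cz"], acc)  # C3 dominates, Cz contributes nothing
```

Exact enumeration costs 2^n evaluations per channel, so practical EEG settings with many channels rely on sampling-based Shapley approximations instead.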
Additional Links: PMID-41336923
Citation:
@article {pmid41336923,
year = {2025},
author = {Chen, X and Peng, Y and Li, C and Pan, Y and Ding, N and Zhang, S},
title = {MI-LTN: A Neurosymbolic Framework for Enhanced EEG Feature Extraction and Model Interpretability in MI-BCI.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11252655},
pmid = {41336923},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; *Brain-Computer Interfaces ; Humans ; *Signal Processing, Computer-Assisted ; Algorithms ; },
abstract = {Brain-Computer Interface (BCI) is a cutting-edge technology that facilitates human-computer interaction. Motor Imagery Electroencephalogram (MI-EEG) decoding technology has emerged as a significant direction in BCI research. Despite the remarkable advancements in deep learning for EEG signal decoding in recent years, two major challenges persist: the comprehensive representation and extraction of features, and the lack of interpretability. To address these issues, we propose a novel neurosymbolic framework termed MI-LTN (Motor Imagery Logic Tensor Network), which incorporates logical constraints into the training model using the Logic Tensor Network (LTN) and employs Shapley values to evaluate and adjust the importance of channels. Our experimental results show that MI-LTN achieves classification accuracies of 86.00% and 88.84% on the BCI IV 2a and BCI IV 2b datasets, respectively. These results demonstrate the great potential of LTN in MI-EEG decoding.},
}
MeSH Terms:
*Electroencephalography/methods
*Brain-Computer Interfaces
Humans
*Signal Processing, Computer-Assisted
Algorithms
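The Shapley-value step this abstract mentions for weighting EEG channels can be illustrated exactly for a handful of channels. The value function below is a toy additive game chosen purely for demonstration, not the paper's classifier:

```python
import itertools
from math import factorial

def shapley_values(n, v):
    """Exact Shapley value of each of n channels for a coalition value
    function v: frozenset of channel indices -> score."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in itertools.combinations(others, r):
                # weight = |S|! (n - |S| - 1)! / n!
                w = factorial(r) * factorial(n - r - 1) / factorial(n)
                phi[i] += w * (v(frozenset(S) | {i}) - v(frozenset(S)))
    return phi

# toy value function over 3 channels: channel 0 contributes 0.5 to accuracy,
# channel 1 contributes 0.3, channel 2 contributes nothing
def v(S):
    return 0.5 * (0 in S) + 0.3 * (1 in S)

phi = shapley_values(3, v)
```

By the efficiency axiom, the values sum to the score of the full channel set, which is what makes them usable as importance weights.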
RevDate: 2025-12-03
CmpDate: 2025-12-03
Electrophysiological Characterisation of Commercial Ear-EEG Devices.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Ear-EEG devices are advanced wearables revolutionizing EEG technology by combining comfort and portability. With the increasing availability of commercial ear-EEG devices, there is a need for an independent characterisation of their electrophysiological performance to guide users and researchers. Here, we evaluate the performance of the IDUN Guardian Earbuds (IGEB, IDUN Technologies AG) by analysing electrophysiological responses to several well-established EEG paradigms, including event-related potentials (ERPs), auditory steady-state response (ASSR), steady-state visually evoked potential (SSVEP), and alpha block, and comparing them to standard scalp-based EEG recordings acquired simultaneously from eight participants utilizing a validation toolkit previously developed in our lab. Results indicate that the in-ear device is capable of detecting SSVEPs. However, we did not observe ERPs, ASSRs, or alpha blocking. Simulating in-ear EEG with electrode T8 referenced to T7 slightly improved the quality of the signal, which was further enhanced with midline reference electrodes. Clinical Relevance: Characterising this technology marks a step forward, providing an independent assessment of commercially available devices with a view to expanding EEG applications, from long-term monitoring and wearable health solutions to advanced brain-machine interfaces (BMI).
Additional Links: PMID-41336899
Citation:
@article {pmid41336899,
year = {2025},
author = {Bradshaw Bernacchi, JK and Lopez Valdes, A},
title = {Electrophysiological Characterisation of Commercial Ear-EEG Devices.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11252639},
pmid = {41336899},
issn = {2694-0604},
mesh = {*Electroencephalography/instrumentation ; Humans ; *Ear/physiology ; Adult ; Male ; Female ; Wearable Electronic Devices ; Electrodes ; Brain-Computer Interfaces ; Equipment Design ; Young Adult ; Signal Processing, Computer-Assisted ; },
abstract = {Ear-EEG devices are advanced wearables revolutionizing EEG technology by combining comfort and portability. With the increasing availability of commercial ear-EEG devices, there is a need for an independent characterisation of their electrophysiological performance to guide users and researchers. Here, we evaluate the performance of the IDUN Guardian Earbuds (IGEB, IDUN Technologies AG) by analysing electrophysiological responses to several well-established EEG paradigms, including event-related potentials (ERPs), auditory steady-state response (ASSR), steady-state visually evoked potential (SSVEP), and alpha block, and comparing them to standard scalp-based EEG recordings acquired simultaneously from eight participants utilizing a validation toolkit previously developed in our lab. Results indicate that the in-ear device is capable of detecting SSVEPs. However, we did not observe ERPs, ASSRs, or alpha blocking. Simulating in-ear EEG with electrode T8 referenced to T7 slightly improved the quality of the signal, which was further enhanced with midline reference electrodes. Clinical Relevance: Characterising this technology marks a step forward, providing an independent assessment of commercially available devices with a view to expanding EEG applications, from long-term monitoring and wearable health solutions to advanced brain-machine interfaces (BMI).},
}
MeSH Terms:
*Electroencephalography/instrumentation
Humans
*Ear/physiology
Adult
Male
Female
Wearable Electronic Devices
Electrodes
Brain-Computer Interfaces
Equipment Design
Young Adult
Signal Processing, Computer-Assisted
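SSVEP detection of the kind reported here is often quantified as a narrow-band spectral SNR at the stimulation frequency. This is a hedged sketch of that common approach, not the validation toolkit the paper used; the sampling rate and stimulus frequency are illustrative:

```python
import numpy as np

def ssvep_snr(signal, fs, f_stim, n_neighbours=4):
    """Power at the stimulation frequency divided by the mean power of the
    neighbouring FFT bins (a standard SSVEP signal-to-noise measure)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    target = int(np.argmin(np.abs(freqs - f_stim)))
    lo, hi = max(target - n_neighbours, 1), target + n_neighbours + 1
    neighbours = np.r_[spectrum[lo:target], spectrum[target + 1:hi]]
    return spectrum[target] / neighbours.mean()

# synthetic check: a 12 Hz "flicker response" buried in white noise should
# give a high SNR at 12 Hz and a near-unity SNR elsewhere
rng = np.random.default_rng(0)
fs, secs = 250, 4
t = np.arange(fs * secs) / fs
eeg = np.sin(2 * np.pi * 12 * t) + 0.1 * rng.standard_normal(t.size)
snr = ssvep_snr(eeg, fs, 12.0)
```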
RevDate: 2025-12-03
CmpDate: 2025-12-03
Decoding Attention through EEG: Paving the Way for BCI Applications in Attention-Related Disorders.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
This study investigates attention-related traits in EEG signals to assess the potential of Electroencephalography (EEG) as an objective diagnostic tool for attention-related disorders such as ADHD, anxiety, and learning disabilities. EEG data were collected from 31 participants, including individuals with ADHD, while they performed a Go/No-Go task designed to evaluate attention and impulsivity. The analysis focused on the spectral characteristics of brain activity, examining the relative power of theta, alpha, and beta frequency bands, along with the theta-to-beta ratio (TBR), to identify distinguishing patterns of attention-related brain activity. Results indicate that the ADHD group exhibited higher theta power and consistently elevated TBR, particularly in the Frontal, Temporal, and Occipital brain regions. Machine learning models, such as K-Nearest Neighbors, effectively classified ADHD and Control groups based on TBR with high accuracy. Additionally, the ADHD group demonstrated faster reaction times but made more errors on the Go/No-Go task, highlighting difficulties with sustained attention. These findings suggest that this approach holds promise for developing objective diagnostic tools for attention-related disorders. While some limitations exist, this study underscores the potential of integrating EEG with machine learning to create brain-computer interface (BCI) systems for assessing attention processes.
Additional Links: PMID-41336877
Citation:
@article {pmid41336877,
year = {2025},
author = {Torgersen, EL and Ragnarson, I and Molinas, M},
title = {Decoding Attention through EEG: Paving the Way for BCI Applications in Attention-Related Disorders.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11252840},
pmid = {41336877},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Attention Deficit Disorder with Hyperactivity/physiopathology/diagnosis ; *Attention/physiology ; *Brain-Computer Interfaces ; Male ; Female ; Adult ; Machine Learning ; Young Adult ; Brain/physiopathology ; },
abstract = {This study investigates attention-related traits in EEG signals to assess the potential of Electroencephalography (EEG) as an objective diagnostic tool for attention-related disorders such as ADHD, anxiety, and learning disabilities. EEG data were collected from 31 participants, including individuals with ADHD, while they performed a Go/No-Go task designed to evaluate attention and impulsivity. The analysis focused on the spectral characteristics of brain activity, examining the relative power of theta, alpha, and beta frequency bands, along with the theta-to-beta ratio (TBR), to identify distinguishing patterns of attention-related brain activity. Results indicate that the ADHD group exhibited higher theta power and consistently elevated TBR, particularly in the Frontal, Temporal, and Occipital brain regions. Machine learning models, such as K-Nearest Neighbors, effectively classified ADHD and Control groups based on TBR with high accuracy. Additionally, the ADHD group demonstrated faster reaction times but made more errors on the Go/No-Go task, highlighting difficulties with sustained attention. These findings suggest that this approach holds promise for developing objective diagnostic tools for attention-related disorders. While some limitations exist, this study underscores the potential of integrating EEG with machine learning to create brain-computer interface (BCI) systems for assessing attention processes.},
}
MeSH Terms:
Humans
*Electroencephalography/methods
*Attention Deficit Disorder with Hyperactivity/physiopathology/diagnosis
*Attention/physiology
*Brain-Computer Interfaces
Male
Female
Adult
Machine Learning
Young Adult
Brain/physiopathology
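The theta-to-beta ratio (TBR) feature central to this study is straightforward to compute from band powers. A minimal sketch, assuming a periodogram estimate on one channel and the conventional 4-8 Hz theta and 13-30 Hz beta bands (not the study's exact pipeline):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Periodogram power summed over [f_lo, f_hi) Hz."""
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum()

def theta_beta_ratio(signal, fs):
    """TBR = theta (4-8 Hz) power over beta (13-30 Hz) power."""
    return band_power(signal, fs, 4, 8) / band_power(signal, fs, 13, 30)

# synthetic check: a theta-dominant trace should score higher than a
# beta-dominant one, mirroring the elevated TBR reported for the ADHD group
fs = 256
t = np.arange(fs * 2) / fs
theta_heavy = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
beta_heavy = 0.5 * np.sin(2 * np.pi * 6 * t) + 2.0 * np.sin(2 * np.pi * 20 * t)
tbr_theta = theta_beta_ratio(theta_heavy, fs)
tbr_beta = theta_beta_ratio(beta_heavy, fs)
```

A scalar feature like this is exactly what simple classifiers such as K-Nearest Neighbors can separate well, as the abstract reports.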
RevDate: 2025-12-03
CmpDate: 2025-12-03
XAGnet: Cross-Attention Graph Network for Detecting Auditory Attention in Ear-EEG Signals.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Auditory Attention Detection (AAD) is essential for developing advanced brain-computer interfaces including neuro-steered hearing technologies capable of functioning in complex auditory environments. In this study, we propose XAGnet, a novel method that leverages ear-centered EEG (ear-EEG) data to model both intra-ear and inter-ear neural dependencies for detection of auditory attention to one of the spatial locations. Specifically, Graph Convolutional Networks (GCNs) are applied separately to left and right ear-EEG signals to extract spatial features from each side for intra-ear interactions. A cross-attention mechanism is then introduced to model inter-ear interactions between the left and right ears. The attended features are combined for multi-class classification, with each class representing a speaker or a speaking location. We evaluate our method on a publicly available ear-EEG dataset, involving AAD tasks with four speakers. Experimental results demonstrate that XAGnet outperforms baseline models, highlighting the effectiveness of modeling both intra-ear and inter-ear dependencies in AAD tasks.
Additional Links: PMID-41336846
Citation:
@article {pmid41336846,
year = {2025},
author = {Pahuja, S and Ivucic, G and Cai, S and De Silva, D and Li, H and Schultz, T},
title = {XAGnet: Cross-Attention Graph Network for Detecting Auditory Attention in Ear-EEG Signals.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11252872},
pmid = {41336846},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; Humans ; *Attention/physiology ; *Signal Processing, Computer-Assisted ; *Ear/physiology ; Algorithms ; *Neural Networks, Computer ; Brain-Computer Interfaces ; },
abstract = {Auditory Attention Detection (AAD) is essential for developing advanced brain-computer interfaces including neuro-steered hearing technologies capable of functioning in complex auditory environments. In this study, we propose XAGnet, a novel method that leverages ear-centered EEG (ear-EEG) data to model both intra-ear and inter-ear neural dependencies for detection of auditory attention to one of the spatial locations. Specifically, Graph Convolutional Networks (GCNs) are applied separately to left and right ear-EEG signals to extract spatial features from each side for intra-ear interactions. A cross-attention mechanism is then introduced to model inter-ear interactions between the left and right ears. The attended features are combined for multi-class classification, with each class representing a speaker or a speaking location. We evaluate our method on a publicly available ear-EEG dataset, involving AAD tasks with four speakers. Experimental results demonstrate that XAGnet outperforms baseline models, highlighting the effectiveness of modeling both intra-ear and inter-ear dependencies in AAD tasks.},
}
MeSH Terms:
*Electroencephalography/methods
Humans
*Attention/physiology
*Signal Processing, Computer-Assisted
*Ear/physiology
Algorithms
*Neural Networks, Computer
Brain-Computer Interfaces
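The inter-ear step of XAGnet, scaled dot-product cross-attention between left-ear and right-ear feature sets, can be sketched in isolation. The shapes and random features below are hypothetical, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention: left-ear features query right-ear
    features, so each left node attends over all right nodes."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_q, n_k) similarities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ values, weights

rng = np.random.default_rng(1)
left = rng.standard_normal((5, 8))    # 5 left-ear graph nodes, 8-dim features
right = rng.standard_normal((7, 8))   # 7 right-ear graph nodes
attended, w = cross_attention(left, right, right)
```

In the paper, the per-side features feeding this step come from GCNs applied separately to each ear's channels; here they are random stand-ins.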
RevDate: 2025-12-03
CmpDate: 2025-12-03
Hybrid CNN-Transformer Model for Accurate Classification of Human Attention Levels Using Workplace EEG Data.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Accurately detecting human attention levels is a key challenge in cognitive neuroscience, with broad application value in improving productivity. Although Electroencephalography (EEG) signals are often used to study cognitive states, most studies still rely on data collected in controlled laboratory environments. This paper collects EEG data from employees during their daily work using a commercial single-channel EEG headband, making attention detection closer to real-world applications and increasing its feasibility and promotion value. We propose a new classification method based on a multi-head attention transformer to identify six different attention levels. We first perform a Short-Time Fourier Transform (STFT) on the EEG signal. Subsequently, we construct a transformer architecture to effectively model long-range dependencies and subtle pattern changes in EEG data using self-attention and stacked encoder layers. Experimental results show that our proposed model achieves 87.37% classification accuracy in the six-level attention classification task, outperforming traditional high-performance methods and demonstrating superior performance compared to existing similar approaches. This achievement not only verifies the potential of the transformer architecture in EEG attention level classification but also provides new possibilities for developing advanced tools in fields such as brain-computer interface (BCI) and cognitive monitoring.
Additional Links: PMID-41336840
Citation:
@article {pmid41336840,
year = {2025},
author = {Jahanjoo, A and Wei, Y and Haghi, M and Schorpf, P and TaheriNejad, N},
title = {Hybrid CNN-Transformer Model for Accurate Classification of Human Attention Levels Using Workplace EEG Data.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11251604},
pmid = {41336840},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Attention/physiology ; *Neural Networks, Computer ; *Workplace ; Signal Processing, Computer-Assisted ; Algorithms ; Brain-Computer Interfaces ; Fourier Analysis ; },
abstract = {Accurately detecting human attention levels is a key challenge in cognitive neuroscience, with broad application value in improving productivity. Although Electroencephalography (EEG) signals are often used to study cognitive states, most studies still rely on data collected in controlled laboratory environments. This paper collects EEG data from employees during their daily work using a commercial single-channel EEG headband, making attention detection closer to real-world applications and increasing its feasibility and promotion value. We propose a new classification method based on a multi-head attention transformer to identify six different attention levels. We first perform a Short-Time Fourier Transform (STFT) on the EEG signal. Subsequently, we construct a transformer architecture to effectively model long-range dependencies and subtle pattern changes in EEG data using self-attention and stacked encoder layers. Experimental results show that our proposed model achieves 87.37% classification accuracy in the six-level attention classification task, outperforming traditional high-performance methods and demonstrating superior performance compared to existing similar approaches. This achievement not only verifies the potential of the transformer architecture in EEG attention level classification but also provides new possibilities for developing advanced tools in fields such as brain-computer interface (BCI) and cognitive monitoring.},
}
MeSH Terms:
Humans
*Electroencephalography/methods
*Attention/physiology
*Neural Networks, Computer
*Workplace
Signal Processing, Computer-Assisted
Algorithms
Brain-Computer Interfaces
Fourier Analysis
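The STFT preprocessing step this abstract describes turns a raw single-channel trace into a time-frequency matrix that a transformer can consume. A minimal sketch with assumed window and hop sizes (the paper does not state its parameters here):

```python
import numpy as np

def stft(signal, n_fft=64, hop=32):
    """Hann-windowed sliding FFT; returns magnitudes of shape
    (n_frames, n_fft // 2 + 1)."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)

fs = 128
t = np.arange(fs * 2) / fs             # 2 s of synthetic single-channel EEG
signal = np.sin(2 * np.pi * 10 * t)    # a 10 Hz alpha-band tone
spec = stft(signal)
peak_bins = spec.argmax(axis=1)        # dominant frequency bin per frame
```

With `n_fft=64` at 128 Hz the bin width is 2 Hz, so the 10 Hz tone should dominate bin 5 in every frame; each row of `spec` then plays the role of one token in the transformer's input sequence.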
RevDate: 2025-12-03
CmpDate: 2025-12-03
Design of an Asynchronous BMI with Interpretable Neural Networks for Exoskeleton Control: A Proof of Concept on Data Evolution and Scalability Over One Week.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
This paper presents a concept study of a week-long experimental protocol for controlling a lower-limb exoskeleton via a brain-machine interface. The system employed a neural network adapted from EEGNet that distinguishes motor imagery and resting states in a two-dimensional space under both static and movement conditions. Each day, the model was fine-tuned with that day's training data as well as data from previous days. Daily closed-loop asynchronous evaluations were carried out to assess real-time exoskeleton control performance. The results indicate steady improvements in system accuracy over the week, likely due to the cumulative integration of additional data, which enhanced the neural network-based approach to cognitive state classification in a multi-day setting. Clinical relevance: Incorporating repetitive robotic therapies in which the patient can actively engage in rehabilitation is a core goal of neurorehabilitation. Developing non-invasive brain-machine interfaces that enable an increasingly effective mind-robot connection is of great importance. This work outlines a protocol for creating a brain-machine interface controlled by motor imagery.
Additional Links: PMID-41336806
Citation:
@article {pmid41336806,
year = {2025},
author = {Quiles, V and Polo-Hortiguela, C and Soriano-Segura, P and Ortiz, M and Ianez, E and Azorin, JM},
title = {Design of an Asynchronous BMI with Interpretable Neural Networks for Exoskeleton Control: A Proof of Concept on Data Evolution and Scalability Over One Week.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11251541},
pmid = {41336806},
issn = {2694-0604},
mesh = {Humans ; *Exoskeleton Device ; *Neural Networks, Computer ; *Brain-Computer Interfaces ; Robotics ; Equipment Design ; },
abstract = {This paper presents a concept study of a week-long experimental protocol for controlling a lower-limb exoskeleton via a brain-machine interface. The system employed a neural network adapted from EEGNet that distinguishes motor imagery and resting states in a two-dimensional space under both static and movement conditions. Each day, the model was fine-tuned with that day's training data as well as data from previous days. Daily closed-loop asynchronous evaluations were carried out to assess real-time exoskeleton control performance. The results indicate steady improvements in system accuracy over the week, likely due to the cumulative integration of additional data, which enhanced the neural network-based approach to cognitive state classification in a multi-day setting. Clinical relevance: Incorporating repetitive robotic therapies in which the patient can actively engage in rehabilitation is a core goal of neurorehabilitation. Developing non-invasive brain-machine interfaces that enable an increasingly effective mind-robot connection is of great importance. This work outlines a protocol for creating a brain-machine interface controlled by motor imagery.},
}
MeSH Terms:
Humans
*Exoskeleton Device
*Neural Networks, Computer
*Brain-Computer Interfaces
Robotics
Equipment Design
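The multi-day protocol structure (each day's data appended to a growing pool, refit, then evaluate) can be sketched schematically. A nearest-class-mean classifier and synthetic two-class features stand in for the EEGNet variant and real sessions; everything here is hypothetical scaffolding, not the study's code:

```python
import numpy as np

class NearestMean:
    """Toy stand-in for the fine-tuned network: classify by nearest class mean."""
    def fit(self, X, y):
        self.means = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self
    def predict(self, X):
        classes = sorted(self.means)
        d = np.stack([np.linalg.norm(X - self.means[c], axis=1) for c in classes])
        return np.array(classes)[d.argmin(axis=0)]

rng = np.random.default_rng(2)
pool_X, pool_y = [], []
accuracies = []
for day in range(5):
    # synthetic daily session: rest (class 0) vs motor imagery (class 1)
    X = np.r_[rng.normal(0, 1, (40, 4)), rng.normal(1.5, 1, (40, 4))]
    y = np.r_[np.zeros(40), np.ones(40)]
    pool_X.append(X); pool_y.append(y)
    # refit on everything seen so far, then score today's closed-loop session
    model = NearestMean().fit(np.concatenate(pool_X), np.concatenate(pool_y))
    accuracies.append(float((model.predict(X) == y).mean()))
```

The point of the sketch is the loop shape, cumulative pooling before each day's evaluation, rather than the classifier itself.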
RevDate: 2025-12-03
CmpDate: 2025-12-03
Decoding Hybrid EEG-fNIRS Upper Limb Motor Execution with Capsule Dynamic Graph Convolutional Neural Network.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
In this study, we propose a capsule dynamic graph convolution network (EF-CapsDGCN) for accurate decoding of upper limb motor execution (ME) based on both electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) signals. In EF-CapsDGCN, EEG/fNIRS features are extracted using the same convolutional architecture but different parameter settings. The extracted features from both modalities are then dynamically routed to capsules. Afterwards, the single-modality capsules are concatenated to form EEG-fNIRS multimodal capsules. Each capsule is treated as a graph node, and hidden feature representations are learned through dynamic graph convolution. Finally, after concatenating the original capsules with the learned hidden features, the combined features are passed through multi-head self-attention and then flattened to feed into a fully connected layer for classification. Compared to current state-of-the-art methods such as ANN, DeepConvNet, DNN, and EF-Net, the proposed method demonstrated superior classification performance on the multimodal EEG-fNIRS dataset HYGRIP. Furthermore, our model achieves at least 8% higher classification accuracy with multimodal EEG-fNIRS than with single-modality EEG/fNIRS. These results demonstrate the potential of capsule dynamic graph convolution for the multimodal fusion of EEG and fNIRS. The proposed model is promising for accurately decoding motor execution-based brain-computer interfaces with multimodal EEG-fNIRS signals. Overall, this study provides an effective solution for multimodal BCI decoding. Clinical Relevance: This study demonstrates that integrating EEG and fNIRS signals via a capsule dynamic graph convolution network (EF-CapsDGCN) improves upper limb motor execution decoding accuracy by at least 8% compared to single-modality approaches, offering clinicians a more reliable tool for developing brain-computer interface systems to enhance rehabilitation or assistive device control in patients with motor impairments.
Additional Links: PMID-41336805
Citation:
@article {pmid41336805,
year = {2025},
author = {Yuan, Z and Li, Y and Zhang, H and Liu, X and Li, S and Zhu, Y and Wang, H and Li, J and Wang, H},
title = {Decoding Hybrid EEG-fNIRS Upper Limb Motor Execution with Capsule Dynamic Graph Convolutional Neural Network.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11251587},
pmid = {41336805},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; Spectroscopy, Near-Infrared/methods ; *Neural Networks, Computer ; *Upper Extremity/physiology ; Signal Processing, Computer-Assisted ; Algorithms ; Brain-Computer Interfaces ; Convolutional Neural Networks ; },
abstract = {In this study, we propose a capsule dynamic graph convolution network (EF-CapsDGCN) for accurate decoding of upper limb motor execution (ME) based on both electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS) signals. In EF-CapsDGCN, EEG/fNIRS features are extracted using the same convolutional architecture but different parameter settings. The extracted features from both modalities are then dynamically routed to capsules. Afterwards, the single-modality capsules are concatenated to form EEG-fNIRS multimodal capsules. Each capsule is treated as a graph node, and hidden feature representations are learned through dynamic graph convolution. Finally, after concatenating the original capsules with the learned hidden features, the combined features are passed through multi-head self-attention and then flattened to feed into a fully connected layer for classification. Compared to current state-of-the-art methods such as ANN, DeepConvNet, DNN, and EF-Net, the proposed method demonstrated superior classification performance on the multimodal EEG-fNIRS dataset HYGRIP. Furthermore, our model achieves at least 8% higher classification accuracy with multimodal EEG-fNIRS than with single-modality EEG/fNIRS. These results demonstrate the potential of capsule dynamic graph convolution for the multimodal fusion of EEG and fNIRS. The proposed model is promising for accurately decoding motor execution-based brain-computer interfaces with multimodal EEG-fNIRS signals. Overall, this study provides an effective solution for multimodal BCI decoding. Clinical Relevance: This study demonstrates that integrating EEG and fNIRS signals via a capsule dynamic graph convolution network (EF-CapsDGCN) improves upper limb motor execution decoding accuracy by at least 8% compared to single-modality approaches, offering clinicians a more reliable tool for developing brain-computer interface systems to enhance rehabilitation or assistive device control in patients with motor impairments.},
}
MeSH Terms:
Humans
*Electroencephalography/methods
Spectroscopy, Near-Infrared/methods
*Neural Networks, Computer
*Upper Extremity/physiology
Signal Processing, Computer-Assisted
Algorithms
Brain-Computer Interfaces
Convolutional Neural Networks
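One recognisable ingredient of the fusion step this abstract describes is the capsule "squash" non-linearity applied after concatenating per-modality capsule vectors. This fragment is illustrative only, with hypothetical shapes, and is not the EF-CapsDGCN code:

```python
import numpy as np

def squash(v, eps=1e-9):
    """Capsule squash: scale each vector so its norm lies in [0, 1) while
    preserving direction, s = (|v|^2 / (1 + |v|^2)) * v / |v|."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return (norm ** 2 / (1.0 + norm ** 2)) * v / (norm + eps)

rng = np.random.default_rng(3)
eeg_capsules = rng.standard_normal((10, 8))     # 10 EEG capsules, 8-dim each
fnirs_capsules = rng.standard_normal((10, 8))   # 10 fNIRS capsules, 8-dim each
# fusion: stack the single-modality capsules into one multimodal capsule set
fused = squash(np.concatenate([eeg_capsules, fnirs_capsules], axis=0))
```

Bounding each capsule's norm below 1 lets the norm be read as the probability that the feature the capsule represents is present, which is what makes the fused capsules comparable across modalities.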
RevDate: 2025-12-03
CmpDate: 2025-12-03
Quantifying Inter- and Intra-Subject Variability of Sensorimotor Desynchronization Induced by Median Nerve Stimulation and Motor Imagery for BCI.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Motor Imagery-based Brain-Computer Interfaces (MI-BCIs) enable users to control external devices by interpreting sensorimotor activity recorded via ElectroEncephaloGraphy (EEG). Median Nerve Stimulation (MNS) has recently emerged as a promising alternative motor task for BCI applications. However, intra- and inter-subject EEG variability remains a major challenge, affecting BCI system reliability. While variability is a well-known issue, its precise sources and impact on different EEG patterns remain unclear, with a lack of formal and quantitative studies of BCI variability. Thus, this study quantifies intra- and inter-subject variability in MNS-induced sensorimotor event-related desynchronization (ERD) and compares it with that of MI. Results show that MI elicits stronger ERD with lower intra-subject variability, suggesting more consistent activation patterns, while inter-subject variability is similar between tasks. Additionally, the variability of classification accuracies based on Riemannian geometry exhibits a similar trend. These findings provide insights into EEG variability and its implications for BCI design. Identifying stable neural patterns could improve MI- and MNS-based BCIs, particularly for applications such as intraoperative awareness monitoring.
Additional Links: PMID-41336758
Citation:
@article {pmid41336758,
year = {2025},
author = {Cueva, VM and Lotte, F and Bougrain, L and Rimbert, S},
title = {Quantifying Inter- and Intra-Subject Variability of Sensorimotor Desynchronization Induced by Median Nerve Stimulation and Motor Imagery for BCI.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254356},
pmid = {41336758},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; Electroencephalography/methods ; *Median Nerve/physiology ; Male ; *Imagination/physiology ; Adult ; Female ; Electric Stimulation ; *Sensorimotor Cortex/physiology ; },
abstract = {Motor Imagery-based Brain-Computer Interfaces (MI-BCIs) enable users to control external devices by interpreting sensorimotor activity recorded via ElectroEncephaloGraphy (EEG). Median Nerve Stimulation (MNS) has recently emerged as a promising alternative motor task for BCI applications. However, intra- and inter-subject EEG variability remains a major challenge, affecting BCI system reliability. While variability is a well-known issue, its precise sources and impact on different EEG patterns remain unclear, with a lack of formal and quantitative studies of BCI variability. Thus, this study quantifies intra- and inter-subject variability in MNS-induced sensorimotor event-related desynchronization (ERD) and compares it with that of MI. Results show that MI elicits stronger ERD with lower intra-subject variability, suggesting more consistent activation patterns, while inter-subject variability is similar between tasks. Additionally, the variability of classification accuracies based on Riemannian geometry exhibits a similar trend. These findings provide insights into EEG variability and its implications for BCI design. Identifying stable neural patterns could improve MI- and MNS-based BCIs, particularly for applications such as intraoperative awareness monitoring.},
}
MeSH Terms:
Humans
*Brain-Computer Interfaces
Electroencephalography/methods
*Median Nerve/physiology
Male
*Imagination/physiology
Adult
Female
Electric Stimulation
*Sensorimotor Cortex/physiology
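The Riemannian-geometry classification this abstract refers to typically represents each trial by its spatial covariance matrix and classifies by distance to class means on the manifold of positive-definite matrices. A hedged sketch with synthetic data, using the log-Euclidean rather than the affine-invariant metric for brevity:

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.log(w)) @ V.T

def trial_covariance(X):
    """Spatial covariance of one trial; X has shape (channels, samples)."""
    X = X - X.mean(axis=1, keepdims=True)
    return X @ X.T / (X.shape[1] - 1) + 1e-6 * np.eye(X.shape[0])

rng = np.random.default_rng(4)

def make_trial(erd):
    """Synthetic trial: 'ERD' attenuates power on one sensorimotor channel."""
    X = rng.standard_normal((4, 200))
    if erd:
        X[0] *= 0.3
    return trial_covariance(X)

# class means computed in the log-Euclidean tangent space
train = {0: [make_trial(False) for _ in range(20)],
         1: [make_trial(True) for _ in range(20)]}
means = {c: sum(logm_spd(C) for C in Cs) / len(Cs) for c, Cs in train.items()}

def classify(C):
    """Minimum log-Euclidean distance to the class means."""
    L = logm_spd(C)
    return min(means, key=lambda c: float(np.linalg.norm(L - means[c])))

pred_erd = classify(make_trial(True))
pred_rest = classify(make_trial(False))
```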
RevDate: 2025-12-03
CmpDate: 2025-12-03
fNIRS Based Comparative Study of Classifiers and Feature Selection Techniques for Finger Tapping.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
This study classifies five-finger movements using machine learning (ML) algorithms and examines how feature optimization methods affect classification performance. Functional near-infrared spectroscopy (fNIRS) signals were acquired from 20 healthy participants as they performed five different finger movements. The recorded signals are represented by a total of 17 spatial features, such as kurtosis, variance, mean, and skewness. Two ML classifiers, Support Vector Machine (SVM) and Extreme Gradient Boosting (XGBoost), are first evaluated on the dataset comprising all features, and their precision, accuracy, F1-score, recall, and processing time are recorded. Three population-based metaheuristic algorithms, Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO), are then used to select the top features, and the same classifiers are applied to the reduced feature sets. Optimized features significantly improve classification performance, with GA and PSO outperforming ACO. XGBoost outperforms SVM and achieves its highest accuracy (94.94%) with GA-optimized features. The study highlights the role of feature selection in improving the efficiency and accuracy of ML models in neuroimaging applications and suggests optimized classification pipelines for brain-computer interface systems.
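The select-evaluate-evolve loop behind GA-based feature selection can be sketched as follows. The fitness function, the `INFORMATIVE` set, and all population parameters are invented for illustration; a real pipeline would score each candidate subset by classifier accuracy on the fNIRS features:

```python
import random
random.seed(1)

N_FEAT = 10
INFORMATIVE = {0, 3, 7}   # hypothetical: only these features separate the classes

def fitness(mask):
    """Toy stand-in for validation accuracy: reward informative features,
    penalize subset size."""
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return 2.0 * hits - 0.5 * sum(mask)

def crossover(a, b):
    cut = random.randrange(1, N_FEAT)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in mask]

# Population of bitmasks; elitist truncation selection over 40 generations.
pop = [[random.randint(0, 1) for _ in range(N_FEAT)] for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

best = max(pop, key=fitness)
selected = {i for i, bit in enumerate(best) if bit}
print("selected features:", sorted(selected))
```

ACO and PSO plug into the same outer loop; only the way candidate subsets are generated changes.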
Additional Links: PMID-41336716
Citation:
@article {pmid41336716,
year = {2025},
author = {Abid, U and Zulfiqar, O and Nazeer, H and Naseer, N and Bo, APL and Khan, H},
title = {fNIRS Based Comparative Study of Classifiers and Feature Selection Techniques for Finger Tapping.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11254285},
pmid = {41336716},
issn = {2694-0604},
mesh = {Humans ; Spectroscopy, Near-Infrared/methods ; *Fingers/physiology ; Algorithms ; Support Vector Machine ; Male ; Movement/physiology ; Machine Learning ; Adult ; Female ; },
abstract = {This study classifies five-finger movements using machine learning (ML) algorithms and examines how feature optimization methods affect classification performance. Functional near-infrared spectroscopy (fNIRS) signals were acquired from 20 healthy participants as they performed five different finger movements. The recorded signals are represented by a total of 17 spatial features, such as kurtosis, variance, mean, and skewness. Two ML classifiers, Support Vector Machine (SVM) and Extreme Gradient Boosting (XGBoost), are first evaluated on the dataset comprising all features, and their precision, accuracy, F1-score, recall, and processing time are recorded. Three population-based metaheuristic algorithms, Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO), are then used to select the top features, and the same classifiers are applied to the reduced feature sets. Optimized features significantly improve classification performance, with GA and PSO outperforming ACO. XGBoost outperforms SVM and achieves its highest accuracy (94.94%) with GA-optimized features. The study highlights the role of feature selection in improving the efficiency and accuracy of ML models in neuroimaging applications and suggests optimized classification pipelines for brain-computer interface systems.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
RISE-iEEG: Robust to Inter-Subject Electrodes Implantation Variability iEEG Classifier.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Intracranial electroencephalography (iEEG) is increasingly used for clinical and brain-computer interface applications due to its high spatial and temporal resolution. However, inter-subject variability in electrode implantation poses a challenge for developing generalized neural decoders. To address this, we introduce RISE-iEEG (Robust to Inter-Subject Electrode Implantation Variability iEEG Classifier), a novel decoder model that is robust to inter-subject electrode implantation variability. RISE-iEEG employs a deep neural network preceded by a participant-specific projection network, which maps each participant's neural data onto a common low-dimensional space, compensating for implantation variability. In other words, the decoder can be applied across multiple participants' data without requiring electrode coordinates for each participant. Across multiple datasets, including the Music Reconstruction and AJILE12 datasets, RISE-iEEG surpasses advanced iEEG decoder models such as HTNet and EEGNet, with an F1 score about 7% higher and an average F1 score of 0.83, the highest among the evaluated methods. Furthermore, analysis of the projection network weights reveals that the Superior Temporal and Postcentral lobes are key encoding nodes for the Music Reconstruction and AJILE12 datasets, consistent with the primary physiological roles of these regions. The model improves decoding accuracy while maintaining interpretability and generalization.
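The participant-specific projection idea can be illustrated with shapes alone. The projection matrices below are random placeholders for the learned projection network, and the channel counts and latent size are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT = 8                            # shared low-dimensional space
chans = {"subj1": 64, "subj2": 48}    # different implantations, different channel counts

# One projection matrix per participant (learned jointly in the paper;
# random here purely to illustrate the shapes involved).
proj = {s: rng.normal(size=(c, LATENT)) / np.sqrt(c) for s, c in chans.items()}

def to_common_space(subject, trials):
    """Map (n_trials, n_channels) neural data into the shared latent space."""
    return trials @ proj[subject]

x1 = rng.normal(size=(20, chans["subj1"]))
x2 = rng.normal(size=(20, chans["subj2"]))
z1 = to_common_space("subj1", x1)
z2 = to_common_space("subj2", x2)

# After projection, data from both participants can feed one shared decoder.
pooled = np.vstack([z1, z2])
print(pooled.shape)
```

The point is that `pooled` no longer depends on each participant's electrode count or coordinates, so a single shared decoder can be trained on it.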
Additional Links: PMID-41336656
Citation:
@article {pmid41336656,
year = {2025},
author = {Memar, MO and Ziaei, N and Nazari, B and Yousefi, A},
title = {RISE-iEEG: Robust to Inter-Subject Electrodes Implantation Variability iEEG Classifier.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11252788},
pmid = {41336656},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods/instrumentation ; Neural Networks, Computer ; Brain-Computer Interfaces ; *Electrodes, Implanted ; Algorithms ; Signal Processing, Computer-Assisted ; },
abstract = {Intracranial electroencephalography (iEEG) is increasingly used for clinical and brain-computer interface applications due to its high spatial and temporal resolution. However, inter-subject variability in electrode implantation poses a challenge for developing generalized neural decoders. To address this, we introduce RISE-iEEG (Robust to Inter-Subject Electrode Implantation Variability iEEG Classifier), a novel decoder model that is robust to inter-subject electrode implantation variability. RISE-iEEG employs a deep neural network preceded by a participant-specific projection network, which maps each participant's neural data onto a common low-dimensional space, compensating for implantation variability. In other words, the decoder can be applied across multiple participants' data without requiring electrode coordinates for each participant. Across multiple datasets, including the Music Reconstruction and AJILE12 datasets, RISE-iEEG surpasses advanced iEEG decoder models such as HTNet and EEGNet, with an F1 score about 7% higher and an average F1 score of 0.83, the highest among the evaluated methods. Furthermore, analysis of the projection network weights reveals that the Superior Temporal and Postcentral lobes are key encoding nodes for the Music Reconstruction and AJILE12 datasets, consistent with the primary physiological roles of these regions. The model improves decoding accuracy while maintaining interpretability and generalization.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Sub-Group Partition Strategy for RSVP-based Collaborative Brain-Computer Interfaces.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Collaborative brain-computer interfaces (cBCIs) have demonstrated significant improvements in single-trial electroencephalogram (EEG) classification performance in rapid serial visual presentation (RSVP) tasks. However, it remains unclear how to effectively organize multiple collaborators into sub-groups to optimize system performance. This study introduces a novel sub-group partition strategy for RSVP-based cBCI systems. We first developed intra-individual and inter-individual neural response reproducibility (IINRR) as a metric to estimate subgroup capability in RSVP tasks. Based on this metric, we propose an IINRR-based partition strategy to optimize sub-group composition. Additionally, we introduce a metric called collaborative information processing rate (CIPR) to evaluate overall system performance. Our experiments verified the effectiveness of the proposed strategy on a public RSVP-based cBCI dataset. The results showed that our strategy consistently outperformed random partitioning in both within-session and cross-session scenarios, achieving higher classification performance and system efficiency. These findings suggest the strategy's potential for optimizing group mode in practical RSVP-based cBCI applications.
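As a toy analogue of reproducibility-driven grouping, the sketch below scores each synthetic subject by split-half reproducibility of their average response and then pairs strong with weak responders. The data, the split-half score, and the greedy pairing are all simplified stand-ins for the paper's IINRR metric and partition strategy:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ERPs: each subject shares a template response plus trial noise.
template = np.sin(np.linspace(0, np.pi, 100))
subjects = [template + rng.normal(0, s, size=(40, 100)) for s in (0.2, 0.5, 1.0, 2.0)]

def split_half_reproducibility(trials):
    """Intra-individual reproducibility: correlate the two half-averages."""
    a = trials[::2].mean(axis=0)
    b = trials[1::2].mean(axis=0)
    return float(np.corrcoef(a, b)[0, 1])

scores = [split_half_reproducibility(t) for t in subjects]

# Greedy partition: pair the strongest with the weakest responder so that
# sub-group capability is balanced across groups.
order = np.argsort(scores)[::-1]
groups = [(int(order[i]), int(order[-1 - i])) for i in range(len(order) // 2)]
print([round(s, 2) for s in scores], groups)
```

With noisier subjects the split-half score drops, and the pairing keeps each sub-group's combined capability comparable, the property the proposed strategy optimizes with its IINRR metric.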
Additional Links: PMID-41336644
Citation:
@article {pmid41336644,
year = {2025},
author = {Si, Y and Wang, Z and Zhao, X and Xu, T and Zhou, T and Hu, H},
title = {Sub-Group Partition Strategy for RSVP-based Collaborative Brain-Computer Interfaces.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11252828},
pmid = {41336644},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Electroencephalography/methods ; Algorithms ; Signal Processing, Computer-Assisted ; Reproducibility of Results ; },
abstract = {Collaborative brain-computer interfaces (cBCIs) have demonstrated significant improvements in single-trial electroencephalogram (EEG) classification performance in rapid serial visual presentation (RSVP) tasks. However, it remains unclear how to effectively organize multiple collaborators into sub-groups to optimize system performance. This study introduces a novel sub-group partition strategy for RSVP-based cBCI systems. We first developed intra-individual and inter-individual neural response reproducibility (IINRR) as a metric to estimate subgroup capability in RSVP tasks. Based on this metric, we propose an IINRR-based partition strategy to optimize sub-group composition. Additionally, we introduce a metric called collaborative information processing rate (CIPR) to evaluate overall system performance. Our experiments verified the effectiveness of the proposed strategy on a public RSVP-based cBCI dataset. The results showed that our strategy consistently outperformed random partitioning in both within-session and cross-session scenarios, achieving higher classification performance and system efficiency. These findings suggest the strategy's potential for optimizing group mode in practical RSVP-based cBCI applications.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Medial Wall's Potential in Enhancing Finger Movement Decoding from Electrocorticography (ECoG): A Single-Subject Pilot Study.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
The next generation of motor brain-computer interfaces (BCIs) will likely benefit from integrating recordings from multiple motor-related brain regions. Among these is the medial wall, yet it remains relatively understudied for finger movement decoding. Using electrocorticographic (ECoG) recordings from a subject implanted over both medial and lateral cortical areas, we first assessed the medial wall's potential for multiclass classification (5 fingers + rest). We achieved a six-class accuracy of 0.46, significantly above chance, with rest trials classified most accurately, followed by thumb movement trials. Several frequency features contributed to decoding; Local Motor Potentials (LMP) were the most influential, showing distinctive activity already prior to movement onset, while power in the α (8-12 Hz) band helped distinguish rest trials from finger movement trials. Next, we explored whether combining the best medial wall channel with lateral cortical channels could improve decoding performance. We found a significant accuracy improvement for most lateral channels (from an average of 0.36 to 0.42), except for the channel closest to the finger primary motor region, whose accuracy was already high (0.77). These findings highlight the medial wall's potential for motor decoding and its value as a target region for future motor BCIs, especially for individuals with impaired hand motor areas.
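The LMP feature mentioned above is commonly obtained by heavily low-passing the raw potential. The sketch below uses a moving average on synthetic data, with an assumed sampling rate and window length; it illustrates the feature, not the study's exact extraction:

```python
import numpy as np

rng = np.random.default_rng(3)

FS = 1000                        # assumed sampling rate, Hz
raw = rng.normal(size=2 * FS)    # 2 s of one synthetic ECoG channel

def lmp(signal, fs, win_ms=100):
    """Local Motor Potential: moving-average (low-pass) of the raw potential."""
    w = int(fs * win_ms / 1000)
    kernel = np.ones(w) / w
    return np.convolve(signal, kernel, mode="same")

feature = lmp(raw, FS)
# One LMP value per 100 ms analysis window, as might be fed to a decoder.
windows = feature.reshape(-1, FS // 10).mean(axis=1)
print(windows.shape)
```

Band-power features (e.g., the α band used here) would be computed from the same windows after band-pass filtering instead of smoothing.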
Additional Links: PMID-41336643
Citation:
@article {pmid41336643,
year = {2025},
author = {Merino, EC and Sun, Q and Dauwe, I and Carrette, E and Meurs, A and Van Roost, D and Boon, P and Van Hulle, MM},
title = {Medial Wall's Potential in Enhancing Finger Movement Decoding from Electrocorticography (ECoG): A Single-Subject Pilot Study.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11252768},
pmid = {41336643},
issn = {2694-0604},
mesh = {Humans ; *Electrocorticography/methods ; *Fingers/physiology ; Pilot Projects ; Movement/physiology ; *Brain-Computer Interfaces ; *Motor Cortex/physiology ; Male ; Adult ; },
abstract = {The next generation of motor brain-computer interfaces (BCIs) will likely benefit from integrating recordings from multiple motor-related brain regions. Among these is the medial wall, yet it remains relatively understudied for finger movement decoding. Using electrocorticographic (ECoG) recordings from a subject implanted over both medial and lateral cortical areas, we first assessed the medial wall's potential for multiclass classification (5 fingers + rest). We achieved a six-class accuracy of 0.46, significantly above chance, with rest trials classified most accurately, followed by thumb movement trials. Several frequency features contributed to decoding; Local Motor Potentials (LMP) were the most influential, showing distinctive activity already prior to movement onset, while power in the α (8-12 Hz) band helped distinguish rest trials from finger movement trials. Next, we explored whether combining the best medial wall channel with lateral cortical channels could improve decoding performance. We found a significant accuracy improvement for most lateral channels (from an average of 0.36 to 0.42), except for the channel closest to the finger primary motor region, whose accuracy was already high (0.77). These findings highlight the medial wall's potential for motor decoding and its value as a target region for future motor BCIs, especially for individuals with impaired hand motor areas.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Classification of Functional Near-Infrared Spectroscopy Based on Gramian Angular Difference Field and a Temporal-Spatial Feature Fusion Network.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Functional near-infrared spectroscopy (fNIRS) is a non-invasive functional neuroimaging technique widely employed in brain-computer interface (BCI) research and diverse clinical applications. The key challenge in fNIRS applications lies in extracting nonlinear structures and complex patterns from one-dimensional time series data. Gramian angular difference field (GADF) transforms one-dimensional time series into two-dimensional images, providing effective feature representation for subsequent signal classification. However, most studies have not explored the combined effects of image features and time series features. In this paper, we propose a deep learning model, VisiTempNet, which integrates both time series and GADF image features in a temporal-spatial fusion approach. The model first performs convolution on time series data based on delayed hemodynamic responses to highlight key features. It then separates the feature extraction process into two parallel modules, and normalizes and fuses these features with learnable weights, assigning greater importance to the most relevant information for classification. Experimental results show that our model achieved an accuracy of 76.65±2.43% on the open access fNIRS2MW dataset, outperforming all baseline models. This validates the effectiveness of combining image and time series features and demonstrates the superiority of the proposed model.
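The GADF encoding itself is compact enough to state directly: rescale the series to [-1, 1], map each sample to an angle via arccos, and take pairwise sine differences. This is a generic sketch of the transform, not the paper's implementation:

```python
import math

def gadf(series):
    """Gramian angular difference field of a 1-D series.

    The series is rescaled to [-1, 1], encoded as angles phi = arccos(x),
    and the field is G[i][j] = sin(phi_i - phi_j)."""
    lo, hi = min(series), max(series)
    x = [2 * (v - lo) / (hi - lo) - 1 for v in series]
    phi = [math.acos(v) for v in x]
    return [[math.sin(pi - pj) for pj in phi] for pi in phi]

g = gadf([0.0, 0.4, 1.0, 0.6, 0.2])
print(len(g), round(g[1][2], 3))
```

The resulting field is antisymmetric with a zero diagonal, and each channel's image can feed a 2-D convolutional branch alongside the raw time series, as in the temporal-spatial fusion described above.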
Additional Links: PMID-41336630
Citation:
@article {pmid41336630,
year = {2025},
author = {Wen, Y and An, Y and Chu, M and Chen, S and Lu, X and Guo, H and Yu, J},
title = {Classification of Functional Near-Infrared Spectroscopy Based on Gramian Angular Difference Field and a Temporal-Spatial Feature Fusion Network.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11254538},
pmid = {41336630},
issn = {2694-0604},
mesh = {Spectroscopy, Near-Infrared/methods ; Humans ; Algorithms ; Deep Learning ; Brain-Computer Interfaces ; Signal Processing, Computer-Assisted ; Neural Networks, Computer ; },
abstract = {Functional near-infrared spectroscopy (fNIRS) is a non-invasive functional neuroimaging technique widely employed in brain-computer interface (BCI) research and diverse clinical applications. The key challenge in fNIRS applications lies in extracting nonlinear structures and complex patterns from one-dimensional time series data. Gramian angular difference field (GADF) transforms one-dimensional time series into two-dimensional images, providing effective feature representation for subsequent signal classification. However, most studies have not explored the combined effects of image features and time series features. In this paper, we propose a deep learning model, VisiTempNet, which integrates both time series and GADF image features in a temporal-spatial fusion approach. The model first performs convolution on time series data based on delayed hemodynamic responses to highlight key features. It then separates the feature extraction process into two parallel modules, and normalizes and fuses these features with learnable weights, assigning greater importance to the most relevant information for classification. Experimental results show that our model achieved an accuracy of 76.65±2.43% on the open access fNIRS2MW dataset, outperforming all baseline models. This validates the effectiveness of combining image and time series features and demonstrates the superiority of the proposed model.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Seasickness Alleviation based on a Mindfulness Brain-Computer Interface.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Seasickness is a common condition that negatively affects both the experience of passengers and the operating performance of maritime personnel. Techniques aimed at redirecting attention have been proposed to alleviate motion sickness symptoms; however, their effectiveness has not yet been rigorously verified, especially in maritime environments, which present unique challenges due to the prolonged and severe motion conditions. This research introduces a mindfulness brain-computer interface (BCI) specifically designed to redirect attention and alleviate seasickness. The system employs a single-channel headband to record prefrontal electroencephalography (EEG) signals, which are wirelessly transmitted to computing devices for real-time mindfulness assessments. Participants receive feedback in the form of mindfulness scores and audiovisual cues, facilitating a redirection of attention from physical discomfort. In maritime experiments with 43 participants across three sessions, 81.39% reported the BCI's effectiveness, and a substantial reduction in seasickness severity was observed using the Misery Scale (MISC). Together, our work presents the first wearable and nonpharmacological solution for alleviating seasickness, and opens up a brand-new application domain for BCIs.
Additional Links: PMID-41336626
Citation:
@article {pmid41336626,
year = {2025},
author = {Bao, X and Xu, K and Zhu, J and Huang, H and Li, K and Huang, Q and Li, Y},
title = {Seasickness Alleviation based on a Mindfulness Brain-Computer Interface.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11254567},
pmid = {41336626},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; Electroencephalography ; Male ; *Mindfulness ; Adult ; *Motion Sickness/therapy/prevention & control/physiopathology ; Female ; Young Adult ; Attention ; },
abstract = {Seasickness is a common condition that negatively affects both the experience of passengers and the operating performance of maritime personnel. Techniques aimed at redirecting attention have been proposed to alleviate motion sickness symptoms; however, their effectiveness has not yet been rigorously verified, especially in maritime environments, which present unique challenges due to the prolonged and severe motion conditions. This research introduces a mindfulness brain-computer interface (BCI) specifically designed to redirect attention and alleviate seasickness. The system employs a single-channel headband to record prefrontal electroencephalography (EEG) signals, which are wirelessly transmitted to computing devices for real-time mindfulness assessments. Participants receive feedback in the form of mindfulness scores and audiovisual cues, facilitating a redirection of attention from physical discomfort. In maritime experiments with 43 participants across three sessions, 81.39% reported the BCI's effectiveness, and a substantial reduction in seasickness severity was observed using the Misery Scale (MISC). Together, our work presents the first wearable and nonpharmacological solution for alleviating seasickness, and opens up a brand-new application domain for BCIs.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Gaussian Process-Based Surrogate Models for Optimizing Electrode Configurations in HD-tDCS.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
High-definition transcranial direct current stimulation (HD-tDCS) is a promising noninvasive neurostimulation technique used in therapeutic applications and brain-machine interfaces. It delivers direct current via multiple scalp electrodes, generating targeted electrical fields to modulate specific brain areas. In the context of HD-tDCS, optimizing electrode placements is challenging due to the complexity of brain anatomy and the vast number of possible configurations. While simulation models enable model-based optimization, continuous electrode positioning is generally computationally prohibitive. We propose a Gaussian Process (GP)-based framework for optimizing HD-tDCS, allowing continuous prediction of electric field distributions. Unlike traditional leadfield-based methods, which restrict electrode placement, our approach expands the search space for greater precision. We employ a Sparse Gaussian Process (SGP) approximation, optimized using Block-Coordinate Descent and Subset of Data techniques, to efficiently handle large datasets. Results demonstrate that the SGP-based model significantly enhanced focality for superficial and mid-brain regions, achieving performance comparable to leadfield-based methods for deep brain targets. Overall, this framework offers enhanced stimulation precision and flexibility, supporting the advancement of tDCS in research and clinical contexts.
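A GP surrogate's appeal here is that it predicts continuously between simulated configurations. The 1-D sketch below fits a GP posterior mean to samples of a known function standing in for a simulated field quantity; the kernel, length scale, noise level, and data are all assumptions:

```python
import numpy as np

def rbf(a, b, length=0.2):
    """Squared-exponential kernel between two 1-D coordinate arrays."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2 * length ** 2))

# Training data: hypothetical field strength sampled at a few electrode positions.
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train)

noise = 1e-6                                  # jitter for numerical stability
K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
alpha = np.linalg.solve(K, y_train)           # precomputed GP weights

def predict(x_query):
    """GP posterior mean: continuous prediction between simulated positions."""
    return rbf(x_query, x_train) @ alpha

print(float(predict(np.array([0.25]))[0]))
```

A sparse GP replaces `x_train` with a smaller set of inducing points so that the linear solve stays tractable on large simulation datasets, which is the role of the SGP approximation in the abstract.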
Additional Links: PMID-41336622
Citation:
@article {pmid41336622,
year = {2025},
author = {Ahmadi, K and Dong, L and Kok, RL and Findeisen, R},
title = {Gaussian Process-Based Surrogate Models for Optimizing Electrode Configurations in HD-tDCS.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254512},
pmid = {41336622},
issn = {2694-0604},
mesh = {*Transcranial Direct Current Stimulation/instrumentation/methods ; Humans ; Electrodes ; Normal Distribution ; Brain/physiology ; Computer Simulation ; Algorithms ; },
abstract = {High-definition transcranial direct current stimulation (HD-tDCS) is a promising noninvasive neurostimulation technique used in therapeutic applications and brain-machine interfaces. It delivers direct current via multiple scalp electrodes, generating targeted electrical fields to modulate specific brain areas. In the context of HD-tDCS, optimizing electrode placements is challenging due to the complexity of brain anatomy and the vast number of possible configurations. While simulation models enable model-based optimization, continuous electrode positioning is generally computationally prohibitive. We propose a Gaussian Process (GP)-based framework for optimizing HD-tDCS, allowing continuous prediction of electric field distributions. Unlike traditional leadfield-based methods, which restrict electrode placement, our approach expands the search space for greater precision. We employ a Sparse Gaussian Process (SGP) approximation, optimized using Block-Coordinate Descent and Subset of Data techniques, to efficiently handle large datasets. Results demonstrate that the SGP-based model significantly enhanced focality for superficial and mid-brain regions, achieving performance comparable to leadfield-based methods for deep brain targets. Overall, this framework offers enhanced stimulation precision and flexibility, supporting the advancement of tDCS in research and clinical contexts.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Impact of latency jitter correction on offline P300-based classification: a preliminary study for BCI applications in MCS patients.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Disorders of Consciousness (DoC) are clinical conditions characterized by different levels of arousal and awareness, including coma, Unresponsive Wakefulness Syndrome, and Minimally Conscious State (MCS). A Brain-Computer Interface (BCI) employs brain signals to establish a non-muscular output channel, representing a key frontier in the clinical care of individuals in MCS, with high potential to enhance communication and quality of life. P300-based BCIs, which use the P300 ERP as a control signal, are the most investigated means of emulating communication in MCS. However, reliable control of these BCIs by MCS patients remains an open question. One major challenge could be the across-trials variability of P300 characteristics, possibly related to attentional fluctuations in this population. The trial-by-trial instability of the P300 peak latency, known as latency jitter, negatively impacts classification performance; one approach to mitigating this issue involves template matching algorithms (e.g., Adaptive Wavelet Filtering, AWF) that detect and realign the P300 latency at the single-trial level. This study investigated offline classification performance using Stepwise Linear Discriminant Analysis (SWLDA) models trained with progressively larger training sets to discriminate target from non-target stimuli during an active auditory oddball paradigm. Performance on raw and jitter-corrected data, collected from a control group and a group of patients diagnosed as MCS, was compared. Results highlighted the key role of latency jitter correction in enhancing performance and classification speed. Clinical Relevance: The findings suggest that jitter correction could improve the real-world applicability of P300-BCI systems for individuals with DoC.
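Latency-jitter correction by template matching reduces, at its core, to estimating each trial's lag against a template and undoing it. The sketch below uses plain cross-correlation on synthetic jittered peaks as a simplified stand-in for AWF; the peak shape, jitter range, and noise level are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

N = 200
template = np.exp(-0.5 * ((np.arange(N) - 100) / 8.0) ** 2)   # idealized P300 peak

# Synthetic single trials: the same peak, jittered in latency, plus noise.
shifts = rng.integers(-20, 21, size=30)
trials = np.array([np.roll(template, s) + rng.normal(0, 0.05, N) for s in shifts])

def realign(trial, template, max_lag=25):
    """Slide the trial against the template and undo the best-matching lag."""
    lags = range(-max_lag, max_lag + 1)
    best = max(lags, key=lambda l: np.dot(np.roll(trial, -l), template))
    return np.roll(trial, -best), best

aligned = np.array([realign(t, template)[0] for t in trials])
print(int(np.argmax(template)), int(np.argmax(aligned.mean(axis=0))))
```

After realignment the across-trial average regains a sharp peak at the template latency, which is what improves the single-trial features fed to a classifier such as SWLDA.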
Additional Links: PMID-41336584
Citation:
@article {pmid41336584,
year = {2025},
author = {Caracci, V and Riccio, A and D'Ippolito, M and Galiotta, V and Quattrociocchi, I and Formisano, R and Cincotti, F and Toppi, J and Mattia, D},
title = {Impact of latency jitter correction on offline P300-based classification: a preliminary study for BCI applications in MCS patients.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11253369},
pmid = {41336584},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; *Event-Related Potentials, P300 ; Male ; Female ; Adult ; Electroencephalography/methods ; *Persistent Vegetative State/physiopathology ; Algorithms ; Middle Aged ; },
abstract = {Disorders of Consciousness (DoC) are clinical conditions characterized by different levels of arousal and awareness, including coma, Unresponsive Wakefulness Syndrome and Minimally Conscious State (MCS). A Brain-Computer Interface (BCI) employs brain signals to establish a non-muscular outward channel, representing a key frontier in the clinical care of individuals in MCS, with high potential to enhance communication and quality of life. The P300-based BCIs, which use the P300 ERP as a control signal, are the most investigated to emulate communication in MCS. However, a reliable control by MCS patients of these BCIs still remains matter of question. One major challenge could be the across trials variability of P300 characteristics, possibly related to attentional fluctuations in this population. The trial-by-trial instability of the P300 peak latency, known as latency jitter, negatively impacts classification performance, and an approach to mitigating this issue involves template matching algorithms (e.g. the Adaptive Wavelet Filtering, AWF) which detect and realign the P300 latency at the single-trial level. This study investigated the offline classification performance using Stepwise Linear Discriminant Analysis (SWLDA) models trained with progressively larger training sets, to discriminate target from non-target stimuli during an active auditory oddball paradigm. Performance from raw and jitter-corrected data, collected from a control group and a group of patients diagnosed as MCS, were compared. Results highlighted the key role of latency jitter correction in the enhancement of performance and classification speed.Clinical Relevance- The findings suggest that jitter correction could improve real-world applicability of P300-BCI systems for individuals with DoC.},
}
MeSH Terms:
Humans
*Brain-Computer Interfaces
*Event-Related Potentials, P300
Male
Female
Adult
Electroencephalography/methods
*Persistent Vegetative State/physiopathology
Algorithms
Middle Aged
RevDate: 2025-12-03
CmpDate: 2025-12-03
Neural Strategies for Upper Limb Movements: Motor Unit Control during Dynamic Contractions at Increasing Speeds.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Understanding motor unit (MU) behavior in dynamic movements remains a critical gap in neuro-rehabilitation, prosthetics, and human-machine interfaces (HMI). While machine learning applied to surface electromyography (sEMG) enables movement classification, it provides little insight into neural control, limiting the development of more precise and adaptive assistive technologies. Recent studies have demonstrated that MU activity can be accurately extracted using high-density sEMG decomposition under isometric conditions. However, extracting and tracking MUs during dynamic tasks remains challenging due to signal non-stationarity caused by changes in muscle length. This study investigates MU control in the forearm flexor muscles across different contraction velocities (5°/s, 10°/s, 20°/s) and force levels (15% and 25% of the maximum voluntary contraction [MVC]). We examine whether increases in movement velocity are achieved primarily through MU recruitment or by adjusting the discharge rates of already-recruited units. Our findings show that MU control in the upper limb follows a velocity-dependent modulation pattern (p < 0.05), favoring discharge-rate adjustments over recruitment of additional MUs at higher speeds. We also validate the feasibility of MU tracking in dynamic conditions, opening new opportunities for neurotechnology applications such as HMI.
Additional Links: PMID-41336583
Citation:
@article {pmid41336583,
year = {2025},
author = {Orlandi, M and Rapa, PM and Baracat, F and Benini, L and Donati, E and Benatti, S},
title = {Neural Strategies for Upper Limb Movements: Motor Unit Control during Dynamic Contractions at Increasing Speeds.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253409},
pmid = {41336583},
issn = {2694-0604},
mesh = {Humans ; Electromyography ; *Upper Extremity/physiology ; Male ; Movement/physiology ; *Muscle Contraction/physiology ; Adult ; *Motor Neurons/physiology ; Muscle, Skeletal/physiology ; Female ; Young Adult ; },
abstract = {Understanding motor unit (MU) behavior in dynamic movements remains a critical gap in neuro-rehabilitation, prosthetics, and human-machine interfaces (HMI). While machine learning applied to surface electromyography (sEMG) enables movement classification, it provides little insight into neural control, limiting the development of more precise and adaptive assistive technologies. Recent studies have demonstrated that MU activity can be accurately extracted using high-density sEMG decomposition under isometric conditions. However, extracting and tracking MUs during dynamic tasks remains challenging due to signal non-stationarity caused by changes in muscle length. This study investigates MU control in the forearm flexor muscles across different contraction velocities (5°/s, 10°/s, 20°/s) and force levels (15% and 25% of the maximum voluntary contraction [MVC]). We investigate whether increases in movement velocity are primarily achieved through MU recruitment strategies or by adjusting the discharge rates of already-recruited units. Our findings show that MU control in the upper limb follows a velocity-dependent modulation pattern (p-value < 0.05), favoring discharge rate adjustments over additional MUs recruitment at higher speeds. We also validate the feasibility of MU tracking in dynamic conditions, opening new opportunities for neurotechnology applications such as HMI.},
}
MeSH Terms:
Humans
Electromyography
*Upper Extremity/physiology
Male
Movement/physiology
*Muscle Contraction/physiology
Adult
*Motor Neurons/physiology
Muscle, Skeletal/physiology
Female
Young Adult
RevDate: 2025-12-03
CmpDate: 2025-12-03
SSL-SE-EEG: A Framework for Robust Learning from Unlabeled EEG Data with Self-Supervised Learning and Squeeze-Excitation Networks.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Electroencephalography (EEG) plays a crucial role in brain-computer interfaces (BCIs) and neurological diagnostics, but its real-world deployment faces challenges due to noise artifacts, missing data, and high annotation costs. We introduce SSL-SE-EEG, a framework that integrates Self-Supervised Learning (SSL) with Squeeze-and-Excitation Networks (SE-Nets) to enhance feature extraction, improve noise robustness, and reduce reliance on labeled data. Unlike conventional EEG processing techniques, SSL-SE-EEG transforms EEG signals into structured 2D image representations, suitable for deep learning. Experimental validation on MindBigData, TUH-AB, SEED-IV and BCI-IV datasets demonstrates state-of-the-art accuracy (91% in MindBigData, 85% in TUH-AB), making it well-suited for real-time BCI applications. By enabling low-power, scalable EEG processing, SSL-SE-EEG presents a promising solution for biomedical signal analysis, neural engineering, and next-generation BCIs. The code is available at https://github.com/roycmeghna/SSL_SE_EEG_EMBC25.
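The squeeze-and-excitation recalibration that SSL-SE-EEG builds on can be shown in a few lines of numpy. This is a minimal sketch of the generic SE block (global pooling, a small gated bottleneck, per-channel rescaling) with illustrative weight shapes, not the paper's actual network.

```python
import numpy as np

def squeeze_excite(feature_maps, w1, w2):
    """Squeeze-and-Excitation recalibration on (C, H, W) feature maps.
    A generic numpy sketch of the SE block; weight shapes and the
    bottleneck ratio are illustrative assumptions."""
    squeeze = feature_maps.mean(axis=(1, 2))         # (C,) global average pool
    hidden = np.maximum(0, w1 @ squeeze)             # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid gate in (0, 1)
    return feature_maps * scale[:, None, None]       # per-channel reweighting
```

Because the gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative EEG-derived feature maps and suppress noisy ones.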
Additional Links: PMID-41336567
Citation:
@article {pmid41336567,
year = {2025},
author = {Roy Chowdhury, M and Ding, Y and Sen, S},
title = {SSL-SE-EEG: A Framework for Robust Learning from Unlabeled EEG Data with Self-Supervised Learning and Squeeze-Excitation Networks.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253365},
pmid = {41336567},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; Humans ; Brain-Computer Interfaces ; *Supervised Machine Learning ; *Signal Processing, Computer-Assisted ; Algorithms ; Deep Learning ; Neural Networks, Computer ; },
abstract = {Electroencephalography (EEG) plays a crucial role in brain-computer interfaces (BCIs) and neurological diagnostics, but its real-world deployment faces challenges due to noise artifacts, missing data, and high annotation costs. We introduce SSL-SE-EEG, a framework that integrates Self-Supervised Learning (SSL) with Squeeze-and-Excitation Networks (SE-Nets) to enhance feature extraction, improve noise robustness, and reduce reliance on labeled data. Unlike conventional EEG processing techniques, SSL-SE-EEG transforms EEG signals into structured 2D image representations, suitable for deep learning. Experimental validation on MindBigData, TUH-AB, SEED-IV and BCI-IV datasets demonstrates state-of-the-art accuracy (91% in MindBigData, 85% in TUH-AB), making it well-suited for real-time BCI applications. By enabling low-power, scalable EEG processing, SSL-SE-EEG presents a promising solution for biomedical signal analysis, neural engineering, and next-generation BCIs. The code is available at https://github.com/roycmeghna/SSL_SE_EEG_EMBC25.},
}
MeSH Terms:
*Electroencephalography/methods
Humans
Brain-Computer Interfaces
*Supervised Machine Learning
*Signal Processing, Computer-Assisted
Algorithms
Deep Learning
Neural Networks, Computer
RevDate: 2025-12-03
CmpDate: 2025-12-03
Automatic Blink-Based Bad EEG channels Detection for BCI Applications.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
In Brain-Computer Interface (BCI) applications, noise presents a persistent challenge, often compromising the quality of EEG signals essential for accurate data interpretation. This paper focuses on optimizing the signal-to-noise ratio (SNR) to improve BCI performance, with channel selection being a key method for achieving this enhancement. The Eye-Bci multimodal dataset is used to address the issue of detecting and eliminating faulty EEG channels caused by non-biological artifacts, such as malfunctioning electrodes and power line interference. The core of this research is the automatic detection of problematic channels through the Adaptive Blink-Correction and DeDrifting (ABCD) algorithm. This method utilizes blink propagation patterns to identify channels affected by artifacts or malfunctions. Additionally, segmented SNR topographies and source localization plots are employed to illustrate the impact of channel removal by comparing Left and Right hand grasp Motor Imagery (MI). Classification accuracy further supports the value of the ABCD algorithm, reaching an average classification accuracy of 93.81% [74.81%; 98.76%] (confidence interval at 95% confidence level) across 31 subjects (63 sessions), significantly surpassing traditional methods such as Independent Component Analysis (ICA) (79.29% [57.41%; 92.89%]) and Artifact Subspace Reconstruction (ASR) (84.05% [62.88%; 95.31%]). These results underscore the critical role of channel selection and the potential of using blink patterns for detecting bad EEG channels, offering valuable insights for improving real-time or offline BCI systems by reducing noise and enhancing signal quality.
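As a rough illustration of blink-based channel screening, one can flag channels whose coupling to a blink reference is anomalous relative to the group. The sketch below is a loose analogue of the idea, not the ABCD algorithm itself; the reference signal and z-score threshold are assumptions.

```python
import numpy as np

def flag_bad_channels(eeg, blink_ref, z_thresh=3.0):
    """Return indices of channels whose correlation with a blink
    reference deviates strongly from the rest of the montage.
    Illustrative only; the paper's ABCD algorithm uses blink
    propagation patterns, not this simple z-score rule."""
    corr = np.array([np.corrcoef(ch, blink_ref)[0, 1] for ch in eeg])
    z = (corr - corr.mean()) / corr.std()
    return np.where(np.abs(z) > z_thresh)[0]
```

A dead electrode or one dominated by line noise loses the blink coupling its neighbors share, so it stands out as an outlier in this correlation profile.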
Additional Links: PMID-41336566
Citation:
@article {pmid41336566,
year = {2025},
author = {Guttmann-Flury, E and Wei, Y and Zhao, S},
title = {Automatic Blink-Based Bad EEG channels Detection for BCI Applications.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253420},
pmid = {41336566},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Electroencephalography/methods ; *Blinking/physiology ; Algorithms ; Signal Processing, Computer-Assisted ; Signal-To-Noise Ratio ; Artifacts ; Adult ; Male ; },
abstract = {In Brain-Computer Interface (BCI) applications, noise presents a persistent challenge, often compromising the quality of EEG signals essential for accurate data interpretation. This paper focuses on optimizing the signal-to-noise ratio (SNR) to improve BCI performance, with channel selection being a key method for achieving this enhancement. The Eye-Bci multimodal dataset is used to address the issue of detecting and eliminating faulty EEG channels caused by non-biological artifacts, such as malfunctioning electrodes and power line interference. The core of this research is the automatic detection of problematic channels through the Adaptive Blink-Correction and DeDrifting (ABCD) algorithm. This method utilizes blink propagation patterns to identify channels affected by artifacts or malfunctions. Additionally, segmented SNR topographies and source localization plots are employed to illustrate the impact of channel removal by comparing Left and Right hand grasp Motor Imagery (MI). Classification accuracy further supports the value of the ABCD algorithm, reaching an average classification accuracy of 93.81% [74.81%; 98.76%] (confidence interval at 95% confidence level) across 31 subjects (63 sessions), significantly surpassing traditional methods such as Independent Component Analysis (ICA) (79.29% [57.41%; 92.89%]) and Artifact Subspace Reconstruction (ASR) (84.05% [62.88%; 95.31%]). These results underscore the critical role of channel selection and the potential of using blink patterns for detecting bad EEG channels, offering valuable insights for improving real-time or offline BCI systems by reducing noise and enhancing signal quality.},
}
MeSH Terms:
*Brain-Computer Interfaces
Humans
*Electroencephalography/methods
*Blinking/physiology
Algorithms
Signal Processing, Computer-Assisted
Signal-To-Noise Ratio
Artifacts
Adult
Male
RevDate: 2025-12-03
CmpDate: 2025-12-03
High-Speed Neural Signal Inferencing for Handwritten Character Recognition on a Portable Hardware Device.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Brain-computer interfaces (BCIs) hold immense potential for assisting individuals with severe motor and communication disabilities by enabling neural signal-based activity recognition, such as handwriting. This study presents the first implementation of neural signal inference on a portable hardware device, enabling efficient handwritten character recognition on resource-constrained platforms. Neural signals from a publicly available dataset are processed into neural spike-event data for the classification of 31 handwritten characters on an NVIDIA Jetson TX2. To enhance model generalization and mitigate overfitting, random noise injection and time-shifting-based data augmentation techniques are applied. The proposed approach uses EfficientNetB0 with neural spikes and achieves 99.17% test accuracy, significantly outperforming previous model results. During high-speed inference, EfficientNetB0 achieved a Word Error Rate (WER) of 0.96% and a Character Error Rate (CER) of 0.2%, with a character decoding latency of 37.5 milliseconds on the Jetson TX2 while processing 100 sentences used in daily life. These results validate the feasibility of accurate high-speed neural decoding on portable edge hardware, highlighting the impact of lightweight machine learning models in BCI applications.
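The two augmentation steps named in the abstract, random noise injection and time-shifting, are straightforward to sketch. Parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

def augment(trial, rng, max_shift=10, noise_sd=0.05):
    """Apply a random circular time-shift plus Gaussian noise to one
    neural-signal trial; shift range and noise level are illustrative."""
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(trial, shift) + rng.normal(0.0, noise_sd, trial.shape)
```

Applied to each training trial, this cheaply multiplies the effective dataset size and discourages the classifier from memorizing exact spike timings.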
Additional Links: PMID-41336553
Citation:
@article {pmid41336553,
year = {2025},
author = {Sen, O and Khalifa, A and Chatterjee, B},
title = {High-Speed Neural Signal Inferencing for Handwritten Character Recognition on a Portable Hardware Device.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11253375},
pmid = {41336553},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; *Handwriting ; *Signal Processing, Computer-Assisted/instrumentation ; Algorithms ; },
abstract = {Brain-computer interfaces (BCIs) hold immense potential in assisting individuals with severe motor and communication disabilities by enabling neural signal-based activity recognition, such as handwriting. This study presents the very first implementation of neural signal inference on a portable hardware device, facilitating efficient handwritten character recognition on resource-constrained platforms. Neural signals from a publicly available dataset are processed into neural spike-event data, facilitating the classification of 31 handwritten characters on an NVIDIA Jetson TX2. To enhance model generalization and mitigate overfitting, random noise injection and time-shifting-based data augmentation techniques are applied. The proposed approach utilizes EfficientNetB0 with neural spikes, and achieves 99.17% test accuracy, significantly outperforming previous model results. During high-speed inference, EfficientNetB0 achieved a Word Error Rate (WER) of 0.96% and a Character Error Rate (CER) of 0.2%, with a character decoding latency of 37.5 milliseconds on the Jetson TX2 while processing 100 sentences used in daily life. These results validate the feasibility of accurate high-speed neural decoding on portable edge hardware, highlighting the impact of lightweight machine learning models in BCI applications.},
}
MeSH Terms:
Humans
*Brain-Computer Interfaces
*Handwriting
*Signal Processing, Computer-Assisted/instrumentation
Algorithms
RevDate: 2025-12-03
CmpDate: 2025-12-03
EEG features and suitable decoding algorithm of RSVP-based brain-computer interface in continuous scenes.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Brain-computer interfaces (BCIs) based on rapid serial visual presentation (RSVP) hold significant value for achieving robust target detection through the integration of human and machine. RSVP in continuous scenes presents video material and is thus much closer to real-world applications, greatly exceeding traditional discrete-scene RSVP in practicality. However, the similarities and differences in electroencephalography (EEG) features between continuous and discrete scenes have not yet been clarified, and there is a lack of research on decoding algorithms better suited to continuous scenes, which seriously hinders the development of continuous-scene target detection. To address these problems, this study designed a comparative experiment based on the RSVP paradigm in continuous and discrete scenes. Event-related potentials (ERP), event-related spectral perturbation (ERSP), and inter-trial coherence (ITC) were used to investigate the EEG features induced by the two scene types. This study then used sliding hierarchical discriminant component analysis (sHDCA), shrinkage discriminative canonical pattern matching (SKDCPM), and an attention-based temporal convolutional network (ATCNet) to classify target versus non-target trials. Continuous scenes exhibited fewer induced ERP components, a shorter P300 latency, and reduced neural oscillation in the alpha and beta1 bands over the occipital region within 0-0.2 s. For classification, traditional machine learning algorithms obtained significantly lower accuracy in continuous scenes, while ATCNet achieved the best accuracy, at the same level in both scenes, indicating its suitability for decoding continuous-scene RSVP. These results contribute to the development of more practical RSVP-BCI target detection systems.
Additional Links: PMID-41336530
Citation:
@article {pmid41336530,
year = {2025},
author = {Li, S and Yang, M and Sun, J and Sun, J and Yu, G and Lin, L and Meng, J and Xu, M},
title = {EEG features and suitable decoding algorithm of RSVP-based brain-computer interface in continuous scenes.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11251802},
pmid = {41336530},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Electroencephalography/methods ; *Algorithms ; Male ; Signal Processing, Computer-Assisted ; Adult ; Female ; Evoked Potentials ; Young Adult ; },
abstract = {Brain-computer interface (BCI) based on rapid serial visual presentation (RSVP) hold significant value for achieving robust target detection through the integration of human and machine. RSVP in continuous scenes presents video materials and is thus much closer to real-world applications, which greatly exceeds traditional discrete-scene RSVP in terms of practicality. However, the similarities and differences in electroencephalography (EEG) features between continuous and discrete scenes have not yet been clearly clarified. And there is a lack of research on decoding algorithms that are more suitable for continuous scenes, which seriously hinders the development of continuous-scene target detection. To solve these problems, this study designed a comparative experiment based on RSVP paradigm in continuous and discrete scenes. Event-related potential (ERP), event-related spectral perturbation (ERSP), and inter-trial coherence (ITC) were used to investigate EEG features induced by distinct scenes. Further, this study used sliding hierarchical discriminant component analysis (sHDCA), shrinkage discriminative canonical pattern matching (SKDCPM) and attention-based temporal convolutional network (ATCNet) to implement target/non-target trial classification. Consequently, continuous scenes exhibited fewer induced ERP components, a shorter latency of P300, and reduced neural oscillation activities in alpha and beta1 bands over the occipital region within 0~0.2s. As for classification, traditional machine learning algorithms obtained significantly lower accuracy in continuous scenes. While ATCNet achieved the best and same level of accuracy in both scenes, indicating its suitability for decoding continuous-scene RSVP. The results contributed to develop more practical RSVP-BCI target detection systems.},
}
MeSH Terms:
*Brain-Computer Interfaces
Humans
*Electroencephalography/methods
*Algorithms
Male
Signal Processing, Computer-Assisted
Adult
Female
Evoked Potentials
Young Adult
RevDate: 2025-12-03
CmpDate: 2025-12-03
Extracting Preserved Neural Latent Dynamics Across Tasks using Convolutional Transformer-based Variational Autoendecoder.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Understanding how neural systems drive behavior is a fundamental goal in neuroscience. Numerous studies have demonstrated that the activity of large neural populations is often governed by low-dimensional neural dynamics. While much of the current research has focused on extracting informative and interpretable latent dynamics from individual motor tasks, it remains unclear whether these dynamics are preserved across different motor tasks. This question is particularly critical, as prior experience with a related task can facilitate faster learning in a new task. In this paper, we propose a Convolutional Transformer-based Variational Autoencoder (Conformer-VAE) to extract preserved neural latent dynamics across tasks by leveraging the rich spatiotemporal patterns in neural activity. We validate our approach using neural recordings from a rat, which first performed a one-lever pressing task (old task) and subsequently a two-lever discrimination task (new task). By projecting the inferred latent dynamics from both tasks onto a common 2D PCA plane, our results demonstrate that Conformer-VAE effectively captures preserved neural dynamics across tasks, outperforming baseline methods. Moreover, these preserved dynamics enable faster decoder training for the new task by transferring the neural-to-movement mapping learned from the old task. This capability facilitates seamless real-time task switching, offering promising applications for brain-machine interface systems. Clinical Relevance: This work facilitates faster adaptation in brain-machine interfaces by preserving neural dynamics across tasks, offering potential benefits for neuroprosthetics and motor rehabilitation in patients with motor impairments.
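Projecting latents from both tasks onto a common 2D PCA plane, as the authors do for comparison, can be sketched in plain numpy. This covers only the projection step, not the Conformer-VAE itself; function and variable names are illustrative.

```python
import numpy as np

def shared_pca_plane(latents_a, latents_b):
    """Fit PCA on the pooled latent states from two tasks and project
    both onto the same 2D plane, so trajectories from old and new
    tasks can be compared in one coordinate frame."""
    pooled = np.vstack([latents_a, latents_b])
    center = pooled.mean(axis=0)
    # top-2 right singular vectors of the centered pool span the plane
    _, _, vt = np.linalg.svd(pooled - center, full_matrices=False)
    plane = vt[:2].T                                 # shape (D, 2)
    return (latents_a - center) @ plane, (latents_b - center) @ plane
```

Fitting the plane on the pooled data (rather than on one task) is what makes overlap between the two trajectory clouds interpretable as preserved dynamics.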
Additional Links: PMID-41336489
Citation:
@article {pmid41336489,
year = {2025},
author = {Song, Z and Wu, S and Zhou, T and Wang, Y},
title = {Extracting Preserved Neural Latent Dynamics Across Tasks using Convolutional Transformer-based Variational Autoendecoder.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11251780},
pmid = {41336489},
issn = {2694-0604},
mesh = {Rats ; Animals ; Algorithms ; *Neurons/physiology ; *Neural Networks, Computer ; Brain-Computer Interfaces ; },
abstract = {Understanding how neural systems drive behavior is a fundamental goal in neuroscience. Numerous studies have demonstrated that the activity of large neural populations is often governed by low-dimensional neural dynamics. While much of the current research has focused on extracting informative and interpretable latent dynamics from individual motor tasks, it remains unclear whether these dynamics are preserved across different motor tasks. This question is particularly critical, as prior experience with a related task can facilitate faster learning in a new task. In this paper, we propose a Convolutional Transformer-based Variational Autoencoder (Conformer-VAE) to extract preserved neural latent dynamics across tasks by leveraging the rich spatiotemporal patterns in neural activity. We validate our approach using neural recordings from a rat, which first performed a one-lever pressing task (old task) and subsequently a two-lever discrimination task (new task). By projecting the inferred latent dynamics from both tasks onto a common 2D PCA plane, our results demonstrate that Conformer-VAE effectively captures preserved neural dynamics across tasks, outperforming baseline methods. Moreover, these preserved dynamics enable faster decoder training for the new task by transferring the neural-to-movement mapping learned from the old task. This capability facilitates seamless real-time task switching, offering promising applications for brain-machine interface systems.Clinical Relevance-This work facilitates faster adaptation in brain-machine interfaces by preserving neural dynamics across tasks, offering potential benefits for neuroprosthetics and motor rehabilitation in patients with motor impairments.},
}
MeSH Terms:
Rats
Animals
Algorithms
*Neurons/physiology
*Neural Networks, Computer
Brain-Computer Interfaces
RevDate: 2025-12-03
CmpDate: 2025-12-03
Validation of a Novel Protocol for Whole-Sentence Imagined Speech Acquisition: Advancing Brain-Computer Interface Applications.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
This study aims to validate a novel protocol for whole-sentence imagined speech acquisition, building upon and addressing limitations of a previous single-word acquisition protocol. Eight participants (gender-balanced, mean age 21.3±6 years) were recruited. Participant attention indices and session variations were evaluated across multiple sessions. The protocol successfully maintains participant engagement while effectively stimulating language imagination processes. The neurophysiological findings, particularly the activation patterns in specific frequency bands and cortical regions, align well with the established literature on imagined speech processing. The enhanced delta-band activation observed during second sessions, associated with memory mechanisms, provides valuable insight into the cognitive processes involved in repeated imagined speech tasks. These findings contribute to the broader field of Brain-Computer Interface (BCI) development and suggest potential applications in clinical settings, particularly for individuals with speech impairments.
Additional Links: PMID-41336487
Citation:
@article {pmid41336487,
year = {2025},
author = {Iacomi, F and Tiberio, P and Tonon, T and Perugini, S and Farabbi, A and Barbieri, R and Mainardi, L},
title = {Validation of a Novel Protocol for Whole-Sentence Imagined Speech Acquisition: Advancing Brain-Computer Interface Applications.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11251720},
pmid = {41336487},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; *Speech/physiology ; Male ; Female ; *Imagination/physiology ; Young Adult ; Adult ; Electroencephalography/methods ; },
abstract = {This study aims to validate a novel protocol for whole-sentence imagined speech acquisition, building upon and addressing limitations of a previous single-word acquisition protocol. Eight participants (gender-balanced, mean age 21.3±6 years) were recruited for this study. Participant attention indices, and session variations were evaluated across multiple sessions. The protocol successfully maintains participant engagement while effectively stimulating language imagination processes. The neurophysiological findings, particularly the activation patterns in specific frequency bands and cortical regions, align well with established literature on imagined speech processing. The enhanced delta band activation observed during second sessions, associated with memory mechanisms, provides valuable insight into the cognitive processes involved in repeated imagined speech tasks. These findings contribute to the broader field of Brain Computer Interface (BCI) development and suggest potential applications in clinical settings, particularly for individuals with speech impairments.},
}
MeSH Terms:
Humans
*Brain-Computer Interfaces
*Speech/physiology
Male
Female
*Imagination/physiology
Young Adult
Adult
Electroencephalography/methods
RevDate: 2025-12-03
CmpDate: 2025-12-03
Enhancing EEG Classification for Motor Imagery Control of a VR Game based on Deep Learning Techniques on Small Datasets.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Motor imagery-based Brain-Computer Interfaces (BCIs) suffer from limited accuracy when the EEG dataset is recorded from naive BCI users, due to noisy components. Neural networks capture more robust representations of EEG features but require large amounts of data, which are challenging to collect because of long motor imagery training sessions. On the other hand, linear- and Riemann-based machine learning algorithms achieve above-chance accuracy on small-scale datasets, but their performance degrades on noisy datasets. To address this issue, we implemented a Wasserstein Generative Adversarial Network (WGAN) for data augmentation to prevent overfitting in the deep classifier while reaching training convergence faster than existing models. For classification, we developed a Convolutional Neural Network (CNN) to eliminate noisy components caused by BCI illiteracy and to extract robust temporal representations of EEG features. To evaluate our system, we designed a VR maze game that uses the proposed BCI system to translate the EEG signal into movement for a playable character. We achieve increased accuracy compared to conventional machine learning models, with minimal overfitting, on our own dataset recorded from 16 naive BCI users.
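The augmentation strategy in this abstract rests on the standard Wasserstein-GAN objective. As a minimal sketch of that objective only (the paper's network architectures and hyperparameters are not given in the abstract, so all names here are illustrative), the critic and generator losses and the weight-clipping Lipschitz constraint can be written as:

```python
# Sketch of the WGAN training signal behind the EEG augmentation described
# above. Standard Wasserstein-GAN losses with weight clipping; illustrative,
# not the paper's implementation.

def critic_loss(real_scores, fake_scores):
    """WGAN critic loss: -(E[D(real)] - E[D(fake)]); minimizing this widens
    the critic's score gap between real and generated EEG epochs."""
    return -(sum(real_scores) / len(real_scores)
             - sum(fake_scores) / len(fake_scores))

def generator_loss(fake_scores):
    """Generator loss: -E[D(fake)]; the generator tries to raise the
    critic's score on generated epochs."""
    return -sum(fake_scores) / len(fake_scores)

def clip_weights(weights, c=0.01):
    """Weight clipping to [-c, c] keeps the critic approximately
    1-Lipschitz, as in the original WGAN formulation."""
    return [max(-c, min(c, w)) for w in weights]
```

In practice each training step alternates several critic updates (with clipping) per generator update; the generated epochs are then mixed into the CNN's training set.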
Additional Links: PMID-41336480
@article {pmid41336480,
year = {2025},
author = {Ramiotis, G and Mania, K},
title = {Enhancing EEG Classification for Motor Imagery Control of a VR Game based on Deep Learning Techniques on Small Datasets.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11251707},
pmid = {41336480},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; Humans ; *Brain-Computer Interfaces ; *Deep Learning ; Algorithms ; *Virtual Reality ; Neural Networks, Computer ; *Video Games ; Signal Processing, Computer-Assisted ; *Imagination ; Movement ; },
abstract = {Motor imagery-based Brain-Computer Interfaces (BCIs) suffer from limited accuracy when the EEG dataset is recorded from naive BCI users due to noisy components. Neural networks capture more robust representations of EEG features, but require large amount of data which is challenging to collect, due to long motor imagery training sessions. On the other hand, linear- and Riemann-based machine learning algorithms achieve above chance-level accuracy on small scale datasets, but, performance degrades on noisy datasets. To address this issue, we implemented a Wasserstein Generative Adversarial Network (WGAN) for data augmentation to prevent overfitting for the deep classifier, while reaching training convergence faster than existing models. For classification, we developed a Convolutional Neural Network (CNN) to eliminate noisy components caused by BCI illiteracy and extract robust temporal representations of EEG features. To evaluate our system, we designed a VR maze game utilizing the proposed BCI system to translate the EEG signal into movement for a playable character. We achieve increased accuracy, compared to conventional machine learning models, with minimal overfitting, on our own dataset, recorded from 16 naive BCI users.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Effect of Electrode Reduction on the Error-Related Potential Detection During the Start of the Gait.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Self-correcting Brain-Machine Interfaces based on Motor Imagery (MI-BMIs) that use Error-Related Potentials (ErrP) are a promising approach to improving system accuracy and enhancing feasibility for the neurorehabilitation of patients with spinal cord injuries (SCI). However, these technologies require extensive preparation time, which shortens the therapy session and fatigues the patient even before starting, potentially reducing the therapy's effectiveness. To address this issue, this study evaluates five electrode configurations to determine the impact of electrode reduction on ErrP detection at the start of gait with a lower-limb exoskeleton. The results indicate that reducing the number of electrodes does not significantly affect detection performance and does reduce false positive rates (FPR). These findings therefore support the feasibility of a reduced configuration of 11 electrodes to enhance BMI usability while maintaining detection reliability. Clinical relevance: The long preparation time required for MI-BMI therapies poses a significant challenge. As a result, patients may begin therapy fatigued or experience rapid exhaustion, limiting their engagement in the rehabilitation process. To address this issue, this study explores electrode reduction for ErrP detection as a strategy to minimize preparation time, enhancing the feasibility of MI-BMIs for clinical applications.
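The comparison in this abstract hinges on two detection metrics: detection performance (true-positive rate) and false-positive rate, computed per electrode configuration. A minimal sketch of those metrics for binary ErrP detection (the study's actual classifier and scoring pipeline are not described in the abstract):

```python
# True-positive and false-positive rates for binary ErrP detection,
# the two quantities compared across electrode configurations above.
# Labels: 1 = ErrP present, 0 = no ErrP.

def detection_metrics(y_true, y_pred):
    """Return (TPR, FPR) for paired lists of true and predicted labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    pos = sum(y_true)
    neg = len(y_true) - pos
    tpr = tp / pos if pos else 0.0
    fpr = fp / neg if neg else 0.0
    return tpr, fpr
```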
Additional Links: PMID-41336461
@article {pmid41336461,
year = {2025},
author = {Soriano-Segura, P and Quiles, V and Ortiz, M and Ianez, E and Azorin, JM},
title = {Effect of Electrode Reduction on the Error-Related Potential Detection During the Start of the Gait.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11251757},
pmid = {41336461},
issn = {2694-0604},
mesh = {Humans ; *Gait/physiology ; Electrodes ; *Brain-Computer Interfaces ; Male ; Adult ; },
abstract = {Self-correcting Brain-Machine Interfaces based on Motor Imagery (MI-BMIs) using Error-Related Potentials (ErrP) are a promising approach to improve the accuracy of the system and enhancing their feasibility for the neurorehabilitation of patients with spinal cord injuries (SCI). However, these technologies require extensive preparation time, which shortens the therapy session and causes fatigue in the patient even before starting, potentially reducing the therapy's effectiveness. To address this issue, this study evaluates five electrode configurations to determine the impact of electrode reduction on ErrP detection at the beginning of the gait with a lower-limb exoskeleton. The results indicate that reducing the number of electrodes does not significantly affect detection performance but does reduce false positive rates (FPR). Therefore, these findings support the feasibility of using a reduced electrode configuration of 11 electrodes to enhance BMI usability while maintaining detection reliability.Clinical relevance- The long preparation time required for MI-BMI therapies poses a significant challenge. As a result, patients may begin therapy fatigued or experience rapid exhaustion, limiting their engagement in the rehabilitation process. To address this issue, this study explores electrode reduction for ErrP detection as a strategy to minimize preparation time, enhancing the feasibility of MI-BMIs for clinical applications.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
EEG-based Syllable-Level Voice Activity Detection.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
The speech brain-computer interface (BCI), as an ideal means of achieving direct communication between the brain and the outside world, has become a research area of great interest. This work studied syllable-level voice activity detection (VAD) based on electroencephalogram (EEG) signals to help identify the presence or absence of speech-related EEG activity. We used EEG signals from 10 participants performing auditory (listening to stimuli) and speech (pronouncing syllables) tasks to measure brain activity. Speech-based VAD was employed to label the auditory stimuli and voice recordings, generating corresponding brain-activity labels, which were then used to classify resting and active (listening or pronouncing) EEG states, respectively. The experimental results showed that the EEG-based VAD model achieved accuracies of 90.93% and 69.57% for the speech production and auditory speech tasks, respectively. Accuracies were lower for cross-subject classification, at 72.63% and 61.15% for the two tasks. The experiment also compared the model's performance under different time-window conditions, but no significant correlation was found between window length and classification accuracy. This study provides new insights into EEG-based speech decoding, particularly for future self-paced speech BCI applications.
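The labeling step in this abstract applies a speech-based VAD to the audio recordings, then transfers the resulting frame labels to the time-aligned EEG. The simplest form of such a VAD is a short-time energy threshold; the sketch below is this generic version, not the paper's specific detector (which the abstract does not describe):

```python
# Minimal energy-threshold VAD of the kind used to label audio frames,
# whose active/rest labels are then transferred to time-aligned EEG.

def frame_energies(signal, frame_len):
    """Split a 1-D audio signal into non-overlapping frames and return
    the mean squared energy of each frame."""
    return [sum(x * x for x in signal[i:i + frame_len]) / frame_len
            for i in range(0, len(signal) - frame_len + 1, frame_len)]

def vad_labels(signal, frame_len, threshold):
    """Per-frame labels: 1 = voice active, 0 = silence/rest."""
    return [1 if e > threshold else 0
            for e in frame_energies(signal, frame_len)]
```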
Additional Links: PMID-41336460
@article {pmid41336460,
year = {2025},
author = {Wang, X and Lai, YH and Chen, F},
title = {EEG-based Syllable-Level Voice Activity Detection.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11251715},
pmid = {41336460},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Brain-Computer Interfaces ; Male ; *Voice/physiology ; Adult ; Female ; *Speech/physiology ; Signal Processing, Computer-Assisted ; Young Adult ; },
abstract = {Speech brain-computer interface (BCI), as an ideal means to achieve direct communication between the brain and the outside world, has become a research area of great interest. This work studied syllable-level voice activity detection (VAD) based on electroencephalogram (EEG) signals to help identify the presence or absence of speech-related EEG activity. We utilized EEG signals from 10 participants performing auditory (listening to stimuli) and speech (pronouncing syllables) tasks to measure brain activity. Speech-Based VAD was employed to label the auditory stimuli and voice recordings, generating corresponding brain activity labels, which were then used to classify resting and active (listening or pronouncing) EEG states, respectively. The experimental results showed that the EEG-based VAD model achieved accuracies of 90.93% and 69.57% for the speech production and auditory speech tasks, respectively. The accuracies were lower in the cross-subject classification, with accuracies of 72.63% and 61.15% for the two tasks. Additionally, the experiment further compared the model's performance under different time window conditions, but no significant correlation was found between window length and classification accuracy. This study provided new insights into the application of EEG based speech decoding, particularly in future self-paced speech BCI applications.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
A More Rational and Efficient Kalman Filter Design for Motor Brain-Machine Interfaces.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
The Kalman Filter has long been one of the most widely used models in motor brain-machine interface (BMI) research due to its noise-handling capabilities and real-time adaptability. However, as a model originally developed for traditional control systems, its underlying Markov assumptions and observation-model designs may not always hold in BMI applications, potentially leading to oversimplifications. This paper examines the limitations that arise when applying the Kalman Filter to BMI and proposes the Dilated Kalman Filter, which performs Gaussian multiplication between the state-transition distribution and the observation-mapped state distribution in state space, thereby combining observation noise with BMI-specific observation-model noise and incorporating historical information from both states and observations. The proposed method improves the accuracy of the Kalman Filter while significantly enhancing computational efficiency, particularly when processing data from large numbers of neurons.
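For reference, the baseline that the Dilated Kalman Filter modifies is the textbook predict/update cycle, in which neural observations correct a predicted kinematic state via the Kalman gain. A scalar sketch of that standard cycle (this is the classical filter, not the paper's dilated variant):

```python
# One predict/update cycle of a scalar Kalman filter, the baseline
# decoder that the Dilated Kalman Filter above builds on.

def kalman_step(x, P, z, A, H, Q, R):
    """x, P : previous state estimate and its variance
    z       : new observation (e.g., a neural feature)
    A, H    : state-transition and observation coefficients
    Q, R    : process- and observation-noise variances
    Returns the updated (x, P)."""
    # Predict the next state from the transition model.
    x_pred = A * x
    P_pred = A * P * A + Q
    # Kalman gain blends the prediction with the observation.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    P_new = (1 - K * H) * P_pred
    return x_new, P_new
```

In motor BMIs the state is typically a kinematic vector (position/velocity) and H maps kinematics to binned firing rates; the paper's criticism targets the Markov assumption implicit in using only the previous state here.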
Additional Links: PMID-41336448
@article {pmid41336448,
year = {2025},
author = {Liu, G and Yan, Y and Cai, J and Cheok, AD and Qi Wu, E and Song, A},
title = {A More Rational and Efficient Kalman Filter Design for Motor Brain-Machine Interfaces.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11251710},
pmid = {41336448},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; Algorithms ; },
abstract = {The Kalman Filter has long been one of the most widely used models in motor brain-machine interface (BMI) research due to its noise handling capabilities and real-time adaptability. However, as a model originally developed for traditional control systems, its underlying assumptions of Markov property and the designs of observation models may not always hold true in the context of BMI applications, potentially leading to oversimplifications. This paper examines the limitations that arise when applying the Kalman Filter to BMI, and proposes the Dilated Kalman Filter, which performs Gaussian multiplication between state transition distribution and observation-mapped state distribution in state space, thereby combining observation noise with BMI-specific observation model noise, and consequently incorporates historical information from both states and observations. The proposed method improves the accuracy of Kalman Filter while significantly enhancing computational efficiency, particularly when processing data from large numbers of neurons.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Regularization SAME Method can Enhance the Performance of SSVEP-BCI with Very Weak Stimulation.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
The steady-state visual evoked potential-based brain-computer interface (SSVEP-BCI) has gained considerable attention due to its high information transfer rate (ITR) and stable performance. However, the comfort of SSVEP-BCIs still needs to be improved, as strong flickering stimuli cause visual fatigue. Reducing the pixel density of the stimuli has been demonstrated to be an effective way to improve comfort, but the signal-to-noise ratio (SNR) of the SSVEP induced by such very weak stimuli is low, posing challenges for decoding. It is therefore necessary to develop suitable strategies for decoding SSVEPs induced by very weak stimuli. This study employed the source aliasing matrix estimation (SAME) method to enlarge the dataset and improve decoding accuracy for SSVEPs induced by low-pixel-density stimuli, and further optimized SAME with a regularization method to achieve higher decoding performance. An SSVEP experiment was designed with various pixel densities (100%, 90%, 80%, 70%, 60%, 50%, 40%, 30%, 20%, 10%, and 1%) and frequencies (low: 7 Hz, 11 Hz, and 15 Hz; mid-to-high: 23 Hz, 31 Hz, and 39 Hz) to verify the methods. The results indicated that SAME significantly improved classification accuracy compared to the traditional method without SAME, especially under very weak stimulation (pixel densities ≤ 50%), with a maximum increase of 8.6%. Regularized SAME yielded a further significant enhancement, with a maximum improvement of 4.29% over SAME. The regularized SAME proposed in this study significantly improves SSVEP decoding performance under low-pixel-density stimuli, paving the way for comfortable and effective SSVEP-BCIs.
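Background for this entry: SAME synthesizes artificial SSVEP trials from sine/cosine templates at the stimulation frequency and its harmonics. The template set itself is the standard reference used by CCA-style SSVEP decoders; a sketch of generating it (this shows the reference construction only, not the SAME estimation or its regularization):

```python
import math

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Standard sine/cosine reference set for one SSVEP stimulation
    frequency: for each harmonic h, a sin and a cos at h*freq, sampled
    at rate fs. Returns a list of 2*n_harmonics template signals."""
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append([math.sin(2 * math.pi * h * freq * n / fs)
                     for n in range(n_samples)])
        refs.append([math.cos(2 * math.pi * h * freq * n / fs)
                     for n in range(n_samples)])
    return refs
```

For the 7 Hz condition in the study, `ssvep_reference(7.0, fs, n)` would give the 7 Hz and 14 Hz sine/cosine pairs; SAME then fits an aliasing matrix mapping such templates to the multichannel EEG.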
Additional Links: PMID-41336444
@article {pmid41336444,
year = {2025},
author = {Lin, L and Lin, J and Pu, Q and Zhou, H and Wang, H and Sun, J and Luo, R and Yu, G and Meng, L and He, F and Meng, J and Xu, M},
title = {Regularization SAME Method can Enhance the Performance of SSVEP-BCI with Very Weak Stimulation.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11251722},
pmid = {41336444},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Evoked Potentials, Visual/physiology ; Electroencephalography/methods ; Algorithms ; Male ; Signal-To-Noise Ratio ; Photic Stimulation ; Adult ; Signal Processing, Computer-Assisted ; Female ; },
abstract = {The steady-state visual evoked potential-based brain-computer interface (SSVEP-BCI) has gained considerable attention due to its high information transfer rate (ITR) and stable performance. However, the comfort of SSVEP-BCI still needs to be improved, as strong flickering stimuli cause users' visual fatigue. Reducing the pixel density of the stimuli has been demonstrated as an effective method to improve its comfort. However, the signal-to-noise rate (SNR) of the SSVEP signal induced by such very weak stimuli is low, posing challenges for their decoding. Therefore, it is necessary to develop suitable strategy for better decoding the SSVEP induced by very weak stimuli. This study employed the source aliasing matrix estimation (SAME) method to enlarge the dataset and improve decoding accuracy for SSVEP induced by low-pixel density stimuli. Additionally, this study further optimized the SAME with a regularization method to achieve much higher decoding performance. A SSVEP experiment was designed with various pixel densities (100%, 90%, 80%, 70%, 60%, 50%, 40%, 30%, 20%, 10% and 1%) and frequencies (low: 7Hz, 11Hz, and 15Hz; mid-to-high: 23Hz, 31Hz, and 39Hz) to verify our methods. The results indicated SAME significantly improved the classification accuracy compared to traditional method without the SAME, especially under very weak stimulation conditions (pixel densities ≤ 50%), with the maximum increase reaching 8.6%. Besides, regularization SAME further yielded a significant enhancement, achieved maximum improvements of 4.29% compared to SAME. The regularization SAME proposed in this study significantly improves SSVEP decoding performance under low-pixel density stimuli, paving the way for the development of comfortable and effective SSVEP-BCI.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Single Trial Classification of per-stimulus EEG between Different Speed Accuracy Tradeoffs Instruction.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
The speed-accuracy tradeoff (SAT) is a cornerstone concept in cognitive processing, highlighting the inherent trade-off between decision-making speed and accuracy. Patients may adopt different speed-accuracy strategies during neurologic consultation due to differences in understanding instructions or increased diagnostic time. Despite extensive investigation of the neural mechanisms underpinning the SAT, classifying neural data to differentiate between distinct SAT strategies remains largely unexplored. This study bridges that gap by implementing a deep learning framework to classify single-trial EEG signals according to participants' instructed response strategy (prioritizing either speed or accuracy), leveraging a dataset from 20 participants engaged in a mirror-image judgment task. The data were preprocessed and then transformed with the continuous wavelet transform to extract time-frequency features. Employing a channel-stacking technique, we organized the EEG data into RGB-like images, which were input into a RegNet convolutional neural network for classification. Ten-fold cross-validation showed that the occipital region achieved the highest classification accuracy (85.37%), followed by the parietal (82.97%), frontal (80.46%), and central (78.57%) regions. This study not only validates the feasibility of single-trial EEG classification in distinguishing between speed and accuracy strategies but also highlights its potential applications in adaptive brain-computer interfaces and cognitive neuroscience research. Clinical relevance: This study provides a novel method for real-time identification of cognitive strategies (speed vs. accuracy prioritization) via EEG, offering clinicians a tool to tailor neurofeedback or rehabilitation protocols to individualized neural signatures.
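The channel-stacking step mentioned in this abstract arranges per-channel time-frequency maps into an RGB-like image for a vision CNN. A minimal sketch, assuming three EEG channels map one-to-one onto the three color planes (the paper's exact stacking order and grouping are not specified in the abstract):

```python
# Stack three per-channel time-frequency maps (frequency x time) into
# one RGB-like image: pixel [f][t] holds (ch0, ch1, ch2) values.
# The three-channels-to-three-planes mapping is an assumption.

def stack_channels_rgb(tf_maps):
    """tf_maps: list of three 2-D lists, all the same shape.
    Returns a 2-D list of (r, g, b) tuples."""
    assert len(tf_maps) == 3, "RGB stacking needs exactly three channels"
    rows, cols = len(tf_maps[0]), len(tf_maps[0][0])
    return [[(tf_maps[0][f][t], tf_maps[1][f][t], tf_maps[2][f][t])
             for t in range(cols)] for f in range(rows)]
```

The resulting image can then be fed to any image classifier (RegNet in the study) without modifying its input layer.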
Additional Links: PMID-41336422
@article {pmid41336422,
year = {2025},
author = {Li, H and Zhang, M and Karkkainen, T and Meng, Z},
title = {Single Trial Classification of per-stimulus EEG between Different Speed Accuracy Tradeoffs Instruction.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11254002},
pmid = {41336422},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; Male ; Female ; *Signal Processing, Computer-Assisted ; Adult ; Deep Learning ; Neural Networks, Computer ; Young Adult ; },
abstract = {The speed-accuracy tradeoff represents a cornerstone concept in cognitive processing, highlighting the inherent trade-off between decision-making speed and accuracy. Patients may have different speed-accuracy strategies during their neurologic consultation due to differences in understanding of instructions or increased diagnostic time. Despite extensive investigations into the neural mechanisms underpinning speed-accuracy trade-off (SAT), the classification of neural data to differentiate between distinct SAT strategies remains largely unexplored. This study bridges this critical gap by implementing a deep learning framework to classify single-trial EEG signals based on participants' instructed response strategies-either prioritizing speed or accuracy and leveraging a dataset from 20 participants engaged in a mirror-image judgment task. The data underwent preprocessing and were subsequently transformed using continuous wavelet transformation to extract time-frequency features. Employing a channel-stacking technique, we organized the EEG data into RGB-like images, which were then input into a RegNet convolutional neural network for classification. Ten-fold cross-validation results demonstrated that the occipital region achieved the highest classification accuracy (85.37%), followed by the parietal (82.97%), frontal (80.46%), and central regions (78.57%). This study not only validates the feasibility of single-trial EEG classification in distinguishing between speed and accuracy strategies but also highlights its potential applications in adaptive brain-computer interfaces and cognitive neuroscience research.Clinical Relevance- This study provides a novel method for real-time identification of cognitive strategies (speed vs. accuracy prioritization) via EEG, offering clinicians a tool to tailor neurofeedback or rehabilitation protocols based on individualized neural signatures.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Study of Brain-Computer Interface Recognition Performance Crossing Action Observation Paradigms.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
An action observation-based brain-computer interface (AO-BCI) can induce visual motor imagery through biological motion while relying on its movement frequency to elicit steady-state visual evoked potentials. This hybrid BCI with dual-brain-region activation offers significant potential for stroke rehabilitation. Since different AO paradigms are employed in the rehabilitation of different limb movements, a limited training dataset can compromise recognition performance. This study therefore investigated, for the first time, BCI performance across different AO paradigms. Three AO paradigms, each containing four actions, were designed to establish an online BCI system. Task discriminant component analysis was used to analyze the online and offline EEG data. Three training schemes were developed to construct spatial filters, using target-session (TS) data, source-session (SS) data, and a combination of both. Results indicated that paradigm content significantly affected recognition performance (F=7.65, p=0.0039). The recognition accuracies of the four actions for the three AO paradigms were 71.86%, 89.71%, and 82.71%, respectively. Among the three training schemes, combining TS and SS data notably enhanced recognition accuracy for the AO paradigm that performed poorly with TS data alone (p=0.0319). This study demonstrated that EEG data from existing AO paradigms can be used to construct training sets for new paradigms, and that adding a small amount of data from the new paradigm can improve recognition performance. Future research should focus on data calibration methods specific to cross-AO paradigms to further enhance recognition accuracy. This work provides valuable insights for advancing AO-BCI applications in rehabilitation.
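The three training schemes compared in this abstract reduce to how the spatial-filter training set is assembled from target-session and source-session trials. A trivial sketch (the trial representation is illustrative; the study's actual TDCA training pipeline is not detailed in the abstract):

```python
# Assemble a training set under the three schemes compared above:
# target session only ("TS"), source session only ("SS"), or both.

def build_training_set(target_trials, source_trials, scheme):
    """Return the list of trials used to fit the spatial filters."""
    if scheme == "TS":
        return list(target_trials)
    if scheme == "SS":
        return list(source_trials)
    if scheme == "TS+SS":
        return list(target_trials) + list(source_trials)
    raise ValueError(f"unknown scheme: {scheme}")
```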
Additional Links: PMID-41336408
@article {pmid41336408,
year = {2025},
author = {Hu, G and Zeng, F and Tang, H and Zhao, Y and Zhang, X},
title = {A Study of Brain-Computer Interface Recognition Performance Crossing Action Observation Paradigms.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11254040},
pmid = {41336408},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; Electroencephalography/methods ; Male ; Adult ; Female ; Evoked Potentials, Visual/physiology ; Movement/physiology ; },
abstract = {Action observation-based brain-computer interface (AO-BCI) could induce visual motor imagery through biological motion while relying on its movement frequency to stimulate steady-state visual evoked potential. This hybrid BCI with dual-brain-region activation offers significant potential for stroke rehabilitation. Since varying AO paradigms are employed in the rehabilitation of different limb movements, a limited training dataset can compromise recognition performance. Thus, this study tried to investigate the BCI performance crossing different AO paradigms for the first time. Three AO paradigms, each containing four actions, were designed to establish an online BCI system. Task discriminant component analysis was utilized to analyze the online and offline EEG data. Three training schemes were developed to construct spatial filters including target session (TS) data, source session (SS) data, and a combination of both. Results indicated that the paradigm content significantly affected the recognition performance (F=7.65, p=0.0039). The recognition accuracies of the four actions for each AO paradigm were 71.86%, 89.71%, and 82.71%, respectively. Among the three training schemes, the combined TS and SS data approach notably enhanced recognition accuracy for the AO paradigm with poor performance using TS data alone (p=0.0319). This study demonstrated that EEG data from existing AO paradigms can be used to construct training sets for new paradigms. And combining a small amount of data from the new paradigm could improve the recognition performance. Future research should focus on developing data calibration methods specific to cross-AO paradigms to further enhance recognition accuracy. This work will provide valuable insights for advancing AO-BCI applications in rehabilitation.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Transfer Learning in EEG-based Reinforcement Learning Brain Machine Interfaces via Q-learning Kernel Temporal Differences.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Reinforcement learning-based brain machine interfaces (RLBMIs) are an emerging technology with many possible real-time applications. Transfer learning (TL) has proved beneficial, as it can improve the performance of machine learning algorithms by reusing knowledge learned from similar tasks; however, its application in BMIs has mainly focused on supervised learning. In this study, we investigate the effect of TL in RLBMIs for decoding freewill movement-related intentions from multichannel scalp electroencephalogram (EEG). We applied TL strategies to Q-learning Kernel Temporal Difference (Q-KTD), an algorithm that estimates the action-value function Q with a nonlinear function approximator based on kernel methods. A publicly available EEG dataset, recorded while healthy adult participants performed a key-pressing task, was used to decode premovement (before movement onset) and movement intention (after movement onset). Unlike most cue-based tasks, participants were free to choose which key to press, providing unique neural dynamics for decoding. TL was applied between and within subjects to decode the movement-related intentions. Significant increases in success rates (p < 0.01) were observed in 96% of cases, with increases ranging from 1.39% to 10.69%. These results support TL as an effective way to improve the learning efficiency of RL-based neural decoders. Clinical relevance: The improved performance of the neural decoder using transfer learning provides an efficient modeling strategy for RLBMIs that can assist patients with neurological disorders.
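In kernel temporal difference methods such as the Q-KTD named in this abstract, the Q-function is a kernel expansion over stored centers, and each TD error adds a new center weighted by that error. The sketch below shows this generic KTD update (learning rate, kernel width, and the dictionary-sparsification details of the actual Q-KTD algorithm are not given in the abstract, so the values here are illustrative); transfer learning then amounts to initializing the centers and coefficients from a source subject instead of starting empty:

```python
import math

def gauss_kernel(x, c, sigma=1.0):
    """Gaussian kernel between feature vectors x and c."""
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-d2 / (2 * sigma ** 2))

def q_value(x, centers, alphas):
    """Kernel expansion: Q(x) = sum_i alpha_i * k(x, c_i)."""
    return sum(a * gauss_kernel(x, c) for c, a in zip(centers, alphas))

def ktd_update(x, reward, q_next, centers, alphas, eta=0.5, gamma=0.9):
    """One kernel TD step: the TD error delta = r + gamma*Q(next) - Q(x)
    becomes the coefficient of a new kernel center placed at x.
    Mutates centers/alphas in place and returns delta."""
    delta = reward + gamma * q_next - q_value(x, centers, alphas)
    centers.append(list(x))
    alphas.append(eta * delta)
    return delta
```

Starting with `centers`/`alphas` copied from a previously trained subject (rather than empty lists) is the between-subject transfer idea: the decoder begins from a useful Q-function and refines it online.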
Additional Links: PMID-41336362
@article {pmid41336362,
year = {2025},
author = {McDorman, RA and Raj Thapa, B and Kim, J and Bae, J},
title = {Transfer Learning in EEG-based Reinforcement Learning Brain Machine Interfaces via Q-learning Kernel Temporal Differences.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11253000},
pmid = {41336362},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Brain-Computer Interfaces ; *Machine Learning ; Male ; Adult ; Algorithms ; Female ; Young Adult ; Movement ; },
abstract = {Reinforcement learning based brain machine interfaces (RLBMIs) is an emerging technology with many possible real-time applications. Transfer learning (TL) has proved beneficial as it can improve performance of machine learning algorithms by reusing learned knowledge from similar tasks. However, its application in BMIs has mainly focused on supervised learning approaches. In this study, we investigate the effect of TL in RLBMIs to decode freewill movement related intentions using multichannel scalp electroencephalogram (EEG). We applied TL strategies to Q-learning Kernel Temporal Difference (Q-KTD), which is an algorithm to estimate the action value function, Q, by a nonlinear function approximator using kernel methods. A publicly available EEG dataset recorded while healthy adult participants conduct a key pressing task was used to decode premovement (before movement onset) and movement intention (after movement onset). Differently from most cue-based tasks, participants had freewill to choose the key being pressed, providing unique neural dynamics for decoding. TL was applied between and within subjects to decode the movement related intentions. Significant increase on success rates (p < 0.01) were observed in 96% cases. The success rate increases in each case ranged from 1.39 to 10.69%. These results support the use of TL as an effective way to improve the efficiency of RL-based neural decoder's learning.Clinical Relevance- The improved performance of the neural decoder using transfer learning provides efficient modeling strategy of RLBMIs that can assist patients with neurological disorders.},
}
MeSH Terms:
Humans
*Electroencephalography/methods
*Brain-Computer Interfaces
*Machine Learning
Male
Adult
Algorithms
Female
Young Adult
Movement
RevDate: 2025-12-03
CmpDate: 2025-12-03
Towards the Correction of Covariate Shift in EEG-Based Passive Brain-Computer Interfaces for Out-of-Lab Applications.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
The increasing adoption of wearable EEG technology is enabling the development of passive Brain-Computer Interface (pBCI) systems for near-future, real-world applications such as Industry 5.0. However, one major challenge in classifying electroencephalographic (EEG) signals in these settings is covariate shift, which occurs when the distribution of the data changes between training and testing sessions due to variations in EEG headset positioning. This study investigates the effectiveness of a linear transformation approach in mitigating the negative effect of covariate shift. Simulations were conducted under different shift conditions (i.e., deviations of the headset position from the original one) to evaluate (i) the performance of the transformation function used to mitigate covariate shift and (ii) the influence that changes of reference and/or channels have on classification performance. Results show that normalizing covariate shift-affected data (i.e., target) using shift-free data as a template (i.e., source) helps mitigate the negative impact of covariate shift, leading to improved classification performance: the accuracy loss drops from 14% to 6% in the worst configuration and from 5% to 4% in the best configuration. This improvement is more pronounced when the shift is larger, i.e., when both the reference and channels change between the control dataset and the test dataset. These findings have significant implications for the development of robust and reliable pBCI models for out-of-the-lab contexts.
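The template-based normalization described above can be sketched as a per-channel linear rescaling of the shifted target session onto the source session's statistics (a minimal illustration of the idea, not the paper's exact transformation function):

```python
import numpy as np

def align_to_template(target, source):
    """Per-channel linear re-normalization (sketch).

    Rescales covariate-shift-affected `target` data (n_samples x n_channels)
    so its per-channel mean and variance match those of the shift-free
    `source` template recorded with the original headset placement.
    """
    t_mu, t_sd = target.mean(axis=0), target.std(axis=0)
    s_mu, s_sd = source.mean(axis=0), source.std(axis=0)
    return (target - t_mu) / (t_sd + 1e-12) * s_sd + s_mu
```

Any purely affine distortion of the channels (gain and offset changes from re-positioning) is undone exactly by this mapping; more complex shifts would need the richer transformations studied in the paper.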
Additional Links: PMID-41336341
@article {pmid41336341,
year = {2025},
author = {Germano, D and Ronca, V and Capotorto, R and Di Flumeri, G and Borghini, G and Giorgi, A and Babiloni, F and Arico, P},
title = {Towards the Correction of Covariate Shift in EEG-Based Passive Brain-Computer Interfaces for Out-of-Lab Applications.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11252974},
pmid = {41336341},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; *Electroencephalography/methods ; Humans ; Signal Processing, Computer-Assisted ; Algorithms ; },
abstract = {The increasing adoption of wearable EEG technology is enabling the development of passive Brain-Computer Interface (pBCI) systems for real-world applications, in the near future, such as Industry 5.0. However, one major challenge in classifying electroencephalographic (EEG) signals in these settings is covariate shift, which occurs when the distribution of the data changes between training and testing sessions due to variations in EEG headset positioning. This study investigates the effectiveness of a linear transformation approach to mitigate the negative effect of covariate shift. Simulations were conducted by using different shift conditions (i.e. deviation of the headset position from the original one), to evaluate (i) the performance of the transformation function used for mitigating the covariate shift occurrence and (ii) the importance that the change of reference and/or channels has on the classification performance. Results show that normalizing covariate shift-affected data (i.e., target) using shift-free data as a template (i.e., source) helps mitigate the negative impact of covariate shift, leading to improved classification performanceThe accuracy loss drops from 14% to 6% in the worst configuration and from 5% to 4% in the best configuration. This improvement is more pronounced when the shift is larger, i.e., when both the reference and channels change between the control dataset and the test dataset. These findings have significant implications for the development of robust and reliable pBCI models for out-of-the-lab contexts.},
}
MeSH Terms:
*Brain-Computer Interfaces
*Electroencephalography/methods
Humans
Signal Processing, Computer-Assisted
Algorithms
RevDate: 2025-12-03
CmpDate: 2025-12-03
Can ICA-Based Artifact Removal Affect Deep Learning Decoding Accuracy? Yes!.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
In brain-computer interfaces (BCIs), Independent Component Analysis (ICA) for artifact removal has been widely adopted in traditional machine learning-based EEG decoding, but its utility in deep learning-based EEG decoding remains understudied. This paper investigated the impact of ICA-based artifact removal on the accuracy of deep learning models for decoding motor imagery and motor execution from EEG signals in short time windows. We employed an ICA-based approach named ERASE for automatic artifact removal and evaluated the performance of three decoding approaches: CNN, LSTM, and CEBRA. Compared to before artifact removal, the F1-score improved by averages of 27.90% (CNN), 22.06% (LSTM), and 28.38% (CEBRA) for motor execution tasks in healthy subjects. For motor imagery tasks in stroke patients, the F1-score improved by averages of 18.90% (CNN), 21.04% (LSTM), and 25.84% (CEBRA). Topographic maps and manifold visualizations further confirmed that ICA enhances the spatial specificity and interpretability of neural signals. These findings suggest that ICA-based artifact removal is a valuable preprocessing step for deep learning-based EEG decoding, particularly in scenarios with significant artifact contamination, offering potential benefits for clinical applications such as stroke rehabilitation.
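A generic version of this preprocessing step (using scikit-learn's FastICA rather than the paper's ERASE method, and assuming the artifact components have already been identified by some upstream criterion) might look like:

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_components(eeg, artifact_idx, random_state=0):
    """Generic ICA clean-up sketch (not the paper's ERASE method).

    eeg: (n_samples, n_channels). Independent components listed in
    `artifact_idx` are zeroed before reconstructing the channel data,
    removing their contribution (e.g. eye blinks) from every channel.
    """
    ica = FastICA(n_components=eeg.shape[1], random_state=random_state)
    sources = ica.fit_transform(eeg)       # (n_samples, n_components)
    sources[:, artifact_idx] = 0.0         # drop artifact components
    return ica.inverse_transform(sources)  # back to channel space
```

Automatic identification of which components are artifacts (by kurtosis, correlation with EOG channels, or learned classifiers, as ERASE does) is the hard part that this sketch leaves out.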
Additional Links: PMID-41336339
@article {pmid41336339,
year = {2025},
author = {Hu, C and Liu, Q and Luo, J and Lu, Y and Jiang, N and Li, G and Huai, Y and Li, Y},
title = {Can ICA-Based Artifact Removal Affect Deep Learning Decoding Accuracy? Yes!.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253033},
pmid = {41336339},
issn = {2694-0604},
mesh = {Humans ; *Deep Learning ; *Artifacts ; *Electroencephalography/methods ; Brain-Computer Interfaces ; *Signal Processing, Computer-Assisted ; Male ; Female ; Adult ; Stroke/physiopathology ; },
abstract = {Regarding brain-computer interfaces (BCIs), the effectiveness of Independent Component Analysis (ICA) for artifact removal in traditional machine learning-based EEG decoding has been widely implemented. However, its utility in deep learning-based EEG decoding remains understudied. This paper investigated the impact of ICA-based artifact removal on the accuracy of deep learning models for decoding motor imagery and motor execution from EEG signals in short time windows. We employed an ICA-based artifact removal approach named ERASE for automatic artifact removal and evaluated the performance of three decoding approaches: CNN, LSTM, and CEBRA. Compared to before artifact removal, The F1-score improved by averages of 27.90% (CNN), 22.06% (LSTM), and 28.38% (CEBRA) after artifacts removal for motor execution tasks in healthy subjects. For motor imagery tasks in stroke patients,The F1-score improved by averages of 18.90% (CNN), 21.04% (LSTM), and 25.84% (CEBRA). Topographic maps and manifold visualizations further confirmed that ICA enhances the spatial specificity and interpretability of neural signals. These findings suggest that ICA-based artifact removal is a valuable preprocessing step for deep learning-based EEG decoding, particularly in scenarios with significant artifact contamination, offering potential benefits for clinical applications such as stroke rehabilitation.},
}
MeSH Terms:
Humans
*Deep Learning
*Artifacts
*Electroencephalography/methods
Brain-Computer Interfaces
*Signal Processing, Computer-Assisted
Male
Female
Adult
Stroke/physiopathology
RevDate: 2025-12-03
CmpDate: 2025-12-03
Decoding Human Attentive States from Spatial-temporal EEG Patches Using Transformers.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Learning the spatial topology of electroencephalogram (EEG) channels and their temporal dynamics is crucial for decoding attention states. This paper introduces EEG-PatchFormer, a transformer-based deep learning framework designed specifically for EEG attention classification in Brain-Computer Interface (BCI) applications. By integrating a Temporal CNN for frequency-based EEG feature extraction, a pointwise CNN for feature enhancement, and Spatial and Temporal Patching modules for organizing features into spatial-temporal patches, EEG-PatchFormer jointly learns spatial-temporal information from EEG data. Leveraging the global learning capabilities of the self-attention mechanism, it captures essential features across brain regions over time, thereby enhancing EEG data decoding performance. EEG-PatchFormer surpasses existing benchmarks in accuracy, area under the ROC curve (AUC), and macro-F1 score on a public cognitive attention dataset. The code is available at: https://github.com/yi-ding-cs/EEG-PatchFormer.
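The spatial-temporal patching idea can be illustrated by slicing one trial into region-by-window tokens, which are then fed to a transformer (the region grouping and window length below are hypothetical, not the paper's configuration):

```python
import numpy as np

def make_patches(eeg, regions, win):
    """Split one EEG trial into spatial-temporal patch tokens (an
    illustrative sketch of the patching idea, not the exact
    EEG-PatchFormer pipeline).

    eeg: (n_channels, n_times) array; regions: list of equal-sized
    channel-index lists (a hypothetical spatial grouping, e.g.
    frontal/central/parietal); win: temporal window length in samples.
    Returns an (n_regions * n_windows, region_size * win) token matrix.
    """
    n_win = eeg.shape[1] // win
    patches = []
    for ch_idx in regions:
        block = eeg[np.asarray(ch_idx)]  # channels of one scalp region
        for w in range(n_win):
            patches.append(block[:, w * win:(w + 1) * win].ravel())
    return np.array(patches)
```

Self-attention over these tokens then lets the model relate any scalp region at any time to any other, which is the "global learning" the abstract refers to.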
Additional Links: PMID-41336297
@article {pmid41336297,
year = {2025},
author = {Ding, Y and Lee, JH and Zhang, S and Luo, T and Guan, C},
title = {Decoding Human Attentive States from Spatial-temporal EEG Patches Using Transformers.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11254148},
pmid = {41336297},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Attention/physiology ; Brain-Computer Interfaces ; *Signal Processing, Computer-Assisted ; Deep Learning ; Algorithms ; Neural Networks, Computer ; ROC Curve ; },
abstract = {Learning the spatial topology of electroencephalogram (EEG) channels and their temporal dynamics is crucial for decoding attention states. This paper introduces EEG-PatchFormer, a transformer-based deep learning framework designed specifically for EEG attention classification in Brain-Computer Interface (BCI) applications. By integrating a Temporal CNN for frequency-based EEG feature extraction, a pointwise CNN for feature enhancement, and Spatial and Temporal Patching modules for organizing features into spatial-temporal patches, EEG-PatchFormer jointly learns spatial-temporal information from EEG data. Leveraging the global learning capabilities of the self-attention mechanism, it captures essential features across brain regions over time, thereby enhancing EEG data decoding performance. Demonstrating superior performance, EEG-PatchFormer surpasses existing benchmarks in accuracy, area under the ROC curve (AUC), and macro-F1 score on a public cognitive attention dataset. The code can be found via: https://github.com/yi-ding-cs/EEG-PatchFormer.},
}
MeSH Terms:
Humans
*Electroencephalography/methods
*Attention/physiology
Brain-Computer Interfaces
*Signal Processing, Computer-Assisted
Deep Learning
Algorithms
Neural Networks, Computer
ROC Curve
RevDate: 2025-12-03
CmpDate: 2025-12-03
ChatBCI-4-ALS: A High-Performance, LLM-Driven, Intent-Based BCI Communication System for Individuals with ALS.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease that leads to significant motor and speech impairments, increasing the need for alternative means of communication to support quality of life. P300 speller brain-computer interfaces (BCIs) have shown promise in facilitating non-muscular communication by detecting P300 event-related potentials (ERPs) in response to visual stimuli. However, these systems are generally slow and cannot fully address the communication needs of ALS patients, especially when the primary goal is to convey intent with minimal cognitive load. In this paper, we present ChatBCI-4-ALS, the first intent-based BCI communication system designed for individuals with ALS. ChatBCI-4-ALS leverages large language models (LLMs) and employs a dynamic flash algorithm to enhance typing speed and enable efficient communication of the user's intent beyond exact lexical matches. Additionally, we introduce new semantic-based quantitative performance metrics to evaluate the effectiveness of intent-based communication. Results from online experiments suggest that ChatBCI-4-ALS achieves a record-breaking average spelling speed of 23.87 char/min (42.16 char/min in the best case) and a best information transfer rate (ITR) of 128.85 bits/min, marking an advancement in P300 BCI-based communication systems.
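ITR figures like the one quoted above are conventionally computed with the Wolpaw formula, which combines target-set size, selection accuracy, and selection rate (this is the standard definition, not the paper's new semantic metrics):

```python
import math

def itr_bits_per_min(n_targets, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits/min.

    n_targets: number of selectable symbols; accuracy: probability of a
    correct selection; selections_per_min: selection rate.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)            # perfect accuracy
    elif p <= 1.0 / n:
        bits = 0.0                     # at or below chance: clamp to zero
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * selections_per_min
```

For example, a 36-target speller at 94% accuracy transfers about 4.5 bits per selection, so high bits/min figures require both accurate and fast selections.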
Additional Links: PMID-41336280
@article {pmid41336280,
year = {2025},
author = {Hong, J and Rao, P and Wang, W and Chen, S and Najafizadeh, L},
title = {ChatBCI-4-ALS: A High-Performance, LLM-Driven, Intent-Based BCI Communication System for Individuals with ALS.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11253329},
pmid = {41336280},
issn = {2694-0604},
mesh = {*Amyotrophic Lateral Sclerosis/physiopathology ; Humans ; *Brain-Computer Interfaces ; Algorithms ; *Communication Devices for People with Disabilities ; Electroencephalography ; Event-Related Potentials, P300 ; Language ; },
abstract = {Amyotrophic lateral sclerosis (ALS) is a neurodegenerative disease that leads to significant motor and speech impairments, increasing the need for alternative means of communication to support quality of life. P300 speller brain computer interfaces (BCIs) have shown promise in facilitating non-muscular communication by detecting P300 event-related potentials (ERPs) in response to visual stimuli. However, these systems are generally slow and can not fully address the communication needs of ALS patients, specially, when the primary goal is to convey intent with minimal cognitive load. In this paper, we present ChatBCI-4-ALS, the first intent-based BCI communication system designed for individuals with ALS. ChatBCI-4-ALS leverages large language models (LLMs) and employs a dynamic flash algorithm to enhance typing speed, and enable efficient communication of the user's intent beyond exact lexical matches. Additionally, we introduce new semantic-based quantitative performance metrics to evaluate the effectiveness of intent-based communication. Results from online experiments suggest that ChatBCI-4-ALS achieves record-breaking average spelling speed of 23.87 char/min (with the best case scenario of 42.16 char/min), and a best information transfer rate (ITR) of 128.85 bits/min, marking an advancement in P300 BCI-based communication systems.},
}
MeSH Terms:
*Amyotrophic Lateral Sclerosis/physiopathology
Humans
*Brain-Computer Interfaces
Algorithms
*Communication Devices for People with Disabilities
Electroencephalography
Event-Related Potentials, P300
Language
RevDate: 2025-12-03
CmpDate: 2025-12-03
Harmonic Component Analysis: A novel training-free and asynchronous BCI classification method.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Assistive technologies can provide people with locked-in syndrome independence and improve their quality of life. However, existing brain-computer interfaces (BCIs) can be unreliable and require excessive training. Therefore, we investigate the possibility of a training-free BCI that can provide asynchronous and online control of assistive robotic technologies. We propose harmonic component analysis (HCA), a new training-free classifier for signals with known harmonic characteristics, such as steady-state visual evoked potentials (SSVEPs). To validate HCA, it is compared to the well-known canonical correlation analysis (CCA) using an offline dataset of 10 healthy participants who performed cued trials with 16 SSVEP targets. HCA achieved better performance than a three-component CCA with up to 74% lower computational cost. For asynchronous control, HCA achieved a detection accuracy of 85% with an average activation time of 1.6 s, against 77% after an average of 1.7 s for CCA. For continuous activation, HCA achieved a true positive rate of 65% with a false positive rate of 0.59% from 2 s after cue onset until 5 s after, while CCA achieved a true positive rate of 59% with a false positive rate of 0.27%. Thus, HCA is shown to be a well-suited SSVEP classifier for systems that require asynchronous classification without a calibration or training session.
Additional Links: PMID-41336226
@article {pmid41336226,
year = {2025},
author = {Kaseler, RL and Andreasen Struijk, LNS},
title = {Harmonic Component Analysis: A novel training-free and asynchronous BCI classification method.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253522},
pmid = {41336226},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Electroencephalography/methods ; Adult ; Algorithms ; Male ; Evoked Potentials, Visual/physiology ; Signal Processing, Computer-Assisted ; Female ; },
abstract = {Assistive technologies can provide people with locked-in syndrome independence and improve their quality of life. However, existing brain-computer interfaces (BCI) can be unreliable and require excessive training. Therefore, we investigate the possibility of a training-free BCI that can provide asynchronous and online control of assistive robotic technologies. We propose the harmonic component analysis (HCA), a new training-free classifier for signals with known harmonic characteristics, such as steady-state visually evoked potentials. To validate the HCA, it is compared to the well-known canonical correlation analysis (CCA), using an offline data set of 10 healthy participants who performed cue trials with 16 SSVEP-targets. The HCA achieved better performance than a three-component CCA with up to 74% lower computational cost. For asynchronous control, the HCA achieved a detection accuracy of 85% with an average activation time of 1.6s, against 77% after an average of 1.7s for the CCA. For continuous activation, the HCA achieved a true positive rate of 65% with a false positive rate of 0. 59% from 2 s after cue onset until 5 s after, while the CCA achieved a true positive rate of 59% with a false positive rate of 0. 27%. Thus, the HCA is shown to be a well-suited SSVEP-classifier for systems that require asynchronous classification without the need for a calibration or training-session.},
}
MeSH Terms:
*Brain-Computer Interfaces
Humans
*Electroencephalography/methods
Adult
Algorithms
Male
Evoked Potentials, Visual/physiology
Signal Processing, Computer-Assisted
Female
RevDate: 2025-12-03
CmpDate: 2025-12-03
Optimal Transport and Contrastive Learning for Brain Decoding of Musical Perception.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Brain decoding aims to reconstruct external stimuli from brain activity, providing insights into the neural representation of cognitive experiences. Music decoding from functional magnetic resonance imaging (fMRI) is particularly challenging due to the complexity of auditory processing and the temporal limitations of fMRI signals. In this study, we introduce a novel decoding framework that improves the alignment between fMRI activity and latent musical representations extracted using a pre-trained multimodal model (CLAP). We propose a dual-loss approach combining Optimal Transport and Contrastive Learning to enhance feature mapping and retrieval accuracy. The first loss ensures structural consistency between brain-predicted and true musical embeddings, while the contrastive loss refines the embedding space by maximizing similarities between corresponding pairs and minimizing non-correspondences. Using fMRI data from five subjects listening to music tracks from the GTZAN dataset, our method improves decoding performance, raising top-1 retrieval accuracy from 22.1% with traditional regression-based approaches to 29.3%. These results highlight the potential of integrating Optimal Transport and Contrastive Learning to improve brain decoding performance, paving the way for extending the approach to different sensory domains and applications in Brain-Computer Interfaces (BCI). Clinical relevance: This study could have clinical implications for understanding auditory processing disorders and developing neurorehabilitation strategies. By elucidating how the brain encodes complex auditory stimuli, this approach may contribute to BCI applications for speech and music perception restoration in individuals with hearing impairments or neurological conditions affecting auditory cognition.
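The contrastive half of the dual loss can be sketched as a symmetric InfoNCE objective over corresponding (brain-predicted, true) embedding pairs; the optimal-transport term and the paper's exact temperature are omitted here:

```python
import numpy as np

def info_nce(pred, target, tau=0.1):
    """Symmetric InfoNCE contrastive loss between brain-predicted and
    true music embeddings (sketch of the contrastive half of the paper's
    dual loss only). pred, target: (batch, dim), rows are matched pairs.
    """
    p = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = p @ t.T / tau  # cosine similarity matrix / temperature
    idx = np.arange(len(p))

    def xent(lg):
        # cross-entropy pulling each row's matched pair to the top
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pushes matched brain/music pairs together and mismatched pairs apart, which is what makes top-k retrieval (the paper's evaluation) possible in the shared embedding space.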
Additional Links: PMID-41336218
@article {pmid41336218,
year = {2025},
author = {Ciferri, M and Ferrante, M and Toschi, N},
title = {Optimal Transport and Contrastive Learning for Brain Decoding of Musical Perception.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253498},
pmid = {41336218},
issn = {2694-0604},
mesh = {*Music ; Humans ; Magnetic Resonance Imaging ; *Brain/physiology/diagnostic imaging ; *Auditory Perception/physiology ; Brain Mapping/methods ; Brain-Computer Interfaces ; Algorithms ; Male ; *Learning ; },
abstract = {Brain decoding aims to reconstruct external stimuli from brain activity, providing insights into the neural representation of cognitive experiences. Music decoding from functional magnetic resonance imaging (fMRI) is particularly challenging due to the complexity of auditory processing and the temporal limitations of fMRI signals. In this study, we introduce a novel decoding framework that improves the alignment between fMRI activity and latent musical representations extracted using a pre-trained multimodal model (CLAP). We propose a dual-loss approach combining Optimal Transport and Contrastive Learning to enhance feature mapping and retrieval accuracy. The first loss ensures structural consistency between brain-predicted and true musical embeddings, while the contrastive loss refines the embedding space by maximizing similarities between corresponding pairs and minimizing non-correspondences. Using fMRI data from five subjects listening to music tracks from the GTZAN dataset, our method achieves improved decoding performance, surpassing traditional regression-based approaches from 22.1% top-1 accuracy to 29.3%. These results highlight the potential of integrating Optimal Transport and Contrastive Learning to improve brain decoding performance, paving the way for extending the approach to different sensory domains and applications in Brain-Computer Interfaces (BCI).Clinical relevance- This study could have clinical implications for understanding auditory processing disorders and developing neurorehabilitation strategies. By elucidating how the brain encodes complex auditory stimuli, this approach may contribute to BCI applications for speech and music perception restoration in individuals with hearing impairments or neurological conditions affecting auditory cognition.},
}
MeSH Terms:
*Music
Humans
Magnetic Resonance Imaging
*Brain/physiology/diagnostic imaging
*Auditory Perception/physiology
Brain Mapping/methods
Brain-Computer Interfaces
Algorithms
Male
*Learning
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Spatio-Spectral Analysis of Decoding Imagined Speech from the Idle State.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Studies into speech imagery (SI) classification from electroencephalogram (EEG) data have generally focused on distinguishing imagined words from each other, but accurate discrimination from the idle state, when the user is relaxed, is also necessary for asynchronous brain-computer interfaces (BCIs). In this study, the frequency bands and scalp regions most important for distinguishing SI from the idle state were identified and related to underlying neural processes. Power spectral density (PSD) features were extracted from each channel, and a statistical analysis of the features, as well as a classification analysis involving six classifiers, was carried out. The parietal region was identified as the most important scalp region, whilst the delta, theta, and gamma bands were the most important frequency bands. Furthermore, the importance of the alpha band, and of the temporal, frontal-temporal, frontal-central, and parietal regions, varied significantly between the SI vs Idle and SI vs SI classification problems, highlighting the importance of including the idle state in SI classification studies. Clinical Relevance: This study identifies frequency bands and scalp regions that are significantly important for the SI vs Idle classification problem, which is important for asynchronous SI BCIs.
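Per-channel PSD band-power features of the kind analyzed here can be extracted with Welch's method; the band edges below are common conventions and may differ from the paper's exact definitions:

```python
import numpy as np
from scipy.signal import welch

# Conventional EEG band edges in Hz (assumed, not the paper's definitions)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(eeg, fs):
    """Per-channel PSD band-power features. eeg: (n_channels, n_samples).

    Returns a dict mapping band name -> (n_channels,) array of mean
    spectral power in that band, one feature per channel per band.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(fs, eeg.shape[1]))
    feats = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        feats[name] = psd[:, mask].mean(axis=1)
    return feats
```

Statistical tests and classifiers can then be run per band and per scalp region on these features, as in the paper's spatio-spectral analysis.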
Additional Links: PMID-41336204
@article {pmid41336204,
year = {2025},
author = {Padfield, N and Turk, S and Mujahid, K and Camilleri, T and Peng, Y and Camilleri, K},
title = {A Spatio-Spectral Analysis of Decoding Imagined Speech from the Idle State.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253510},
pmid = {41336204},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Speech/physiology ; *Imagination/physiology ; Brain-Computer Interfaces ; Male ; Adult ; Female ; },
abstract = {Studies into speech imagery (SI) classification from electroencephalogram (EEG) data have generally focused on distinguishing imagined words from each other, but accurate discrimination from the idle state, when the user is relaxed, is also necessary for asynchronous brain-computer interfaces (BCIs). In this study, frequency bands and scalp regions most important for distinguishing SI from the idle state were identified and related to underlying neural processes. Power spectral density (PSD) features were extracted from each channel, and a statistical analysis of the features, as well as a classification analysis involving six classifiers, was carried out. The parietal region was identified as the most important scalp region, whilst the delta, theta, and gamma bands were the most important frequency bands. Furthermore, the importance of the alpha band, and of the temporal, frontal-temporal, frontal-central, and parietal regions varied significantly between the SI vs Idle and SI vs SI classification problems, highlighting the importance of including the idle state in SI classification studies.Clinical Relevance-This study identifies frequency bands and scalp regions that are significantly important for the SI vs Idle classification problem, which is important for asynchronous SI BCIs.},
}
MeSH Terms:
Humans
*Electroencephalography/methods
*Speech/physiology
*Imagination/physiology
Brain-Computer Interfaces
Male
Adult
Female
RevDate: 2025-12-03
CmpDate: 2025-12-03
Fine-Tuning Strategies for Continual Online EEG Motor Imagery Decoding: Insights from a Large-Scale Longitudinal Study.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
This study investigates continual fine-tuning strategies for deep learning in online longitudinal electroencephalography (EEG) motor imagery (MI) decoding within a causal setting involving a large user group and multiple sessions per participant. We are the first to explore such strategies across a large user group, as longitudinal adaptation is typically studied in the single-subject setting with a single adaptation strategy, which limits the ability to generalize findings. First, we examine the impact of different fine-tuning approaches on decoder performance and stability. Building on this, we integrate online test-time adaptation (OTTA) to adapt the model during deployment, complementing the effects of prior fine-tuning. Our findings demonstrate that fine-tuning that successively builds on prior subject-specific information improves both performance and stability, while OTTA effectively adapts the model to evolving data distributions across consecutive sessions, enabling calibration-free operation. These results offer valuable insights and recommendations for future research in longitudinal online MI decoding and highlight the importance of combining domain adaptation strategies for improving BCI performance in real-world applications. Clinical Relevance: Our investigation enables more stable and efficient long-term motor imagery decoding, which is critical for neurorehabilitation and assistive technologies.
Additional Links: PMID-41336201
Citation:
@article {pmid41336201,
year = {2025},
author = {Wimpff, M and Aristimunha, B and Chevallier, S and Yang, B},
title = {Fine-Tuning Strategies for Continual Online EEG Motor Imagery Decoding: Insights from a Large-Scale Longitudinal Study.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253543},
pmid = {41336201},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; Humans ; Longitudinal Studies ; Brain-Computer Interfaces ; *Imagination/physiology ; Male ; Signal Processing, Computer-Assisted ; Deep Learning ; Adult ; Female ; Algorithms ; Movement ; },
abstract = {This study investigates continual fine-tuning strategies for deep learning in online longitudinal electroencephalography (EEG) motor imagery (MI) decoding within a causal setting involving a large user group and multiple sessions per participant. We are the first to explore such strategies across a large user group, as longitudinal adaptation is typically studied in the single-subject setting with a single adaptation strategy, which limits the ability to generalize findings. First, we examine the impact of different fine-tuning approaches on decoder performance and stability. Building on this, we integrate online test-time adaptation (OTTA) to adapt the model during deployment, complementing the effects of prior fine-tuning. Our findings demonstrate that fine-tuning that successively builds on prior subject-specific information improves both performance and stability, while OTTA effectively adapts the model to evolving data distributions across consecutive sessions, enabling calibration-free operation. These results offer valuable insights and recommendations for future research in longitudinal online MI decoding and highlight the importance of combining domain adaptation strategies for improving BCI performance in real-world applications. Clinical Relevance: Our investigation enables more stable and efficient long-term motor imagery decoding, which is critical for neurorehabilitation and assistive technologies.},
}
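As a toy illustration of the session-by-session causal adaptation setting studied above, the sketch below uses a linear `SGDClassifier` with `partial_fit` as a stand-in for the paper's deep decoder; the synthetic drifting features, session sizes, and drift rate are all assumptions:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)

def make_session(shift):
    """Synthetic 2-class MI features whose distribution drifts across sessions."""
    X = np.vstack([rng.normal(-1 + shift, 1.0, size=(100, 4)),
                   rng.normal(+1 + shift, 1.0, size=(100, 4))])
    y = np.array([0] * 100 + [1] * 100)
    return X, y

clf = SGDClassifier(random_state=0)
accs = []
for session in range(5):
    X, y = make_session(shift=0.1 * session)    # slow distribution drift
    if session > 0:
        accs.append(clf.score(X, y))            # causal: evaluate before adapting
    clf.partial_fit(X, y, classes=[0, 1])       # continue from prior sessions' weights
print(np.mean(accs))
```

Because `partial_fit` resumes from the previous weights, each session's decoder successively builds on prior subject-specific information, mirroring the fine-tuning regime the abstract reports as most stable.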
RevDate: 2025-12-03
CmpDate: 2025-12-03
AI-Driven Neurodiagnostics: A Scalable Framework for EEG Anomaly Detection Using a Distributed-Delay Neural Mass Model.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
The integration of biophysically grounded neural simulations with Artificial Intelligence (AI) has the potential to transform clinical neurodiagnostics by overcoming the inherent challenges of limited pathological EEG datasets. We present a novel AI-driven framework that leverages a Distributed-Delay Neural Mass Model (DD-NMM) to generate synthetic EEG signals replicating both healthy and pathological brain states. Through systematic parameter tuning and domain-specific data augmentation, we enrich the diversity of simulated signals, enabling robust anomaly detection using machine learning techniques. Our approach integrates supervised classification and unsupervised one-class anomaly detection, achieving over 95% accuracy in synthetic tests and over 89% when applied to empirical EEG data from epilepsy patients and healthy volunteers. By providing an engineered solution that bridges computational neuroscience with AI, this framework enhances early seizure detection, adaptive neurofeedback, and brain-computer interface applications. Our results demonstrate that theory-driven simulation, combined with state-of-the-art machine learning, can address critical gaps in medical AI, significantly advancing clinical neuroengineering. Clinical relevance: This study provides a scalable and interpretable AI-driven method for EEG anomaly detection, which can support clinicians in identifying seizure patterns and other neurological disorders with high accuracy. The integration of computational neuroscience with AI-based diagnostics offers a potential pathway for early intervention and personalized neurotherapeutic strategies.
Additional Links: PMID-41336191
Citation:
@article {pmid41336191,
year = {2025},
author = {Gonzalez-Mitjans, A and Salinas-Medina, A and Toussaint, PJ and Valdes-Sosa, P and Evans, A},
title = {AI-Driven Neurodiagnostics: A Scalable Framework for EEG Anomaly Detection Using a Distributed-Delay Neural Mass Model.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253534},
pmid = {41336191},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Artificial Intelligence ; Epilepsy/diagnosis/physiopathology ; Machine Learning ; Signal Processing, Computer-Assisted ; Brain-Computer Interfaces ; Algorithms ; Neural Networks, Computer ; },
abstract = {The integration of biophysically grounded neural simulations with Artificial Intelligence (AI) has the potential to transform clinical neurodiagnostics by overcoming the inherent challenges of limited pathological EEG datasets. We present a novel AI-driven framework that leverages a Distributed-Delay Neural Mass Model (DD-NMM) to generate synthetic EEG signals replicating both healthy and pathological brain states. Through systematic parameter tuning and domain-specific data augmentation, we enrich the diversity of simulated signals, enabling robust anomaly detection using machine learning techniques. Our approach integrates supervised classification and unsupervised one-class anomaly detection, achieving over 95% accuracy in synthetic tests and over 89% when applied to empirical EEG data from epilepsy patients and healthy volunteers. By providing an engineered solution that bridges computational neuroscience with AI, this framework enhances early seizure detection, adaptive neurofeedback, and brain-computer interface applications. Our results demonstrate that theory-driven simulation, combined with state-of-the-art machine learning, can address critical gaps in medical AI, significantly advancing clinical neuroengineering. Clinical relevance: This study provides a scalable and interpretable AI-driven method for EEG anomaly detection, which can support clinicians in identifying seizure patterns and other neurological disorders with high accuracy. The integration of computational neuroscience with AI-based diagnostics offers a potential pathway for early intervention and personalized neurotherapeutic strategies.},
}
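The unsupervised one-class anomaly-detection component described above can be illustrated with a toy stand-in. The synthetic "healthy" and "spiky" signals, the two hand-crafted features, and the `IsolationForest` detector below are all assumptions for illustration, not the paper's DD-NMM simulations or chosen model:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)

def features(sig):
    """Simple per-epoch features: variance and line length."""
    return np.array([sig.var(), np.abs(np.diff(sig)).sum()])

# Stand-in "simulated" signals: smooth background vs. spiky (anomalous) epochs.
healthy = [np.convolve(rng.standard_normal(512), np.ones(8) / 8, "same")
           for _ in range(200)]
spiky = [h + 5 * (rng.random(512) < 0.05) for h in healthy[:50]]

X_train = np.array([features(s) for s in healthy[:150]])
det = IsolationForest(random_state=0).fit(X_train)   # one-class model on "healthy"

flags = det.predict(np.array([features(s) for s in spiky]))  # -1 = anomaly
print((flags == -1).mean())
```

Training only on the normal class, then flagging departures from it, is the essence of the one-class setting the abstract pairs with supervised classification.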
RevDate: 2025-12-03
CmpDate: 2025-12-03
Emotion Decoding and Consciousness Evaluation in patients with DOC through EEG Microstate analysis.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Clinicians commonly employ the Coma Recovery Scale-Revised (CRS-R) as a standard tool for assessing patients with disorders of consciousness (DOC). However, the assessment is easily affected by subjective judgment, and patients with DOC are usually unable to provide adequate behavioral responses. Previous studies have indicated that emotion recognition-based brain-computer interface (BCI) can assist in the assessment of DOC, yet they lack more specific and quantitative indicators. This study is the first to apply electroencephalography (EEG) microstates for emotion recognition in patients with DOC. Specifically, EEG microstates were utilized to capture crucial spatio-temporal features of EEG signals, simplifying the rapidly changing EEG signals into a series of prototype topoplots. In this study, EEG data was recorded from 9 patients with DOC and 11 healthy volunteers. Among healthy participants, our system achieved an average classification accuracy of 94.16%, effectively demonstrating its success in eliciting and recognizing emotions. When applied to patients with DOC, the system yielded an average classification accuracy of 77.94%. The results of this study indicate that EEG microstate dynamics are associated with conscious processing in patients with DOC. However, further validation in a larger patient dataset is required to confirm these preliminary findings.
Additional Links: PMID-41336140
Citation:
@article {pmid41336140,
year = {2025},
author = {Huang, H and Chen, Z and You, Q and Pan, J and Xiao, J},
title = {Emotion Decoding and Consciousness Evaluation in patients with DOC through EEG Microstate analysis.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253041},
pmid = {41336140},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Emotions/physiology ; *Consciousness Disorders/physiopathology/diagnosis ; Male ; Female ; Adult ; Brain-Computer Interfaces ; *Consciousness ; Middle Aged ; Signal Processing, Computer-Assisted ; },
abstract = {Clinicians commonly employ the Coma Recovery Scale-Revised (CRS-R) as a standard tool for assessing patients with disorders of consciousness (DOC). However, the assessment is easily affected by subjective judgment, and patients with DOC are usually unable to provide adequate behavioral responses. Previous studies have indicated that emotion recognition-based brain-computer interface (BCI) can assist in the assessment of DOC, yet they lack more specific and quantitative indicators. This study is the first to apply electroencephalography (EEG) microstates for emotion recognition in patients with DOC. Specifically, EEG microstates were utilized to capture crucial spatio-temporal features of EEG signals, simplifying the rapidly changing EEG signals into a series of prototype topoplots. In this study, EEG data was recorded from 9 patients with DOC and 11 healthy volunteers. Among healthy participants, our system achieved an average classification accuracy of 94.16%, effectively demonstrating its success in eliciting and recognizing emotions. When applied to patients with DOC, the system yielded an average classification accuracy of 77.94%. The results of this study indicate that EEG microstate dynamics are associated with conscious processing in patients with DOC. However, further validation in a larger patient dataset is required to confirm these preliminary findings.},
}
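EEG microstate analysis, as used in the study above, reduces the multichannel signal to a few prototype topographies that alternate over time. A minimal k-means sketch on synthetic maps follows; the channel count, segment lengths, and 4-state ground truth are assumptions, not the paper's data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n_ch, n_t = 16, 1000

# Synthetic EEG built from 4 fixed topographies that alternate over time,
# a stand-in for the prototype maps that microstate analysis recovers.
protos = rng.standard_normal((4, n_ch))
states = np.repeat(rng.integers(0, 4, size=n_t // 20), 20)
eeg = protos[states].T + 0.1 * rng.standard_normal((n_ch, n_t))

# Normalise each time point's topography and cluster into 4 microstate classes.
maps = eeg.T / np.linalg.norm(eeg.T, axis=1, keepdims=True)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(maps)
coverage = np.bincount(labels, minlength=4) / n_t   # time fraction per microstate
print(coverage)
```

Dynamics such as coverage, mean duration, and transition probabilities of these labels are the kind of quantitative spatio-temporal features the abstract feeds into emotion classification.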
RevDate: 2025-12-03
CmpDate: 2025-12-03
EEG-based Auditory Attention Switch Detection with Multi-scale Gated Attention and Multi-task Learning based Hierarchical Spatiotemporal Networks.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Auditory attention switch detection (AASD) poses significant challenges for adaptive neurotechnologies, particularly under electroencephalogram (EEG) with low signal-to-noise ratios (SNRs). However, the performance of existing methods is limited due to insufficient feature discriminability and high detection delay. To solve the problem, this paper proposes a Hierarchical Spatiotemporal Network (HSTN) for detecting auditory attention switch from EEG signals. The model employs a hierarchical spatiotemporal encoder to extract spatiotemporal features of EEG signals, integrates short-term transient and long-term dependency information through a multi-scale gated attention mechanism, and synchronously optimizes auditory attention switch detection and auditory attention decoding tasks via a multi-task joint training strategy. Experimental results demonstrate that HSTN significantly outperforms baseline models in both auditory attention switch detection (AASD F1=0.89, accuracy 88.6%) and auditory attention decoding tasks (AAD accuracy 89.3%), with superior model parameter efficiency and inference time. Ablation experiments further validate the critical roles of multi-task learning, gated attention, and multi-scale convolutions. This study provides an efficient solution for auditory attention switch detection in complex auditory scenarios. Clinical Relevance: The study confirms that spatiotemporal feature encoding combined with multi-task joint training significantly enhances performance in EEG attention switch detection, providing a practical technical framework for enabling dynamic sound source enhancement in intelligent hearing aids and auditory brain-computer interface systems.
Additional Links: PMID-41336101
Citation:
@article {pmid41336101,
year = {2025},
author = {Wang, X and Wang, L and Ding, Y and Chen, F},
title = {EEG-based Auditory Attention Switch Detection with Multi-scale Gated Attention and Multi-task Learning based Hierarchical Spatiotemporal Networks.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11253070},
pmid = {41336101},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; *Attention/physiology ; Humans ; Signal Processing, Computer-Assisted ; Algorithms ; Signal-To-Noise Ratio ; *Auditory Perception/physiology ; },
abstract = {Auditory attention switch detection (AASD) poses significant challenges for adaptive neurotechnologies, particularly under electroencephalogram (EEG) with low signal-to-noise ratios (SNRs). However, the performance of existing methods is limited due to insufficient feature discriminability and high detection delay. To solve the problem, this paper proposes a Hierarchical Spatiotemporal Network (HSTN) for detecting auditory attention switch from EEG signals. The model employs a hierarchical spatiotemporal encoder to extract spatiotemporal features of EEG signals, integrates short-term transient and long-term dependency information through a multi-scale gated attention mechanism, and synchronously optimizes auditory attention switch detection and auditory attention decoding tasks via a multi-task joint training strategy. Experimental results demonstrate that HSTN significantly outperforms baseline models in both auditory attention switch detection (AASD F1=0.89, accuracy 88.6%) and auditory attention decoding tasks (AAD accuracy 89.3%), with superior model parameter efficiency and inference time. Ablation experiments further validate the critical roles of multi-task learning, gated attention, and multi-scale convolutions. This study provides an efficient solution for auditory attention switch detection in complex auditory scenarios. Clinical Relevance: The study confirms that spatiotemporal feature encoding combined with multi-task joint training significantly enhances performance in EEG attention switch detection, providing a practical technical framework for enabling dynamic sound source enhancement in intelligent hearing aids and auditory brain-computer interface systems.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
EEGScaler: A Deep Learning Network to Scale EEG Electrode and Samples for Hand Motor Imagery Speed Decoding.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Motor Imagery (MI)-based Brain-Computer Interface (MI-BCI) systems induce neuroplasticity, promoting rehabilitation in stroke patients. Existing MI-BCI systems decode bilateral MI actions from Electroencephalogram (EEG) data to facilitate motor recovery. However, such systems offer limited degrees of freedom. Decoding kinematics information, such as movement speed, can enhance control and provide a more natural interface with the environment. Decoding speed-related information from unilateral MI tasks is challenging due to the significant spatial overlap of neuronal sources and the inherently low spatial resolution of EEG. To address this, we propose EEGScaler, an end-to-end deep learning framework designed to decode slow v/s fast MI tasks by adaptively scaling EEG samples and electrodes with high discriminative value. EEGScaler leverages a Multi-Layer Perceptron (MLP) network to assign scale factors to both samples and electrodes. Spatiotemporal features are subsequently extracted using temporal and depth-wise convolution filters. The model is pre-trained on subject-independent data to learn filter weights, while subject-specific fine-tuning further optimizes the MLP-based scaling mechanism. The EEGScaler model performance is evaluated on 14 healthy subjects' data recorded while performing slow v/s fast unilateral MI tasks. The proposed model achieves an average cross-validated accuracy of 65.98% for decoding fast v/s slow MI speed tasks, outperforming existing methods by approximately 6%. The subject-specific scaling of samples and electrodes using an end-to-end deep learning model for speed from unilateral MI tasks is novel. By effectively decoding movement speed, EEGScaler enhances the degree of freedom in MI-BCI systems, paving the way for more intuitive and efficient neurorehabilitation applications. Clinical Relevance: This advancement has the potential to improve motor rehabilitation strategies by enabling more precise and adaptive BCI-driven therapy tailored to individual recovery needs.
Additional Links: PMID-41336083
Citation:
@article {pmid41336083,
year = {2025},
author = {Parashiva, PK and Gangadharan K, S and Vinod, AP},
title = {EEGScaler: A Deep Learning Network to Scale EEG Electrode and Samples for Hand Motor Imagery Speed Decoding.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11251649},
pmid = {41336083},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/instrumentation ; *Deep Learning ; Brain-Computer Interfaces ; Electrodes ; *Hand/physiology ; *Imagination/physiology ; Signal Processing, Computer-Assisted ; Movement ; Algorithms ; Male ; },
abstract = {Motor Imagery (MI)-based Brain-Computer Interface (MI-BCI) systems induce neuroplasticity, promoting rehabilitation in stroke patients. Existing MI-BCI systems decode bilateral MI actions from Electroencephalogram (EEG) data to facilitate motor recovery. However, such systems offer limited degrees of freedom. Decoding kinematics information, such as movement speed, can enhance control and provide a more natural interface with the environment. Decoding speed-related information from unilateral MI tasks is challenging due to the significant spatial overlap of neuronal sources and the inherently low spatial resolution of EEG. To address this, we propose EEGScaler, an end-to-end deep learning framework designed to decode slow v/s fast MI tasks by adaptively scaling EEG samples and electrodes with high discriminative value. EEGScaler leverages a Multi-Layer Perceptron (MLP) network to assign scale factors to both samples and electrodes. Spatiotemporal features are subsequently extracted using temporal and depth-wise convolution filters. The model is pre-trained on subject-independent data to learn filter weights, while subject-specific fine-tuning further optimizes the MLP-based scaling mechanism. The EEGScaler model performance is evaluated on 14 healthy subjects' data recorded while performing slow v/s fast unilateral MI tasks. The proposed model achieves an average cross-validated accuracy of 65.98% for decoding fast v/s slow MI speed tasks, outperforming existing methods by approximately 6%. The subject-specific scaling of samples and electrodes using an end-to-end deep learning model for speed from unilateral MI tasks is novel. By effectively decoding movement speed, EEGScaler enhances the degree of freedom in MI-BCI systems, paving the way for more intuitive and efficient neurorehabilitation applications. Clinical Relevance: This advancement has the potential to improve motor rehabilitation strategies by enabling more precise and adaptive BCI-driven therapy tailored to individual recovery needs.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Decoding Visual Imagination and Perception from EEG via Topomap Sequences.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
We propose a Topomap-based EEG decoding framework for distinguishing pictorial Imagination from Perception. By converting each trial's EEG signals into dense sequences of scalp voltage maps at short time intervals, our approach captures crucial spatiotemporal patterns that standard methods may overlook. We then apply a CNN with squeeze-and-excitation (SE) blocks to these Topomap "frames," enabling direct learning of both spatial topographies and rapid temporal fluctuations. Despite using only one trial per subject to simulate a data-scarce scenario, our model achieves 95.1% accuracy under a leave-one-subject-out (LOSO) cross-validation scheme. Results indicate clear neural distinctions between Imagination and Perception states, reflecting focused brain-region engagement during visual recall. In addition to confirming the viability of Topomaps as EEG feature representations, this study underscores their potential generalizability. We anticipate future extensions incorporating other modalities (orthographic, audio) and more advanced deep architectures will further expand the utility and robustness of this approach for brain-computer interface (BCI) applications. Clinical relevance: This framework offers a robust method for accurately distinguishing visual Imagination from Perception, even in data-scarce scenarios. It holds potential for enhancing diagnostic tools in cognitive disorders and refining BCI applications in clinical settings.
Additional Links: PMID-41336067
Citation:
@article {pmid41336067,
year = {2025},
author = {Ahmadi, H and Mesin, L},
title = {Decoding Visual Imagination and Perception from EEG via Topomap Sequences.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11251641},
pmid = {41336067},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; Humans ; *Imagination/physiology ; *Visual Perception/physiology ; Brain-Computer Interfaces ; Signal Processing, Computer-Assisted ; Neural Networks, Computer ; Adult ; Male ; },
abstract = {We propose a Topomap-based EEG decoding framework for distinguishing pictorial Imagination from Perception. By converting each trial's EEG signals into dense sequences of scalp voltage maps at short time intervals, our approach captures crucial spatiotemporal patterns that standard methods may overlook. We then apply a CNN with squeeze-and-excitation (SE) blocks to these Topomap "frames," enabling direct learning of both spatial topographies and rapid temporal fluctuations. Despite using only one trial per subject to simulate a data-scarce scenario, our model achieves 95.1% accuracy under a leave-one-subject-out (LOSO) cross-validation scheme. Results indicate clear neural distinctions between Imagination and Perception states, reflecting focused brain-region engagement during visual recall. In addition to confirming the viability of Topomaps as EEG feature representations, this study underscores their potential generalizability. We anticipate future extensions incorporating other modalities (orthographic, audio) and more advanced deep architectures will further expand the utility and robustness of this approach for brain-computer interface (BCI) applications. Clinical relevance: This framework offers a robust method for accurately distinguishing visual Imagination from Perception, even in data-scarce scenarios. It holds potential for enhancing diagnostic tools in cognitive disorders and refining BCI applications in clinical settings.},
}
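Converting EEG samples into topomap frames, as described above, amounts to interpolating per-channel voltages onto a 2-D scalp grid at each time point. A minimal sketch follows; the circular electrode layout, grid resolution, and linear interpolation choice are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(4)
n_ch = 16
# Hypothetical 2-D electrode coordinates on the unit disc.
theta = np.linspace(0, 2 * np.pi, n_ch, endpoint=False)
pos = np.c_[0.8 * np.cos(theta), 0.8 * np.sin(theta)]

def topomap_frame(sample, res=32):
    """Interpolate one EEG sample (n_ch,) onto a res x res scalp grid."""
    gx, gy = np.meshgrid(np.linspace(-1, 1, res), np.linspace(-1, 1, res))
    return griddata(pos, sample, (gx, gy), method="linear", fill_value=0.0)

eeg = rng.standard_normal((n_ch, 100))           # 16 channels, 100 samples
frames = np.stack([topomap_frame(eeg[:, t]) for t in range(100)])
print(frames.shape)                              # (100, 32, 32)
```

The resulting frame sequence is the image-like input a CNN (with SE blocks, in the paper's case) can consume directly.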
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Dynamic Mutual Information Measure of Phase Amplitude Coupling.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Phase-amplitude coupling (PAC) is a fundamental neural phenomenon in which the phase of a slow oscillation modulates the amplitude of a faster oscillation. PAC has been implicated in various cognitive and clinical conditions, including Parkinson's disease, epilepsy, and depression. Traditional methods for quantifying PAC compute a single summary statistic over an entire time series, limiting their ability to capture dynamic fluctuations. Growing interest in time-varying PAC has led to methods that rely on windowed time-series analysis, but these approaches struggle to track rapid changes in coupling at single-sample resolution. To address this limitation, we propose a novel dynamic mutual information measure of PAC, leveraging a state-space modeling approach based on a Gamma generalized linear model (GLM). By introducing a Gauss-Markov process on the regression weights, our method enables dynamic, interpretable PAC estimation at each time point. We validate our approach using synthetic phase-amplitude coupled signals with time-varying coupling coefficients and demonstrate superior performance in smoothly tracking PAC over time and distinguishing coupled from uncoupled states. Additionally, we apply our technique to sleep EEG data, successfully identifying PAC during sleep spindles, which may serve as a biomarker for neurophysiological conditions such as Alzheimer's disease. Our findings suggest that this dynamic PAC measure is a powerful tool for neuroscientific and clinical research, with potential applications in real-time brain-computer interfaces and neurostimulation protocols. Clinical relevance: This work demonstrates a new technique for quantifying time-varying electrophysiological coupling. This may allow for understanding transient neural dynamics in disease states and may help more robustly inform electrical stimulation protocols for patients with neurodegenerative disorders.
Additional Links: PMID-41336065
Citation:
@article {pmid41336065,
year = {2025},
author = {Perley, AS and Coleman, TP},
title = {A Dynamic Mutual Information Measure of Phase Amplitude Coupling.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11251622},
pmid = {41336065},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; Algorithms ; *Signal Processing, Computer-Assisted ; Linear Models ; Sleep/physiology ; },
abstract = {Phase-amplitude coupling (PAC) is a fundamental neural phenomenon in which the phase of a slow oscillation modulates the amplitude of a faster oscillation. PAC has been implicated in various cognitive and clinical conditions, including Parkinson's disease, epilepsy, and depression. Traditional methods for quantifying PAC compute a single summary statistic over an entire time series, limiting their ability to capture dynamic fluctuations. Growing interest in time-varying PAC has led to methods that rely on windowed time-series analysis, but these approaches struggle to track rapid changes in coupling at single-sample resolution. To address this limitation, we propose a novel dynamic mutual information measure of PAC, leveraging a state-space modeling approach based on a Gamma generalized linear model (GLM). By introducing a Gauss-Markov process on the regression weights, our method enables dynamic, interpretable PAC estimation at each time point. We validate our approach using synthetic phase-amplitude coupled signals with time-varying coupling coefficients and demonstrate superior performance in smoothly tracking PAC over time and distinguishing coupled from uncoupled states. Additionally, we apply our technique to sleep EEG data, successfully identifying PAC during sleep spindles, which may serve as a biomarker for neurophysiological conditions such as Alzheimer's disease. Our findings suggest that this dynamic PAC measure is a powerful tool for neuroscientific and clinical research, with potential applications in real-time brain-computer interfaces and neurostimulation protocols. Clinical relevance: This work demonstrates a new technique for quantifying time-varying electrophysiological coupling. This may allow for understanding transient neural dynamics in disease states and may help more robustly inform electrical stimulation protocols for patients with neurodegenerative disorders.},
}
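As background to this entry: the static coupling measures that the authors' dynamic state-space method generalizes can be sketched in a few lines. Below is a minimal mean-vector-length PAC estimate (Canolty-style), not the paper's Gamma-GLM approach; the band edges and synthetic signal are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 60)):
    """Static mean-vector-length PAC: |mean(A_fast * exp(i * phi_slow))|."""
    phi = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phi)))

# Synthetic test: gamma amplitude modulated by theta phase vs. no modulation.
fs = 500
t = np.arange(0, 10, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
coupled = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 45 * t)
uncoupled = theta + 0.3 * np.sin(2 * np.pi * 45 * t)
print(pac_mvl(coupled, fs) > pac_mvl(uncoupled, fs))  # coupled PAC is larger
```

A single number like this summarizes the whole time series, which is exactly the limitation the entry's dynamic, per-sample estimator is designed to overcome.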
RevDate: 2025-12-03
CmpDate: 2025-12-03
Signal extension with SeU-net for boosting the decoding performance of short-time SSVEP-based brain-computer interfaces.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (SSVEP-BCIs) have greatly benefited the lives of patients. However, existing SSVEP recognition methods perform poorly on short SSVEP signals: recognition accuracy depends heavily on signal length and increases as signals grow longer. From a novel data perspective, this study proposes a signal extension method, SeU-net, that improves the recognition performance of calibration-free methods on short-time SSVEP signals without requiring calibration data from the target subject. SeU-net employs an LSTM and contrastive learning to enhance feature extraction, converting signals from sample space to feature space and back to sample space to realize signal extension. Because SeU-net focuses only on signal extension in the temporal domain, with no subject-specific feature extraction, it achieves strong cross-subject signal extension performance. Extensive experiments demonstrate that SeU-net significantly enhances the decoding performance of calibration-free methods on short-time SSVEP signals. By enabling more accurate decoding with shorter SSVEP signals, SeU-net holds the potential to further advance the practical application of high-speed SSVEP-BCIs.
Additional Links: PMID-41335991
Citation:
@article {pmid41335991,
year = {2025},
author = {Li, H and Xu, G and Zhang, S and Xie, J and Han, C and Wu, Q and Zhang, S},
title = {Signal extension with SeU-net for boosting the decoding performance of short-time SSVEP-based brain-computer interfaces.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11253264},
pmid = {41335991},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Evoked Potentials, Visual/physiology ; *Signal Processing, Computer-Assisted ; Electroencephalography/methods ; Algorithms ; },
abstract = {Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (SSVEP-BCIs) have greatly benefited the lives of patients. However, existing SSVEP recognition methods perform poorly on short SSVEP signals: recognition accuracy depends heavily on signal length and increases as signals grow longer. From a novel data perspective, this study proposes a signal extension method, SeU-net, that improves the recognition performance of calibration-free methods on short-time SSVEP signals without requiring calibration data from the target subject. SeU-net employs an LSTM and contrastive learning to enhance feature extraction, converting signals from sample space to feature space and back to sample space to realize signal extension. Because SeU-net focuses only on signal extension in the temporal domain, with no subject-specific feature extraction, it achieves strong cross-subject signal extension performance. Extensive experiments demonstrate that SeU-net significantly enhances the decoding performance of calibration-free methods on short-time SSVEP signals. By enabling more accurate decoding with shorter SSVEP signals, SeU-net holds the potential to further advance the practical application of high-speed SSVEP-BCIs.},
}
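The calibration-free decoders whose performance SeU-net is said to boost are typically correlation-based; a minimal sketch of standard CCA frequency recognition (the common baseline, not SeU-net itself) follows. The sine/cosine reference design, harmonic count, and toy data are assumptions for illustration.

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between X (samples x channels) and Y (samples x refs)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_classify(eeg, fs, freqs, n_harmonics=2):
    """Pick the stimulus frequency whose sine/cosine reference set best matches the EEG."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        refs = np.column_stack([fn(2 * np.pi * f * h * t)
                                for h in range(1, n_harmonics + 1)
                                for fn in (np.sin, np.cos)])
        scores.append(cca_corr(eeg, refs))
    return freqs[int(np.argmax(scores))]

# Toy 8-channel, 1-second recording dominated by a 12 Hz SSVEP.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 1.0, 1 / fs)
eeg = (np.outer(np.sin(2 * np.pi * 12 * t), rng.uniform(0.5, 1, 8))
       + 0.3 * rng.standard_normal((len(t), 8)))
print(ssvep_classify(eeg, fs, [8.0, 10.0, 12.0, 15.0]))  # → 12.0
```

With short windows the reference correlations become noisy, which is precisely the regime SeU-net's temporal extension targets.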
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Novel Levant's Differentiator-Based Descriptor for EEG-Based Motor Intent Decoding.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Motor intent (MI)-based brain-computer interfaces (BCIs) have been extensively studied to improve the performance and clinical realization of assistive robots for motor recovery in stroke patients. However, their decoding performance remains low. This is largely attributable to the low spatial resolution and signal-to-noise ratio of electroencephalography (EEG), which hampers accurate deciphering of hand movements and reduces classification performance. We therefore developed a novel feature extraction technique that exploits Levant's differentiators to extract distinct patterns in EEG signals and employs symmetric positive definite (SPD) matrices to effectively leverage the spatial-temporal properties of the EEG signal. Results from nine post-stroke patients and fifteen normal subjects showed improved decoding accuracies of 99.16±0.64% and 99.30±0.69%, respectively, in classifying twenty-four hand motor intents, significantly outperforming existing related methods. The proposed technique thus has the potential to greatly enhance the reliability and effectiveness of EEG-based control systems for post-stroke rehabilitation. Clinical Relevance: The outcome of this study can lead to better control of rehabilitation robots and faster recovery for stroke patients.
Additional Links: PMID-41335988
Citation:
@article {pmid41335988,
year = {2025},
author = {Kulwa, F and Sarwatt, DS and Asogbon, MG and Huang, J and Khushaba, RN and Oyemakinde, TT and Li, G and Samuel, OW and Li, H and Li, Y},
title = {A Novel Levant's Differentiator-Based Descriptor for EEG-Based Motor Intent Decoding.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11253249},
pmid = {41335988},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Brain-Computer Interfaces ; Stroke/physiopathology ; Male ; Signal Processing, Computer-Assisted ; Stroke Rehabilitation ; Algorithms ; Movement ; Female ; },
abstract = {Motor intent (MI)-based brain-computer interfaces (BCIs) have been extensively studied to improve the performance and clinical realization of assistive robots for motor recovery in stroke patients. However, their decoding performance remains low. This is largely attributable to the low spatial resolution and signal-to-noise ratio of electroencephalography (EEG), which hampers accurate deciphering of hand movements and reduces classification performance. We therefore developed a novel feature extraction technique that exploits Levant's differentiators to extract distinct patterns in EEG signals and employs symmetric positive definite (SPD) matrices to effectively leverage the spatial-temporal properties of the EEG signal. Results from nine post-stroke patients and fifteen normal subjects showed improved decoding accuracies of 99.16±0.64% and 99.30±0.69%, respectively, in classifying twenty-four hand motor intents, significantly outperforming existing related methods. The proposed technique thus has the potential to greatly enhance the reliability and effectiveness of EEG-based control systems for post-stroke rehabilitation. Clinical Relevance: The outcome of this study can lead to better control of rehabilitation robots and faster recovery for stroke patients.},
}
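The SPD-matrix features mentioned in this abstract build on spatial covariance matrices; below is a minimal log-Euclidean nearest-mean sketch of that general idea (not the authors' Levant's-differentiator descriptor), with all data and parameters invented for illustration.

```python
import numpy as np
from scipy.linalg import logm

def spd_logvec(trial):
    """Map a (channels x samples) trial to the matrix log of its spatial covariance (SPD)."""
    c = np.cov(trial)
    c += 1e-6 * np.eye(c.shape[0])  # regularize to keep the matrix positive definite
    return logm(c).real

def nearest_mean_class(trial, class_means):
    """Assign the class whose mean log-covariance is closest (log-Euclidean distance)."""
    v = spd_logvec(trial)
    return int(np.argmin([np.linalg.norm(v - m) for m in class_means]))

# Toy data: two classes that differ only in the variance of channel 0.
rng = np.random.default_rng(1)
def make_trial(scale):
    x = rng.standard_normal((4, 200))
    x[0] *= scale
    return x

means = [np.mean([spd_logvec(make_trial(s)) for _ in range(20)], axis=0)
         for s in (1.0, 3.0)]
print(nearest_mean_class(make_trial(3.0), means))  # → 1
```

Working in the log domain linearizes the curved SPD manifold, which is why covariance-based descriptors like the one in this entry tend to be robust spatial-temporal features.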
RevDate: 2025-12-03
CmpDate: 2025-12-03
MI-CES: An explainable weak labelling approach to example selection for Motor Imagery BCI classification.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Motor Imagery (MI) Brain-Computer Interfaces (BCIs) can be used to control assistive devices such as wheelchairs. These systems require a training period during which the user and the machine learn and adapt to each other to achieve acceptable control accuracy. Previous studies have found that giving the user feedback about what the system thinks the user is thinking can amplify the effect of training, improving both the user's control accuracy and the system's classification accuracy. However, if this feedback is 'incorrect' because the classifier behind the BCI system has poor accuracy, the user may 'incorrectly' adapt to the feedback, providing the system with further poor examples of MI. In this paper, we propose MI-CES, an explainable 'example selection' approach based on the neurophysiological principle of MI. With two classification techniques, we achieved a statistically significant increase in classification accuracy across three datasets comprising both multi-participant and multi-session recordings.
Additional Links: PMID-41335965
Citation:
@article {pmid41335965,
year = {2025},
author = {Thomas, A and Cho, Y and Zhao, H and Carlson, T},
title = {MI-CES: An explainable weak labelling approach to example selection for Motor Imagery BCI classification.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253265},
pmid = {41335965},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Imagination/physiology ; Electroencephalography/methods ; Algorithms ; Movement/physiology ; },
abstract = {Motor Imagery (MI) Brain-Computer Interfaces (BCIs) can be used to control assistive devices such as wheelchairs. These systems require a training period during which the user and the machine learn and adapt to each other to achieve acceptable control accuracy. Previous studies have found that giving the user feedback about what the system thinks the user is thinking can amplify the effect of training, improving both the user's control accuracy and the system's classification accuracy. However, if this feedback is 'incorrect' because the classifier behind the BCI system has poor accuracy, the user may 'incorrectly' adapt to the feedback, providing the system with further poor examples of MI. In this paper, we propose MI-CES, an explainable 'example selection' approach based on the neurophysiological principle of MI. With two classification techniques, we achieved a statistically significant increase in classification accuracy across three datasets comprising both multi-participant and multi-session recordings.},
}
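MI-CES itself is not described here in enough detail to reproduce; the sketch below only illustrates the underlying neurophysiological principle (contralateral mu-band desynchronization during hand imagery) used as a weak-labelling criterion for example selection. The channel indices, the labelling convention, and the selection heuristic are all assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.signal import welch

def mu_power(epoch_ch, fs):
    """Mu-band (8-12 Hz) power of one channel via Welch's method."""
    f, p = welch(epoch_ch, fs=fs, nperseg=min(256, len(epoch_ch)))
    return p[(f >= 8) & (f <= 12)].sum()

def select_examples(epochs, labels, fs, c3=0, c4=1):
    """Keep epochs whose C3/C4 mu lateralization agrees with the cued hand.
    Assumed convention: label 0 = left hand (ERD over C4), label 1 = right hand (ERD over C3)."""
    keep = []
    for i, (ep, y) in enumerate(zip(epochs, labels)):
        lat = mu_power(ep[c3], fs) - mu_power(ep[c4], fs)  # >0 means ERD over C4 -> left hand
        if (lat > 0) == (y == 0):
            keep.append(i)
    return keep

# Toy epochs: the third one is deliberately mislabeled and should be rejected.
fs = 250
t = np.arange(0, 2, 1 / fs)
mu = np.sin(2 * np.pi * 10 * t)
left = np.stack([1.0 * mu, 0.2 * mu])    # attenuated mu over C4 -> left-hand imagery
right = np.stack([0.2 * mu, 1.0 * mu])
epochs, labels = [left, right, left], [0, 1, 1]
print(select_examples(epochs, labels, fs))  # → [0, 1]
```

Filtering training examples this way keeps physiologically implausible trials out of the classifier, which is the failure mode the abstract describes for incorrect feedback.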
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Deep Learning Framework for Multi-Source EEG Localization.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Electroencephalography (EEG) provides millisecond-scale resolution of neural activity but struggles to accurately localize multiple concurrent sources, especially in spatially close regions. Classical linear inverse methods, such as MNE, sLORETA, and dSPM, address the ill-posed inverse problem through regularization but often exhibit a "single-source bias", suppressing smaller generators. This paper introduces a deep learning framework designed to robustly identify multiple sources of activity from short EEG segments. Our approach leverages a realistic simulation pipeline that systematically generates EEG recordings from physiologically plausible, distributed current sources. We train a convolutional neural network (ConvNet) on thousands of such simulations, ensuring generalization by using a forward model distinct from that of classical solvers, thereby minimizing the risk of an "inverse crime". We evaluate our ConvNet against nine well-established inverse solvers (MNE, dSPM, sLORETA, eLORETA, LORETA, LAURA, and depth-weighted variants). Benchmarking across multiple synthetic test scenarios demonstrates that our method consistently outperforms traditional solvers, particularly in resolving closely spaced sources, while maintaining or improving accuracy for single-source cases. These results highlight the potential of deep learning to overcome biases in EEG source imaging, offering a more reliable approach for multi-source localization. Clinical relevance: By leveraging deep learning, our approach improves localization accuracy, particularly for closely spaced or deep brain sources, potentially enhancing presurgical planning, brain-computer interfaces, and real-time neurofeedback applications.
Additional Links: PMID-41335962
Citation:
@article {pmid41335962,
year = {2025},
author = {Buda, C and Gambosi, B and Toschi, N and Astolfi, L},
title = {A Deep Learning Framework for Multi-Source EEG Localization.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253252},
pmid = {41335962},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; *Deep Learning ; Humans ; Algorithms ; Signal Processing, Computer-Assisted ; Neural Networks, Computer ; Brain/physiology ; },
abstract = {Electroencephalography (EEG) provides millisecond-scale resolution of neural activity but struggles to accurately localize multiple concurrent sources, especially in spatially close regions. Classical linear inverse methods, such as MNE, sLORETA, and dSPM, address the ill-posed inverse problem through regularization but often exhibit a "single-source bias", suppressing smaller generators. This paper introduces a deep learning framework designed to robustly identify multiple sources of activity from short EEG segments. Our approach leverages a realistic simulation pipeline that systematically generates EEG recordings from physiologically plausible, distributed current sources. We train a convolutional neural network (ConvNet) on thousands of such simulations, ensuring generalization by using a forward model distinct from that of classical solvers, thereby minimizing the risk of an "inverse crime". We evaluate our ConvNet against nine well-established inverse solvers (MNE, dSPM, sLORETA, eLORETA, LORETA, LAURA, and depth-weighted variants). Benchmarking across multiple synthetic test scenarios demonstrates that our method consistently outperforms traditional solvers, particularly in resolving closely spaced sources, while maintaining or improving accuracy for single-source cases. These results highlight the potential of deep learning to overcome biases in EEG source imaging, offering a more reliable approach for multi-source localization. Clinical relevance: By leveraging deep learning, our approach improves localization accuracy, particularly for closely spaced or deep brain sources, potentially enhancing presurgical planning, brain-computer interfaces, and real-time neurofeedback applications.},
}
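The classical linear solvers this entry benchmarks against share a common Tikhonov-regularized minimum-norm core; a minimal sketch of that baseline (not the proposed ConvNet) follows, using a toy random lead field as an assumption.

```python
import numpy as np

def minimum_norm(L, y, lam=1e-2):
    """Tikhonov-regularized minimum-norm estimate: x = L^T (L L^T + lam*I)^-1 y."""
    n_ch = L.shape[0]
    return L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_ch), y)

# Toy problem: 8 sensors, 50 candidate sources, 2 of them active.
rng = np.random.default_rng(2)
L = rng.standard_normal((8, 50))          # toy lead field (sensors x sources)
x_true = np.zeros(50)
x_true[[10, 40]] = [1.0, -1.0]
y = L @ x_true
x_hat = minimum_norm(L, y, lam=1e-6)
print(np.linalg.norm(L @ x_hat - y) < 1e-3)  # the estimate reproduces the data
```

Because the estimate minimizes total source energy, it spreads activity and favors the dominant generator, which is the "single-source bias" the entry's learned solver aims to overcome.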
RevDate: 2025-12-03
CmpDate: 2025-12-03
Enhancing Cross-subject Auditory Attention Detection with Contrastive Learning for EEG Feature Refinement.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Electroencephalography (EEG)-based auditory attention detection (AAD) plays a crucial role in recent auditory brain-computer interface applications. However, the performance of AAD models in cross-subject tasks degrades significantly due to large differences in EEG features across subjects. To address this challenge, we propose a novel framework, AAD-ContrastNet, that incorporates contrastive learning to refine the temporal features of EEG and reduce the cross-subject variance of EEG features. AAD-ContrastNet consists of four main components: (a) an attention-based EEG encoder; (b) a contrastive-learning-based EEG encoder; (c) a feature refinement module; and (d) a classifier. t-SNE visualizations show that combining contrastive learning with cross-attention feature refinement significantly improves the generalization of the extracted EEG features. By comparison with state-of-the-art (SOTA) models (i.e., DenseNet-3D and DARNet), we validate the significant effect of AAD-ContrastNet in improving cross-subject decoding accuracy, highlighting its potential for enhancing the robustness and generalization of EEG-based AAD systems. Clinical Relevance: This study demonstrates the potential of contrastive learning in mitigating cross-subject performance degradation, providing a solid foundation for applying generalized auditory brain-computer interface systems.
Additional Links: PMID-41335905
Citation:
@article {pmid41335905,
year = {2025},
author = {Ding, Y and Wang, X and Chen, F},
title = {Enhancing Cross-subject Auditory Attention Detection with Contrastive Learning for EEG Feature Refinement.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11252602},
pmid = {41335905},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; Humans ; *Attention/physiology ; Brain-Computer Interfaces ; Algorithms ; Signal Processing, Computer-Assisted ; *Machine Learning ; },
abstract = {Electroencephalography (EEG)-based auditory attention detection (AAD) plays a crucial role in recent auditory brain-computer interface applications. However, the performance of AAD models in cross-subject tasks degrades significantly due to large differences in EEG features across subjects. To address this challenge, we propose a novel framework, AAD-ContrastNet, that incorporates contrastive learning to refine the temporal features of EEG and reduce the cross-subject variance of EEG features. AAD-ContrastNet consists of four main components: (a) an attention-based EEG encoder; (b) a contrastive-learning-based EEG encoder; (c) a feature refinement module; and (d) a classifier. t-SNE visualizations show that combining contrastive learning with cross-attention feature refinement significantly improves the generalization of the extracted EEG features. By comparison with state-of-the-art (SOTA) models (i.e., DenseNet-3D and DARNet), we validate the significant effect of AAD-ContrastNet in improving cross-subject decoding accuracy, highlighting its potential for enhancing the robustness and generalization of EEG-based AAD systems. Clinical Relevance: This study demonstrates the potential of contrastive learning in mitigating cross-subject performance degradation, providing a solid foundation for applying generalized auditory brain-computer interface systems.},
}
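The contrastive objective underlying frameworks like the one in this entry is commonly the NT-Xent loss; a minimal NumPy sketch follows. The batch construction, embedding sizes, and temperature are illustrative assumptions, not details from the paper.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over N positive pairs (z1[i], z2[i]); every other
    embedding in the 2N batch serves as a negative."""
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity space
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # each row's positive index
    logp = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -logp.mean()

# Aligned pairs (two noisy views of the same trials) should score a lower loss
# than mismatched pairs.
rng = np.random.default_rng(3)
z1 = rng.standard_normal((16, 8))
z2 = z1 + 0.05 * rng.standard_normal((16, 8))
print(nt_xent(z1, z2) < nt_xent(z1, np.roll(z2, 1, axis=0)))  # → True
```

Minimizing such a loss pulls embeddings of the same trial together across views or subjects, which is how contrastive refinement reduces cross-subject feature variance.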
RevDate: 2025-12-03
CmpDate: 2025-12-03
Beyond Frequency: Leveraging Spatial Features in SSVEP-Based Brain-Computer Interfaces with Visual Animations.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Current research on steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) predominantly focuses on exploiting the frequency- and phase-locking characteristics of the SSVEP for encoding. In this study, we propose an innovative paradigm wherein the SSVEP serves as a marker and is integrated with different types of motion animation to identify the distinct neural processing pathways associated with each. This approach enables classification in SSVEP-based BCIs without relying on frequency features. We designed six distinct animations corresponding to six behaviors commonly observed in daily life. Each animation was tagged with a uniform 6 Hz stimulus frequency, forming a six-target classification task. Offline testing was conducted with 10 participants. Despite identical frequency components, significant differences in the spatial distribution of responses were observed across animations, likely reflecting their behavioral variations. Classification analysis demonstrated an accuracy of 0.93 within a 6-second window, validating the practical feasibility of this approach. This paradigm offers a novel direction for the advancement of SSVEP-based BCIs, potentially enabling the integration of multi-sensory information.
Additional Links: PMID-41335889
Citation:
@article {pmid41335889,
year = {2025},
author = {Sun, Y and Zhang, Z and Qi, Q and Li, X and Sun, J and Zhang, K and Zhuang, J and Chen, X and Gao, X},
title = {Beyond Frequency: Leveraging Spatial Features in SSVEP-Based Brain-Computer Interfaces with Visual Animations.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11254745},
pmid = {41335889},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Evoked Potentials, Visual/physiology ; Male ; Female ; Electroencephalography/methods ; Adult ; Young Adult ; },
abstract = {Current research on steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) predominantly focuses on exploiting the frequency- and phase-locking characteristics of the SSVEP for encoding. In this study, we propose an innovative paradigm wherein the SSVEP serves as a marker and is integrated with different types of motion animation to identify the distinct neural processing pathways associated with each. This approach enables classification in SSVEP-based BCIs without relying on frequency features. We designed six distinct animations corresponding to six behaviors commonly observed in daily life. Each animation was tagged with a uniform 6 Hz stimulus frequency, forming a six-target classification task. Offline testing was conducted with 10 participants. Despite identical frequency components, significant differences in the spatial distribution of responses were observed across animations, likely reflecting their behavioral variations. Classification analysis demonstrated an accuracy of 0.93 within a 6-second window, validating the practical feasibility of this approach. This paradigm offers a novel direction for the advancement of SSVEP-based BCIs, potentially enabling the integration of multi-sensory information.},
}
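Decoding from spatial rather than frequency features, as this paradigm proposes, can be illustrated with simple spatial-template matching: every class shares the same stimulus frequency, so only the topography differs. The sketch below is an invented toy, not the authors' classifier; channel topographies and noise levels are assumptions.

```python
import numpy as np

def spatial_pattern(trial):
    """Channel-wise RMS amplitude: a frequency-free spatial signature of the response."""
    p = np.sqrt((trial ** 2).mean(axis=1))
    return p / np.linalg.norm(p)

def classify_by_template(trial, templates):
    """Nearest spatial template by inner product (all classes share one tag frequency)."""
    v = spatial_pattern(trial)
    return int(np.argmax([v @ t for t in templates]))

rng = np.random.default_rng(4)
t = np.arange(0, 1, 1 / 250)
carrier = np.sin(2 * np.pi * 6 * t)                          # same 6 Hz tag for every class
topo = [np.array([1.0, 0.2, 0.2, 1.0]),                      # two distinct topographies
        np.array([0.2, 1.0, 1.0, 0.2])]

def make_trial(c):
    return np.outer(topo[c], carrier) + 0.1 * rng.standard_normal((4, len(t)))

templates = [spatial_pattern(make_trial(c)) for c in (0, 1)]
print(classify_by_template(make_trial(1), templates))  # → 1
```

Despite identical spectra, the trials remain separable through their channel-space distributions, mirroring the entry's central observation.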
RevDate: 2025-12-03
CmpDate: 2025-12-03
EEG-Translator: A Cross-Modality Framework for Subject-Specific EEG and Voice Reconstruction from Imagined Speech.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Non-invasive brain-computer interfaces (BCIs) offer the potential to enable communication for individuals with speech impairments by decoding speech-related electroencephalography (EEG) signals. Beyond domain-specific speech EEG decoding, generative approaches that enable cross-domain reconstruction are needed to enhance overall system performance. Here, we propose a cross-modal EEG translation framework that reconstructs overt-speech EEG from imagined-speech EEG for subject-specific speech synthesis. Our approach integrates a diffusion-based model with GAN training to enhance cross-domain EEG reconstruction, preserving both EEG class information and time-frequency properties. In classification tasks, the reconstructed EEG improves class decoding accuracy by 6.2% over the original imagined EEG. Additionally, reconstruction was trained not only on the EEG signal itself but also on spectrogram-based features, leveraging a fusion of spatial and spectral losses to preserve EEG properties. Beyond reconstruction, category-wise analysis across a multi-speech-paradigm dataset reveals variations in decoding performance, offering linguistic insights crucial for advancing speech BCI systems. Our findings highlight the potential of diffusion-driven EEG translation in speech BCIs, emphasizing the importance of integrating deep learning methodologies with linguistic insights for improved neural signal reconstruction and interpretation.
Additional Links: PMID-41335877
Citation:
@article {pmid41335877,
year = {2025},
author = {Lee, SH and Lee, SH and Lee, SW},
title = {EEG-Translator: A Cross-Modality Framework for Subject-Specific EEG and Voice Reconstruction from Imagined Speech.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254826},
pmid = {41335877},
issn = {2694-0604},
mesh = {Humans ; *Electroencephalography/methods ; *Brain-Computer Interfaces ; *Speech/physiology ; *Voice/physiology ; Signal Processing, Computer-Assisted ; *Imagination/physiology ; Algorithms ; },
abstract = {Non-invasive brain-computer interfaces (BCIs) offer the potential to enable communication for individuals with speech impairments by decoding speech-related electroencephalography (EEG) signals. Beyond domain-specific speech EEG decoding, generative approaches that enable cross-domain reconstruction are needed to enhance overall system performance. Here, we propose a cross-modal EEG translation framework that reconstructs overt-speech EEG from imagined-speech EEG for subject-specific speech synthesis. Our approach integrates a diffusion-based model with GAN training to enhance cross-domain EEG reconstruction, preserving both EEG class information and time-frequency properties. In classification tasks, the reconstructed EEG improves class decoding accuracy by 6.2% over the original imagined EEG. Additionally, reconstruction was trained not only on the EEG signal itself but also on spectrogram-based features, leveraging a fusion of spatial and spectral losses to preserve EEG properties. Beyond reconstruction, category-wise analysis across a multi-speech-paradigm dataset reveals variations in decoding performance, offering linguistic insights crucial for advancing speech BCI systems. Our findings highlight the potential of diffusion-driven EEG translation in speech BCIs, emphasizing the importance of integrating deep learning methodologies with linguistic insights for improved neural signal reconstruction and interpretation.},
}
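The "fusion of spatial and spectral losses" mentioned in this abstract can be illustrated generically as a weighted sum of a sample-domain L2 term and a spectrogram-magnitude L2 term. The sketch below is a stand-in under stated assumptions (window length, weighting, toy signals), not the paper's actual loss.

```python
import numpy as np
from scipy.signal import stft

def spectral_l2(x, y, fs):
    """L2 distance between magnitude spectrograms of two signals."""
    _, _, Zx = stft(x, fs=fs, nperseg=64)
    _, _, Zy = stft(y, fs=fs, nperseg=64)
    return np.linalg.norm(np.abs(Zx) - np.abs(Zy))

def fused_loss(x, y, fs, alpha=0.5):
    """Weighted fusion of a sample-domain and a spectrogram-domain reconstruction loss."""
    return alpha * np.linalg.norm(x - y) + (1 - alpha) * spectral_l2(x, y, fs)

fs = 250
t = np.arange(0, 2, 1 / fs)
target = np.sin(2 * np.pi * 10 * t)
good = target + 0.05 * np.sin(2 * np.pi * 10 * t)   # close reconstruction
bad = np.sin(2 * np.pi * 25 * t)                    # wrong frequency content
print(fused_loss(good, target, fs) < fused_loss(bad, target, fs))  # → True
```

Penalizing the spectrogram as well as the raw samples keeps the reconstruction faithful in the time-frequency domain, the property the entry emphasizes for EEG.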
RevDate: 2025-12-03
CmpDate: 2025-12-03
Flexible-Rigid Bonding of Silicon Based Neural Interface for Deep Brain LFP Recording.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Microfabricated silicon neural probes have become the dominant technology in the field of implantable brain-computer interfaces. Mechanical bonding, electroplating, template printing, flip-chip bonding, and welding are prevalent electrode-packaging methods; however, these techniques often involve complex processes, elevated temperatures, or increased electrode thickness. We propose a novel flexible-rigid bonding method for silicon-based neural interfaces that markedly reduces the bonding volume compared with a traditional board-to-board connector. It simplifies the assembly of silicon probes, increases electrode integration density, and facilitates joining the probe to a flexible cable. This approach enables the flexible implantation of silicon electrodes in deep brain regions for recording neural signals.
Additional Links: PMID-41335876
Citation:
@article {pmid41335876,
year = {2025},
author = {Wang, A and Zhang, Y and Zhan, G and Zhang, L and Kang, X},
title = {Flexible-Rigid Bonding of Silicon Based Neural Interface for Deep Brain LFP Recording.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11254814},
pmid = {41335876},
issn = {2694-0604},
mesh = {*Silicon/chemistry ; *Brain-Computer Interfaces ; *Brain/physiology ; Electrodes, Implanted ; Humans ; Equipment Design ; Animals ; },
abstract = {Microfabricated silicon neural probes have become the dominant technology in the field of implantable brain-computer interfaces. Mechanical bonding, electroplating, template printing, flip-chip bonding, and welding are prevalent electrode-packaging methods; however, these techniques often involve complex processes, elevated temperatures, or increased electrode thickness. We propose a novel flexible-rigid bonding method for silicon-based neural interfaces that markedly reduces the bonding volume compared with a traditional board-to-board connector. It simplifies the assembly of silicon probes, increases electrode integration density, and facilitates joining the probe to a flexible cable. This approach enables the flexible implantation of silicon electrodes in deep brain regions for recording neural signals.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Novel Approach to Improve SSVEP-BCI Performance Through Neurofeedback Training.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Brain-computer interface (BCI) technology, which translates neural activity into commands for external devices, holds significant promise for clinical rehabilitation and assisted movement for individuals with motor disabilities. Among the various BCI paradigms, the steady-state visual evoked potential (SSVEP) based BCI has garnered considerable attention for its relatively stable, high-speed communication. However, a notable portion of the population, a phenomenon referred to as BCI illiteracy, struggles to control BCI systems effectively because they cannot generate or modulate the neural patterns required for interaction. To address this issue, we proposed a user-centered approach using neurofeedback training (NFT) to improve individuals' performance on SSVEP-BCI. After a five-day training period, significant improvements in SSVEP-BCI performance were observed only in the training group, not in the untrained control group. Notably, some subjects initially classified as BCI-illiterate also gained effective control of the BCI system after training. Further analysis revealed that the improvement in SSVEP-BCI performance was closely linked to increased power and inter-trial phase coherence of the SSVEP response, indicating that NFT successfully strengthened the users' task-related neural responses. These findings highlight the potential of NFT as a user-centered intervention to improve BCI control performance, offering a promising pathway to address BCI illiteracy and promote the broader application of BCI systems.
Clinical Relevance - This study proposes an effective approach to enhancing the controllability of SSVEP-BCI systems, addressing the critical issue of individual control limitations. The developed method demonstrates significant clinical potential for promoting SSVEP-BCI applications, particularly in facilitating communication and device control for patients with severe motor impairments, such as amyotrophic lateral sclerosis (ALS) and locked-in syndrome (LIS).
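The inter-trial phase coherence (ITPC) measure linked above to the training effect is the magnitude of the trial-averaged unit phasor at the stimulation frequency. A minimal sketch, assuming phases have already been extracted per trial (e.g. by a Fourier or wavelet transform; the function name and toy data are illustrative, not from the paper):

```python
import cmath
import math

def itpc(phases):
    """Inter-trial phase coherence: magnitude of the mean unit phasor
    across trials, for one channel/frequency. 1.0 = perfectly
    phase-locked, near 0 = random phases."""
    mean_phasor = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean_phasor)

# Perfectly phase-locked trials -> ITPC of 1.0
locked = [0.7] * 20
# Phases spread uniformly around the circle -> ITPC near 0
spread = [2 * math.pi * k / 20 for k in range(20)]

print(round(itpc(locked), 3))  # 1.0
print(itpc(spread) < 1e-9)     # True
```

An NFT effect like the one reported would show up as ITPC rising toward 1 across training sessions.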
Additional Links: PMID-41335863
@article {pmid41335863,
year = {2025},
author = {Li, M and Yao, Y and Dong, B and Wang, K and Yu, H and Xu, M and Ming, D},
title = {A Novel Approach to Improve SSVEP-BCI Performance Through Neurofeedback Training.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11254803},
pmid = {41335863},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; *Neurofeedback/methods ; *Evoked Potentials, Visual/physiology ; Male ; Adult ; Electroencephalography ; Female ; },
abstract = {Brain-Computer interface (BCI), which translates neural activities into commands for external devices, holds significant promise for clinical rehabilitation and assisted movement for individuals with motor disabilities. Among various BCI paradigms, the steady-state visual evoked potential (SSVEP) based BCI garnered considerable attention due to its relatively stable and high-speed communication capabilities. However, a notable portion of the population, referred to as BCI illiteracy, struggles to effectively control BCI systems due to their inability to generate or modulate the neural patterns required for interaction. To address this issue, we proposed a user-centered approach using neurofeedback training (NFT) to improve individual's performance on SSVEP-BCI. As a result, after a five-day training period, significant improvements in SSVEP-BCI performance were only observed in the training group rather than the control group without training. Notably, some subjects initially determined as BCI-illiterate also gained effective control of the BCI system after training. Further analysis revealed that the improvement of SSVEP-BCI performance had a close link with increased power and inter-trial phase coherence of the SSVEP response, indicating that NFT successfully strengthened the user's task-related neural responses. These findings highlight the potential of NFT as a user-centered intervention to improve BCI control performance, offering a promising pathway to address BCI illiteracy and promote the broader application of BCI systems.Clinical Relevance- This study proposes an effective approach to enhancing the controllability of SSVEP-BCI systems, addressing the critical issue of individual control limitations. 
The developed method demonstrates significant clinical potential for promoting SSVEP-BCI applications, particularly in facilitating communication and device control for patients with severe motor impairments, such as amyotrophic lateral sclerosis (ALS) and locked-in syndrome (LIS).},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Empowering Accessibility: Human-Centered Approach to a BCI Home Control for Impaired People.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Brain-Computer Interfaces (BCIs) have shown significant potential for individuals with motor impairments, either by improving physiotherapy treatments or by enabling them to perform simple tasks autonomously. However, much of this progress remains confined to controlled laboratory environments. This study aims to develop a BCI-controlled interface for real-life scenarios, tailored to allow individuals with Locked-In Syndrome (LIS) to interact with their home environment. To ensure system usability, a Human-Centered Design (HCD) approach was adopted, prioritizing end-user needs. The interface control system was tested using a BITalino for electroencephalogram (EEG) acquisition. Preliminary results demonstrated that professionals recognize the system's potential, highlighting the importance of real-time feedback and design simplicity in minimizing user fatigue and improving control accuracy.
Clinical Relevance - This interdisciplinary methodology bridges the gap between assistive technologies and user needs, promoting autonomy and communication with a BCI-controlled interface for real home interaction.
Additional Links: PMID-41335841
@article {pmid41335841,
year = {2025},
author = {Ramos, J and Silva, S and Marques, B and Pais-Vieira, M and Stevenson, A and Bras, S},
title = {Empowering Accessibility: Human-Centered Approach to a BCI Home Control for Impaired People.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11253641},
pmid = {41335841},
issn = {2694-0604},
mesh = {Humans ; *Brain-Computer Interfaces ; Electroencephalography ; User-Computer Interface ; *Locked-In Syndrome/physiopathology/rehabilitation ; Self-Help Devices ; Male ; },
abstract = {Brain-Computer Interfaces (BCIs) have shown significant potential for individuals with motor impairments, either by improving physiotherapy treatments or by enabling to perform simple tasks, autonomously. However, much of this progress remains confined to controlled laboratory environments. This study aims to develop a BCI-controlled interface, for real-life scenario, tailored to allow individuals with Locked-In Syndrome (LIS) to interact with their home environment. To ensure system usability, a Human-Centered Design (HCD) approach was adopted prioritizing end-user needs. The interface control system was tested using a BITalino for Electroencephalogram (EEG) acquisition. Preliminary results demonstrated that professionals recognize the system's potential, highlighting the importance of real-time feedback, and design simplicity features to minimize user fatigue and improve control accuracy.Clinical Relevance-This interdisciplinary methodology bridges the gap between assistive technologies and the user needs, promoting autonomy and communication with a BCI-controlled interface for real home interaction.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Boosting Spatial Properties of Single-Flicker SSVEP via Laplacian Electrodes.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-4.
Spatially-encoded steady-state visual evoked potentials (SSVEP) acquired by electroencephalography (EEG) are extensively utilized in brain-computer interface and neuroscience research. However, EEG suffers from low spatial resolution due to volume conduction effects. To tackle this problem, this study developed a bipolar concentric ring electrode (CRE) for collecting high-resolution Laplacian EEG (LEEG), validated through a tank simulation experiment and a human experiment. The tank simulation confirmed its high spatial resolution: LEEG acquired by the CRE achieved 2.35 times greater spatial attenuation than EEG. The human experiment used a single-flicker SSVEP paradigm with stimuli positioned at different visual field orientations. The results revealed that LEEG had lower inter-channel similarity than EEG, with average coefficients of 0.63 for EEG and 0.46 for LEEG (p<0.01). Topographical analysis further demonstrated that the CRE sharpened the spatial features of spatially-encoded SSVEPs and revealed a clear visual hemifield dominance phenomenon. This study effectively enhances the spatial properties of SSVEP and holds promise for advancing high-resolution LEEG.
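The concentric ring electrode above computes a Laplacian in hardware; a common software analogue is the Hjorth Laplacian, the channel's value minus the mean of its nearest neighbors. A minimal one-time-point sketch (the 10-20 channel names and microvolt values are illustrative, not from the study):

```python
def hjorth_laplacian(samples, center, neighbors):
    """Approximate the surface Laplacian at one electrode:
    the center channel's sample minus the mean of its nearest
    neighbors. `samples` maps channel name -> value at one instant."""
    neigh_mean = sum(samples[ch] for ch in neighbors) / len(neighbors)
    return samples[center] - neigh_mean

# Toy sample: a local peak at Oz against its occipital neighbors.
t = {"Oz": 4.0, "O1": 1.0, "O2": 1.0, "POz": 2.0}
lap = hjorth_laplacian(t, "Oz", ["O1", "O2", "POz"])
print(lap)  # 4.0 - (1+1+2)/3, i.e. about 2.667
```

Applying this per sample sharpens local activity, which is the same spatial-attenuation effect the CRE achieves in analog.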
Additional Links: PMID-41335820
@article {pmid41335820,
year = {2025},
author = {Luo, R and Zheng, C and Ding, R and Shi, T and Li, D and Xiao, X and Huang, Y and Xu, M and Ming, D},
title = {Boosting Spatial Properties of Single-Flicker SSVEP via Laplacian Electrodes.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-4},
doi = {10.1109/EMBC58623.2025.11253662},
pmid = {41335820},
issn = {2694-0604},
mesh = {Humans ; *Evoked Potentials, Visual/physiology ; *Electroencephalography/instrumentation/methods ; Electrodes ; Male ; Adult ; Female ; Brain-Computer Interfaces ; Photic Stimulation ; },
abstract = {Spatially-encoded steady-state visual evoked potentials (SSVEP) acquired by electroencephalography (EEG) are extensively utilized in brain-computer interface and neuroscience research. However, EEG suffers from low spatial resolution due to volume conduction effects. To tackle this problem, this study developed a bipolar concentric ring electrode (CRE) for collecting high-resolution Laplacian EEG (LEEG), which was validated through a tank simulation experiment and a human experiment. The tank simulation experiment confirmed its high spatial resolution, and the results showed that LEEG acquired by CRE achieved 2.35 times greater spatial attenuation than EEG. Meanwhile, the human experiment designed a single-flicker SSVEP paradigm with stimuli positioned at different visual field orientations. The results revealed that LEEG had lower inter-channel similarity than EEG, with average coefficients of 0.63 for EEG and 0.46 for LEEG (p<0.01). Topographical analysis further demonstrated that CRE sharpened the spatial features of spatially-encoded SSVEPs, and indicated a clear visual hemifield dominance phenomenon. This study effectively enhances the spatial properties of SSVEP and holds promise for advancing high-resolution LEEG.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Auditory Steady-State Responses and the Effects of Interaural Decoherence and Presence of Vision.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
The Auditory Steady-State Response (ASSR) is a periodic neural response used to detect speech and hearing loss, and it is also used as a Brain-Computer Interface paradigm. Our paper identifies two key factors that impact the quality and consistency of the ASSR. The first is interaural decoherence: mismatches in the timing and intensity of sounds arriving at the two ears when loudspeakers are used in reverberant environments. The second is the impact of vision on modulating auditory perception and spatial attention, which could potentially influence the neural synchronisation of the response. To demonstrate this, we conducted an experiment on 26 healthy participants examining the effects of interaural decoherence, by comparing the frequency responses between speakers and earphones, and of the presence of vision, by comparing blindfolded and non-blindfolded conditions, on the ASSR. This study demonstrates that earphones elicit more consistent and reliable ASSRs than speakers, emphasising the detrimental effects of interaural decoherence from speaker-based sound delivery. Furthermore, we found that the response is more biased to one side in the absence of vision than in its presence. This study highlights the importance of using rooms with anechoic properties or low reverberation when using speakers, to ensure the consistency and clarity of the response. Future ASSR paradigms should also consider having participants fixate on a target to elicit less bias in the ASSR and more accurate spatial features.
Additional Links: PMID-41335778
@article {pmid41335778,
year = {2025},
author = {Nguyen, MTD and Zhu, HY and Burnham, M and Sun, H and Zhu, Q and Nguyen, V and Brown, S and Wu, E and Jin, C and Lin, CT},
title = {Auditory Steady-State Responses and the Effects of Interaural Decoherence and Presence of Vision.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254438},
pmid = {41335778},
issn = {2694-0604},
mesh = {Humans ; Male ; Female ; Adult ; Young Adult ; *Vision, Ocular/physiology ; *Auditory Perception/physiology ; Acoustic Stimulation ; *Evoked Potentials, Auditory/physiology ; Electroencephalography/methods ; },
abstract = {The Auditory Steady-State Response (ASSR) is a periodic neural response used to detect speech and hearing loss, and it is also used as a Brain-Computer Interface paradigm. Our paper identifies two key factors that impact the quality and consistency of the ASSR. First is the interaural decoherence, the timing and intensity of sounds arriving at two ears produced by speakers in reverberant environments. Second is the impact of vision on modulating auditory perception and spatial attention, which could potentially influence the neural synchronisation of the response. To demonstrate this, we conducted an experiment on 26 healthy participants to examine the effects of interaural decoherence, by comparing the frequency responses between speakers and earphones, and the presence of vision, by comparing being blindfolded and non-blindfolded, on the ASSR. This study demonstrates that earphones elicit more consistent and reliable ASSRs compared to speakers, emphasising the detrimental effects of interaural decoherence from speaker-based sound delivery on ASSRs. Furthermore, we found that the response is more biased to one side in the absence of vision compared to the presence of vision. This study highlights the importance of using rooms with anechoic properties or less reverberation when using speakers to ensure the consistency and clarity of the response. Future ASSR paradigms should also consider fixating on a target to elicit less bias in ASSR and more accurate spatial features.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Intended and Non-Volitional Knee Joint Movements Elicit Distinct Functional Brain Networks.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-6.
Motor execution induces significant alterations in the dynamics of electroencephalography (EEG) signals, which are crucial for assessing rehabilitation, brain plasticity, and brain-computer interface (BCI) applications. While traditional analyses have primarily focused on power spectral changes, recent advancements incorporate non-linear indices to uncover previously undetected characteristics of brain dynamics. Network analysis provides a powerful framework to examine the structural organization and communication within complex systems composed of interconnected neural units. This study investigates the structural properties of functional networks formed during both active and resting states under different knee joint flexion tasks. These movements were performed under three physical demand conditions, including an assisted, non-volitional movement. Functional networks were constructed from EEG recorded over 16 electrodes for the μ, β, and γ frequency bands, and key network metrics were estimated, including input and output node degree centrality, clustering coefficient, and betweenness centrality. Results indicate that motor execution leads to a reduction in overall network connectivity while enhancing communication efficiency. Additionally, networks in the γ and μ bands were more involved in voluntary movement, whereas the β band played a predominant role in assisted movements. The spatial distribution of electrodes contributing to these networks differed between voluntary and assisted conditions, suggesting distinct underlying neural mechanisms rather than a simple linear modulation of connectivity.
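The input/output degree-centrality metrics named above are straightforward to compute from an edge list. A minimal sketch on a toy directed network (the channel names and edges are illustrative, not from the study):

```python
def degree_centrality(edges, nodes):
    """In- and out-degree centrality for a directed graph given as
    (src, dst) pairs, normalized by the maximum possible degree n-1."""
    n = len(nodes)
    indeg = {v: 0 for v in nodes}
    outdeg = {v: 0 for v in nodes}
    for src, dst in edges:
        outdeg[src] += 1
        indeg[dst] += 1
    norm = n - 1
    return ({v: d / norm for v, d in indeg.items()},
            {v: d / norm for v, d in outdeg.items()})

# Toy 4-node "functional network" over motor/parietal channels.
nodes = ["C3", "C4", "Cz", "Pz"]
edges = [("C3", "Cz"), ("C4", "Cz"), ("Pz", "Cz"), ("Cz", "Pz")]
indeg, outdeg = degree_centrality(edges, nodes)
print(indeg["Cz"])   # 1.0 -- Cz receives from all three other nodes
print(outdeg["Cz"])  # 1/3 -- Cz sends to one of three possible targets
```

Clustering coefficient and betweenness centrality follow the same pattern but need path or triangle counts; a library such as networkx provides all three directly.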
Additional Links: PMID-41335768
@article {pmid41335768,
year = {2025},
author = {Morales-Magallon, F and Bojorges-Valdez, E},
title = {Intended and Non-Volitional Knee Joint Movements Elicit Distinct Functional Brain Networks.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-6},
doi = {10.1109/EMBC58623.2025.11254479},
pmid = {41335768},
issn = {2694-0604},
mesh = {Humans ; *Knee Joint/physiology ; Movement/physiology ; Electroencephalography/methods ; *Brain/physiology ; Male ; Adult ; Brain-Computer Interfaces ; *Nerve Net/physiology ; Female ; },
abstract = {Motor execution induces significant alterations in the dynamics of electroencephalography (EEG) signals, which are crucial for assessing rehabilitation, brain plasticity, and brain-computer interface (BCI) applications. While traditional analyses have primarily focused on power spectral changes, recent advancements incorporate non-linear indices to uncover previously undetected characteristics of brain dynamics.Network analysis provides a powerful framework to examine the structural organization and communication within complex systems composed of interconnected neural units. This study investigates the structural properties functional networks formed during both active and resting states under different knee joint flexion tasks. These movements were performed under three physical demand conditions, including an assisted, non-volitional movement.Functional networks were constructed from EEG analysis over 16 electrodes for the μ, β, and γ frequency bands, and key network metrics were estimated, including input and output node degree centrality, clustering coefficient, and betweenness centrality. Results indicate that motor execution leads to a reduction in overall network connectivity while enhancing communication efficiency. Additionally, networks in the γ and μ bands were more involved in voluntary movement, whereas the β band played a predominant role in assisted movements. The spatial distribution of electrodes contributing to these networks differed between voluntary and assisted conditions, suggesting distinct underlying neural mechanisms rather than a simple linear modulation of connectivity.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
DC-FFNet: Dual Channel Feature Fusion Network for Real-Time Asynchronous Signal Analysis.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Steady-state visual evoked potentials (SSVEP) are widely used in brain-computer interface (BCI) systems due to their high accuracy and fast response, and are commonly used for the control of a variety of external devices. However, existing SSVEP signal classification methods still face insufficient recognition accuracy and poor real-time performance in complex dynamic scenes. This study therefore proposes a new SSVEP classification model, the Dual Channel Feature Fusion Network (DC-FFNet), and constructs a real-time control framework by combining it with an asynchronous control mechanism. DC-FFNet is a novel model for SSVEP signal classification based on a dual-channel architecture. It incorporates a multi-head self-attention mechanism to capture global features, enhance local features, and fuse multimodal information, significantly improving classification accuracy. The classification accuracy of DC-FFNet reaches 91.80% on the SSVEP_SANDIEGO dataset and 90.93% on a self-recorded dataset, both exceeding existing models. In addition, the real-time framework incorporating an asynchronous control mechanism effectively reduces the response time and improves the information transfer rate of the system (up to 128.66 bits/min). This research is expected to provide an efficient and flexible SSVEP signal processing scheme for multi-device asynchronous collaborative control systems assisting people with disabilities, balancing performance and real-time responsiveness, which is of great significance for BCI technology.
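The 128.66 bits/min figure is an information transfer rate (ITR). The standard Wolpaw formula is the usual convention for reporting BCI throughput; the sketch below assumes that convention (the paper's exact computation is not given here):

```python
import math

def wolpaw_itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Wolpaw information transfer rate for an N-target BCI.
    `accuracy` is the probability P of a correct selection; wrong
    selections are assumed uniform over the remaining N-1 targets."""
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)          # perfect accuracy: log2(N) bits/selection
    elif p <= 1.0 / n:
        bits = 0.0                   # at or below chance: no information
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / seconds_per_selection

# 8 targets, perfect accuracy, one selection every 2 s -> 3 bits x 30/min
print(wolpaw_itr_bits_per_min(8, 1.0, 2.0))  # 90.0
```

Shorter selection windows raise ITR only while accuracy holds up, which is the trade-off an asynchronous control mechanism is meant to manage.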
Additional Links: PMID-41335749
@article {pmid41335749,
year = {2025},
author = {Sun, Y and You, Z and Sun, D and Huang, Y and Wu, Q and Pan, J},
title = {DC-FFNet: Dual Channel Feature Fusion Network for Real-Time Asynchronous Signal Analysis.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254486},
pmid = {41335749},
issn = {2694-0604},
mesh = {Humans ; *Evoked Potentials, Visual/physiology ; Brain-Computer Interfaces ; *Signal Processing, Computer-Assisted ; *Electroencephalography/methods ; Algorithms ; Computer Systems ; },
abstract = {Steady-state visual evoked potentials (SSVEP) are widely used in brain-computer interface (BCI) systems due to their high accuracy and fast response performance and are commonly used for the control of a variety of external devices. However, existing SSVEP signal classification methods still face the problems of insufficient recognition accuracy and poor real-time performance in complex dynamic scenes. Therefore, this study proposes a new SSVEP signal classification model Dual Channel Feature Fusion Network (DC-FFNet), and constructs a real-time control framework by combining it with an asynchronous control mechanism. DC-FFNet is a novel model for SSVEP signal classification based on a dual channel architecture. It incorporates a multi-head self-attention mechanism to capture global features, enhance local features, and fuse multimodal information, significantly improving classification accuracy. The classification accuracy of DC-FFNet reaches 91.80% on the SSVEP_SANDIEGO Dataset and 90.93% on the Self-recorded Dataset, which both exceed the existing models. In addition, the real-time framework that incorporates an asynchronous control mechanism effectively reduces the response time and improves the information transfer rate of the system (up to 128.66 bits/min). This research is expected to provide an efficient and flexible SSVEP signal processing scheme for multi-device asynchronous collaborative control systems assisting people with disabilities, balancing performance and real-time, which is of great significance for BCI technology.},
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
Word-specific properties affect classification performance in Brain Computer Interfaces for decoding imagined speech from EEG.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
Decoding imagined speech from brain signals has become one of the most significant fields for BCI applications. One current challenge is insufficient classification performance for real-world applications. In this study, we investigate for the first time the effect of word-specific properties known to modulate brain signals on classification performance. We chose 16 word prompts that vary in age of acquisition (AoA) and word frequency, two word-specific properties known to modulate speech processing, and investigated their classification performance for speech imagery (SI) trials compared to the idle state using a random forest classifier and 10-fold cross-validation. We found highly significant effects of AoA, word frequency, and their interaction on classification performance. Our results provide evidence that the word frequency and AoA of word prompts used in SI paradigms significantly influence classification accuracy in a BCI application when SI trials are compared to the idle state.
Relevance - Choosing word prompts with optimal properties can significantly improve classification performance in BCI applications.
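The evaluation above pairs a random forest with 10-fold cross-validation. Leaving the classifier aside, a minimal sketch of the fold-splitting step (contiguous, unshuffled folds for clarity; a practical run would shuffle or stratify, e.g. with scikit-learn):

```python
def k_fold_indices(n_samples, k=10):
    """Partition sample indices 0..n-1 into k contiguous, near-equal
    test folds; yields (train_indices, test_indices) per fold."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        folds.append((train, test))
        start += size
    return folds

folds = k_fold_indices(25, k=10)
print(len(folds))                        # 10
print(sorted(len(t) for _, t in folds))  # five folds of 2, five of 3
```

Accuracy is then averaged over the k held-out folds, which is the per-word score the study compares across AoA and frequency conditions.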
Additional Links: PMID-41335716
@article {pmid41335716,
year = {2025},
author = {Turk, S and Padfield, N and Mujahid, K and Camilleri, T and Camilleri, K},
title = {Word-specific properties affect classification performance in Brain Computer Interfaces for decoding imagined speech from EEG.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11254906},
pmid = {41335716},
issn = {2694-0604},
mesh = {*Brain-Computer Interfaces ; Humans ; *Electroencephalography/methods ; *Speech/physiology ; *Imagination/physiology ; Male ; Adult ; Female ; Young Adult ; *Brain/physiology ; },
abstract = {Decoding imagined speech from brain signals has become one of the most significant fields for BCI applications. One of the current challenges that researchers face is an insufficient classification performance for real-world applications. In this study, we investigate for the first time the effect of word-specific properties known to modulate brain signals on classification performance. We chose 16 word prompts that vary in age of acquisition (AoA) and word frequency, two word-specific properties known to modulate speech processing, and investigated their classification performance for speech imagery (SI) trials compared to the idle state using a random forest classifier and 10-fold cross-validation. We found highly significant effects of AoA, word frequency and their interaction on classification performance. Our results yield evidence that the word frequency and AoA of word prompts used in SI paradigms significantly influence the classification accuracy in a BCI application when SI trials are compared to the idle state.Relevance - Choosing word prompts with optimal properties can significantly improve classification performance in BCI applications.},
}
MeSH Terms:
show MeSH Terms
hide MeSH Terms
*Brain-Computer Interfaces
Humans
*Electroencephalography/methods
*Speech/physiology
*Imagination/physiology
Male
Adult
Female
Young Adult
*Brain/physiology
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Novel Multi-Stage Algorithm for Real-Time Detection and Correction of Ocular Artifacts in EEG: A Calibration-Free Approach.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-7.
Ocular artifacts, particularly blinks, significantly affect the integrity of electroencephalographic (EEG) signals, posing a challenge for real-time applications. Traditional correction methods often require a calibration phase or additional electrooculogram (EOG) channels, limiting their applicability in mobile and real-world settings. This study presents a novel detection and correction method, designed for online ocular artifact correction without the need for prior calibration: the CFo-CLEAN. The proposed method integrates an Enhanced Adaptive Data-driven Algorithm (eADA) for real-time identification and correction of ocular artifacts directly from EEG signals. Unlike conventional approaches, this implementation adapts dynamically to ongoing EEG variations, enhancing flexibility and performance. The study evaluates the CFo-CLEAN method using EEG data recorded from 38 participants during real-world driving scenarios. Performance comparisons were conducted against established correction techniques, including Independent Component Analysis (ICA), regression-based methods, and subspace reconstruction approaches. The evaluation considered both artifact removal efficiency and EEG signal preservation across different experimental conditions. Results demonstrated that the method effectively reduced ocular artifact contamination while preserving neurophysiological content. Specifically, two implementations of the method, utilizing 60-second and 90-second time windows, were analyzed, revealing that longer windows provided superior EEG signal preservation, particularly in higher frequency bands. These findings validate the effectiveness of the CFo-CLEAN method for real-time applications, making it a valuable tool for brain-computer interfaces (BCIs), neuroergonomics, and cognitive state monitoring. 
By avoiding the need for a calibration phase and incorporating adaptive processing, this method represents a significant advancement in real-time EEG artifact correction, facilitating its deployment in dynamic, real-world environments.
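CFo-CLEAN itself is the paper's contribution, but the regression-based baseline it is compared against is standard and easy to sketch for one EEG channel (the function name and synthetic signals below are illustrative only):

```python
def regress_out_eog(eeg, eog):
    """Classic regression-based ocular-artifact correction for a single
    EEG channel: estimate the EOG propagation coefficient b by least
    squares on mean-centered signals, then subtract b * EOG."""
    n = len(eeg)
    m_eeg = sum(eeg) / n
    m_eog = sum(eog) / n
    cov = sum((x - m_eeg) * (y - m_eog) for x, y in zip(eeg, eog))
    var = sum((y - m_eog) ** 2 for y in eog)
    b = cov / var
    return [x - b * y for x, y in zip(eeg, eog)], b

# Synthetic check: contaminate a toy brain signal with 0.4 x a
# blink-like EOG deflection; the regression should recover b near 0.4.
eog = [0.0, 5.0, 20.0, 20.0, 5.0, 0.0]
brain = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
eeg = [s + 0.4 * y for s, y in zip(brain, eog)]
cleaned, b = regress_out_eog(eeg, eog)
print(round(b, 3))  # 0.4
```

Note what makes this a baseline rather than a solution: it needs a dedicated EOG channel, which is exactly the requirement the calibration-free approach above is designed to remove.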
Additional Links: PMID-41335707
Citation:
@article {pmid41335707,
year = {2025},
author = {Ronca, V and Di Flumeri, G and Lungarini, L and Capotorto, R and Germano, D and Giorgi, A and Borghini, G and Babiloni, F and Arico, P},
title = {A Novel Multi-Stage Algorithm for Real-Time Detection and Correction of Ocular Artifacts in EEG: A Calibration-Free Approach.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-7},
doi = {10.1109/EMBC58623.2025.11254864},
pmid = {41335707},
issn = {2694-0604},
mesh = {*Electroencephalography/methods ; Humans ; *Algorithms ; *Artifacts ; *Signal Processing, Computer-Assisted ; Calibration ; Electrooculography/methods ; Male ; Adult ; Female ; },
}
RevDate: 2025-12-03
CmpDate: 2025-12-03
A Two-Stage Deep Learning Approach for EEG Artifact Removal and Classification: Towards Reliable Wearable Applications.
Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference, 2025:1-5.
EEG artifact removal remains a critical challenge in neural signal processing. In this paper, we present a novel two-stage approach combining a modified IC-UNet architecture for artifact removal with a modified VGGNet for artifact type identification. The system automatically triggers the classification stage when the difference between original and denoised signals exceeds a learned threshold, enabling the classification of ocular artifacts (eye blinks and saccadic movements) in the original signals. The denoising stage employs parallel encoding paths with channel-specific feature extraction, followed by a shared bottleneck and decoder network. The system was evaluated using EEG data from subjects performing controlled eye blink and saccadic movement tasks. The denoising network achieves high correlation values between predicted and ground truth signals, particularly in temporal and specific frontal regions (T5: 0.86 ± 0.01, T6: 0.85 ± 0.01, F3: 0.83 ± 0.01). The classification network shows excellent performance, achieving 99.35% accuracy on the test set with only four misclassifications out of 620 cases. Clinical relevance: This study demonstrates the feasibility of accurate artifact removal and classification in temporal and behind-the-ear EEG recordings, which is particularly relevant for the development of wearable EEG devices for continuous monitoring and hybrid BCI systems.
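The triggering rule at the heart of this two-stage design, run the classifier only when the denoising residual is large, can be sketched minimally. The paper learns its threshold; the fixed constant and synthetic epochs here are illustrative assumptions:

```python
import numpy as np

def needs_artifact_classification(original, denoised, threshold=1.0):
    """Gate from the two-stage idea: invoke the artifact-type classifier
    only when the mean-squared denoising residual exceeds a threshold
    (learned in the paper; a fixed illustrative constant here)."""
    residual = np.mean((original - denoised) ** 2)
    return bool(residual > threshold)

rng = np.random.default_rng(1)
clean = rng.standard_normal(512)
blink = np.zeros(512)
blink[100:130] = 30.0                      # synthetic blink transient
# An ideal denoiser leaves a clean epoch untouched and strips the blink.
flag_clean = needs_artifact_classification(clean, clean)          # small residual
flag_blink = needs_artifact_classification(clean + blink, clean)  # large residual
```

The appeal of such a gate for wearables is that the expensive classifier runs only on the minority of epochs where denoising actually changed the signal.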
Additional Links: PMID-41335679
Citation:
@article {pmid41335679,
year = {2025},
author = {Farabbi, A and Ballabio, F and Rossi, M and Palmisciano, AC and Antonello, N and Trojaniello, D and Ongarello, T and Cerveri, P and Mainardi, L},
title = {A Two-Stage Deep Learning Approach for EEG Artifact Removal and Classification: Towards Reliable Wearable Applications.},
journal = {Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference},
volume = {2025},
number = {},
pages = {1-5},
doi = {10.1109/EMBC58623.2025.11254976},
pmid = {41335679},
issn = {2694-0604},
mesh = {*Electroencephalography/methods/instrumentation ; Humans ; *Artifacts ; *Deep Learning ; *Wearable Electronic Devices ; *Signal Processing, Computer-Assisted ; Algorithms ; Male ; Blinking/physiology ; },
}
RevDate: 2025-12-03
Brain-inspired signal processing for detecting stress during mental arithmetic tasks.
Brain informatics pii:10.1186/s40708-025-00281-y [Epub ahead of print].
Brain-Computer Interfaces provide promising alternatives for detecting stress and enhancing emotional resilience. This study introduces a lightweight, subject-independent method for detecting stress during arithmetic tasks, designed for low computational cost and real-time use. Stress detection is performed through electroencephalography (EEG) signal analysis using a simplified processing pipeline. The method begins with preprocessing the EEG recordings to eliminate artifacts and focus on relevant frequency bands (α, β, and γ). Features are extracted by calculating band power and its deviation from a baseline. A statistical thresholding mechanism classifies stress and no-stress epochs without the need for subject-specific calibration. The approach was validated on a publicly available dataset of 36 subjects and achieved an average accuracy of 88.89%. The method effectively identifies stress-related brainwave patterns while maintaining efficiency, making it suitable for embedded and wearable devices. Unlike many existing systems, it does not require subject-specific training, enhancing its applicability in real-world environments.
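The band-power-versus-baseline idea in this pipeline is simple enough to sketch. The beta band, sampling rate, and multiplier k below are illustrative choices, not the paper's exact statistical threshold:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power of x within [lo, hi) Hz (FFT-based)."""
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def is_stress(epoch, baseline_power, fs, k=2.0):
    """Flag an epoch when beta-band (13-30 Hz) power deviates from the
    resting baseline by more than a factor k. Band and k are illustrative;
    the paper derives its threshold statistically from baseline deviations."""
    return band_power(epoch, fs, 13.0, 30.0) > k * baseline_power

fs = 128
t = np.arange(2 * fs) / fs
rest = np.sin(2 * np.pi * 20 * t)             # calm 20 Hz beta rhythm
baseline = band_power(rest, fs, 13.0, 30.0)
stressed = 3.0 * np.sin(2 * np.pi * 20 * t)   # elevated beta amplitude
```

Because the rule compares each epoch to its own baseline, no subject-specific classifier training is involved, which is what makes this style of pipeline cheap enough for embedded hardware.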
Additional Links: PMID-41335297
Citation:
@article {pmid41335297,
year = {2025},
author = {Belwafi, K and Alsuwaidi, A and Mejri, S and Djemal, R},
title = {Brain-inspired signal processing for detecting stress during mental arithmetic tasks.},
journal = {Brain informatics},
volume = {},
number = {},
pages = {},
doi = {10.1186/s40708-025-00281-y},
pmid = {41335297},
issn = {2198-4018},
}
RevDate: 2025-12-03
Cross-domain correlation analysis to improve SSVEP signals recognition in brain-computer interfaces.
Biomedical physics & engineering express [Epub ahead of print].
The recognition of steady-state visual evoked potential (SSVEP) signals in brain-computer interface (BCI) systems is challenging due to the lack of training data and significant inter-subject variability. To address this, we propose a novel unsupervised transfer learning framework that enhances SSVEP recognition without requiring any subject-specific calibration. Our method employs a three-stage pipeline: (1) preprocessing with similarity-aware subject selection and Euclidean alignment to mitigate domain shifts; (2) hybrid feature extraction combining canonical correlation analysis (CCA) and task-related component analysis (TRCA) to enhance signal-to-noise ratio and phase sensitivity; and (3) weighted correlation fusion for robust classification. Extensive evaluations on the Benchmark and BETA datasets demonstrate that our approach achieves state-of-the-art performance, with average accuracies of 83.20% and 69.08% at a 1-s data length, respectively, significantly outperforming existing methods like ttCCA and Ensemble-DNN. The highest information transfer rate reaches 157.53 bits/min, underscoring the framework's practical potential for plug-and-play SSVEP-based BCIs.
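One ingredient of the hybrid pipeline, standard CCA scoring against sinusoidal references, is well established and can be sketched as follows (the stimulus frequencies, two-channel synthetic EEG, and harmonic count are illustrative; the paper's subject selection, alignment, and TRCA stages are not reproduced):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    via the standard thin-QR / SVD identity."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_ssvep_score(eeg, fs, f, n_harmonics=2):
    """Standard-CCA SSVEP score: correlate multichannel EEG (samples x
    channels) with sin/cos references at frequency f and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([fn(2 * np.pi * h * f * t)
                            for h in range(1, n_harmonics + 1)
                            for fn in (np.sin, np.cos)])
    return max_canonical_corr(eeg, refs)

# Two noisy channels driven by a 10 Hz flicker; 10 Hz should outscore 13 Hz.
fs = 250
rng = np.random.default_rng(2)
t = np.arange(fs) / fs
eeg = np.column_stack([
    np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs),
    np.cos(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(fs),
])
```

Classification then amounts to evaluating the score at each candidate stimulus frequency and taking the argmax; the fusion stage above weights such scores against TRCA-derived ones.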
Additional Links: PMID-41335119
Citation:
@article {pmid41335119,
year = {2025},
author = {Hu, K and Wang, Y and Tu, K and Guo, H and Yan, J},
title = {Cross-domain correlation analysis to improve SSVEP signals recognition in brain-computer interfaces.},
journal = {Biomedical physics & engineering express},
volume = {},
number = {},
pages = {},
doi = {10.1088/2057-1976/ae2772},
pmid = {41335119},
issn = {2057-1976},
}
RevDate: 2025-12-03
Bayesian Causal Inference Accounts for Multisensory Filling-In at the Blind Spot.
bioRxiv : the preprint server for biology pii:2024.11.15.623713.
We asked three questions about multisensory perception across the physiological blind spot: (1) Does audiovisual integration persist without bottom-up visual input? (2) Does the brain adjust its sensory uncertainties and priors accordingly? (3) Are the underlying causal-inference computations preserved? Participants judged flashes and beeps in an audiovisual illusion presented across the blind spot or a matched control location. Responses were fit with a Bayesian Causal Inference (BCI) model, estimating sensory noise, numerosity priors, and causal-inference priors under multiple decision strategies evaluated using BIC. Illusions were robust at both locations, indicating preserved integration. Model fits showed higher visual uncertainty and broader prior expectations at the blind spot, while auditory precision and the causal prior remained stable. Thus, the computational architecture of causal inference is maintained, but its parameters flexibly adapt to local sensory reliability. These findings demonstrate that perceptual inference remains intact even in regions without retinal input, achieved by adjusting internal uncertainty rather than altering core multisensory computations.
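The core computation being fit here, the posterior probability that two cues share a common cause, has a standard closed form in the Gaussian causal-inference model. A minimal sketch, assuming a zero-mean spatial prior and illustrative noise parameters (the study's fitted values and decision strategies are not reproduced):

```python
import numpy as np

def p_common(xv, xa, sig_v, sig_a, sig_p, p_c=0.5):
    """Posterior probability that visual cue xv and auditory cue xa arose
    from a single cause, in the standard Gaussian causal-inference model
    with a zero-mean spatial prior of s.d. sig_p and causal prior p_c."""
    # Likelihood under one common cause (shared source integrated out).
    var1 = (sig_v * sig_a) ** 2 + (sig_v * sig_p) ** 2 + (sig_a * sig_p) ** 2
    like1 = np.exp(-((xv - xa) ** 2 * sig_p ** 2 + xv ** 2 * sig_a ** 2
                     + xa ** 2 * sig_v ** 2) / (2 * var1)) / (2 * np.pi * np.sqrt(var1))
    # Likelihood under two independent causes.
    var2 = (sig_v ** 2 + sig_p ** 2) * (sig_a ** 2 + sig_p ** 2)
    like2 = np.exp(-(xv ** 2 / (sig_v ** 2 + sig_p ** 2)
                     + xa ** 2 / (sig_a ** 2 + sig_p ** 2)) / 2) / (2 * np.pi * np.sqrt(var2))
    return p_c * like1 / (p_c * like1 + (1 - p_c) * like2)
```

Coincident cues yield a high common-cause posterior and widely separated cues a low one; raising the visual noise sig_v (as the study infers at the blind spot) shifts this balance without changing the model's architecture, which is precisely the paper's point.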
Additional Links: PMID-41332552
Citation:
@article {pmid41332552,
year = {2025},
author = {Chan, AYC and Stiles, NRB and Levitan, CA and Wu, DA and Tanguay, AR and Shimojo, S},
title = {Bayesian Causal Inference Accounts for Multisensory Filling-In at the Blind Spot.},
journal = {bioRxiv : the preprint server for biology},
volume = {},
number = {},
pages = {},
doi = {10.1101/2024.11.15.623713},
pmid = {41332552},
issn = {2692-8205},
}
RevDate: 2025-12-03
A link between increased temperature and avian body condition in a logged tropical forest.
Conservation biology : the journal of the Society for Conservation Biology [Epub ahead of print].
The combined effects of anthropogenic disturbances, such as logging and climate change, remain poorly understood; yet, they are the main threats to tropical biodiversity. Most tropical African countries lack long-term climate data, so climate impacts on biodiversity cannot be assessed. However, individuals experience weather, rather than climate, such that climate effects could be seen as the cumulative effects of weather over time. We used morphometric data collected in 1996-2000 and 2017-2021 on understory birds in the Budongo Forest, Uganda, to assess how logging history and short-term weather variations affected the body condition (body condition index [BCI]) of birds. Birds were captured in mist nets in logged and unlogged sites. We analyzed data with Bayesian mixed-effects models. The BCI values were lower in logged forests and decreased as maximum temperatures increased, irrespective of the sensitivity of the birds to logging. Birds responded quickly to increasing temperatures and precipitation (within 1 week), and the longer a hot period was, the worse the effect on birds in heavily logged forests, suggesting reduced thermal buffering. Contrary to our expectations, BCI values for 2017-2021 were higher than values for 1996-2000, indicating possible forest recovery. Our findings underscore the importance of short-term weather data to predict climate change impacts. Such predictions can inform tropical forest management and restoration measures.
Additional Links: PMID-41332173
Citation:
@article {pmid41332173,
year = {2025},
author = {Uwimbabazi, M and Muhanguzi, G and Eryenyu, D and Arua, P and Tweheyo, M and Patten, MA and Eycott, AE and Babweteera, F},
title = {A link between increased temperature and avian body condition in a logged tropical forest.},
journal = {Conservation biology : the journal of the Society for Conservation Biology},
volume = {},
number = {},
pages = {e70190},
doi = {10.1111/cobi.70190},
pmid = {41332173},
issn = {1523-1739},
support = {//Earthwatch Institute/ ; //Royal Zoological Society of Scotland/ ; },
}
RevDate: 2025-12-02
Neural dissociation of attention and working memory through inhibitory control.
Nature communications pii:10.1038/s41467-025-66553-7 [Epub ahead of print].
Attention and working memory (WM) have traditionally been considered closely linked processes with shared neural mechanisms. In information selection, attention is often conceptualized as a gatekeeper to WM, regulating which information is encoded and stored. Here, combining tasks specifically designed to separate attention from WM encoding with a multimodal approach, we provide converging neural and causal evidence that these processes are dissociable. Functional MRI identifies the supramarginal gyrus (SMG) as the key region enabling this dissociation, while dynamic causal modeling reveals the neural circuitry through which the SMG exerts inhibitory control over attentional representations, regulating their integration into WM. Furthermore, neuromodulation via transcranial direct current stimulation (tDCS) demonstrates that enhancing SMG activity strengthens this inhibitory control. A second tDCS experiment using varied stimuli confirms the generalizability of the effect. Finally, a transcranial magnetic stimulation (TMS) experiment provides further causal evidence with greater spatial precision. These findings challenge the long-standing view that attention and WM encoding form a continuous process, demonstrating instead that they constitute two dissociable neural processes of information selection.
Additional Links: PMID-41330936
Citation:
@article {pmid41330936,
year = {2025},
author = {Liu, Y and Fu, Y and Tang, E and Wu, H and Han, J and Xie, M and Zhang, Y and Peng, B and Huang, J and Liu, H and Chen, H and Qin, P},
title = {Neural dissociation of attention and working memory through inhibitory control.},
journal = {Nature communications},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41467-025-66553-7},
pmid = {41330936},
issn = {2041-1723},
support = {32171046//National Natural Science Foundation of China (National Science Foundation of China)/ ; 32200844//National Natural Science Foundation of China (National Science Foundation of China)/ ; 32371098//National Natural Science Foundation of China (National Science Foundation of China)/ ; 31971032//National Natural Science Foundation of China (National Science Foundation of China)/ ; },
}
RevDate: 2025-12-02
Task-specific effects of sleep deprivation on cognitive function and EEG brain network in night-shift nurses.
Brain research bulletin, 233:111661 pii:S0361-9230(25)00473-3 [Epub ahead of print].
BACKGROUND: Night-shift nurses experience chronic sleep deprivation, which impairs cognitive functions crucial for patient safety. However, the underlying reorganization of brain functional networks remains poorly understood. This study aimed to investigate the task-specific effects of sleep deprivation on brain network topology during sustained attention and working memory in night-shift female nurses.
METHODS: In a within-subjects design, electroencephalography (EEG) data from 28 female nurses were recorded during a rested session (R-Session) and a sleep-deprived session (SD-Session) immediately following a night shift. Participants performed the psychomotor vigilance test (PVT) and 2-back tasks. Functional connectivity was estimated using the weighted phase lag index (wPLI), and brain network properties were quantified using graph theoretical analysis at both global and nodal levels.
RESULTS: Our findings revealed a clear behavioral dissociation: sleep deprivation significantly impaired PVT performance but had no effect on 2-back task performance. This dissociation was mirrored by distinct patterns of neural reorganization. During the PVT, the brain network exhibited a compensatory enhancement of global topology, characterized by a significant increase in clustering coefficient, global efficiency, local efficiency, and small-worldness, alongside a decrease in characteristic path length, particularly in the theta and beta bands. In contrast, the 2-back task showed only a localized increase in the theta-band clustering coefficient. Nodal analysis further revealed a critical topographical distinction: PVT-related efficiency changes were strongly right-lateralized, whereas 2-back changes were bilaterally distributed.
CONCLUSION: These results demonstrate that sleep deprivation elicits task-specific neurocognitive adaptations. Sustained attention appears highly vulnerable, prompting a broad compensatory reorganization of the right-hemispheric attention network. Conversely, working memory function remains behaviorally stable, underpinned by a more specific network reorganization, primarily involving increased local connectivity. This study deepens our understanding of the neural mechanisms underlying cognitive vulnerability and resilience among nurses.
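The connectivity measure underlying these networks, the weighted phase lag index, can be sketched from its definition (the graph-theoretical analysis on top of it is not reproduced; epoch counts, frequencies, and noise levels below are illustrative):

```python
import numpy as np

def wpli(x_epochs, y_epochs, fs, band):
    """Weighted phase lag index between two channels: the imaginary part
    of the cross-spectrum averaged over epochs (rows), normalized by its
    mean magnitude, then averaged over frequency bins inside `band`."""
    X = np.fft.rfft(x_epochs, axis=1)
    Y = np.fft.rfft(y_epochs, axis=1)
    freqs = np.fft.rfftfreq(x_epochs.shape[1], 1 / fs)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    imag_cs = np.imag(X[:, sel] * np.conj(Y[:, sel]))
    return (np.abs(imag_cs.mean(axis=0)) / (np.abs(imag_cs).mean(axis=0) + 1e-12)).mean()

# Phase-lagged 6 Hz coupling should yield a higher theta-band wPLI than noise.
fs, n_ep, n_s = 128, 60, 256
rng = np.random.default_rng(3)
phases = rng.uniform(0, 2 * np.pi, n_ep)[:, None]   # random phase per epoch
t = np.arange(n_s) / fs
x = np.sin(2 * np.pi * 6 * t + phases) + 0.2 * rng.standard_normal((n_ep, n_s))
y = np.sin(2 * np.pi * 6 * t + phases + np.pi / 4) + 0.2 * rng.standard_normal((n_ep, n_s))
coupled = wpli(x, y, fs, (5.5, 6.5))
uncoupled = wpli(rng.standard_normal((n_ep, n_s)),
                 rng.standard_normal((n_ep, n_s)), fs, (5.5, 6.5))
```

Because wPLI weights by the imaginary cross-spectrum, zero-lag volume-conduction effects contribute nothing, which is why it is a common choice for scalp-EEG network studies like this one.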
Additional Links: PMID-41330225
Citation:
@article {pmid41330225,
year = {2025},
author = {Yuan, J and Xu, M and Qian, L and Gao, L and Sun, Y},
title = {Task-specific effects of sleep deprivation on cognitive function and EEG brain network in night-shift nurses.},
journal = {Brain research bulletin},
volume = {233},
number = {},
pages = {111661},
doi = {10.1016/j.brainresbull.2025.111661},
pmid = {41330225},
issn = {1873-2747},
}
RevDate: 2025-12-02
Lightweight deep learning models for EEG decoding: A Review.
Journal of neural engineering [Epub ahead of print].
Brain-computer interface (BCI) technology enables direct communication between the human brain and external devices by decoding electroencephalogram (EEG) signals into actionable commands. As a noninvasive and portable modality, EEG-based BCIs hold promise for applications ranging from neurorehabilitation to assistive technologies. However, their performance depends critically on the accurate extraction of relevant neural features and the reliable recognition of underlying patterns. Deep learning has transformed this process. By automatically learning complex, task-relevant representations from raw or minimally processed EEG data, deep neural networks have surpassed many traditional handcrafted feature approaches in both accuracy and adaptability. Yet, the substantial computational and memory demands of many deep learning architectures limit their deployment in portable or real-time BCI systems. This challenge has motivated a growing interest in lightweight models: architectures optimized to reduce complexity while preserving or even enhancing performance. This paper provides a systematic review of such lightweight deep learning models for EEG signal classification, with EEGNet serving as a representative baseline. To organize this landscape, existing approaches are categorized into three main strategies: (1) information integration through multi-scale feature fusion, (2) optimization of hidden layer design, and (3) hybrid strategies combining multiple structural enhancements. The review synthesizes recent advances, identifies emerging trends, and outlines potential directions for future research. These insights aim to inform the design of efficient and robust EEG classification architectures capable of meeting the practical demands of real-world BCI applications.
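A quick back-of-the-envelope shows why the depthwise-separable factorization used by EEGNet-style models makes networks lightweight; the channel count (64) and kernel length (15) below are illustrative, not taken from any specific model in the review:

```python
def conv1d_params(in_ch, out_ch, k):
    """Weight count of a standard 1-D convolution (biases omitted)."""
    return in_ch * out_ch * k

def separable_conv1d_params(in_ch, out_ch, k):
    """Depthwise (one k-tap filter per input channel) followed by a
    pointwise 1x1 channel mix: the factorization EEGNet-style
    architectures rely on to cut parameters."""
    return in_ch * k + in_ch * out_ch

standard = conv1d_params(64, 64, 15)            # 64 * 64 * 15 = 61,440 weights
lightweight = separable_conv1d_params(64, 64, 15)  # 64 * 15 + 64 * 64 = 5,056
```

At these sizes the factorized layer carries roughly one twelfth of the weights, which is the kind of saving that makes real-time, on-device EEG decoding plausible.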
Additional Links: PMID-41330041
Citation:
@article {pmid41330041,
year = {2025},
author = {Li, Y and Chen, E and Xiao, X and Xu, M and Ming, D},
title = {Lightweight deep learning models for EEG decoding: A Review.},
journal = {Journal of neural engineering},
volume = {},
number = {},
pages = {},
doi = {10.1088/1741-2552/ae2717},
pmid = {41330041},
issn = {1741-2552},
}
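The review above centers on lightweight architectures, with EEGNet as the representative baseline. A key reason such models are small is their use of depthwise-separable convolutions in place of standard ones. The sketch below is purely illustrative (the channel counts and kernel length are assumed example values, not taken from the paper) and shows the parameter-count arithmetic behind that design choice:

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard 1-D convolution (bias omitted)."""
    return c_in * c_out * k

def separable_conv_params(c_in, c_out, k):
    """Depthwise step (one k-tap filter per input channel) followed by a
    1x1 pointwise convolution -- the factorization EEGNet-style lightweight
    models rely on (bias omitted)."""
    return c_in * k + c_in * c_out

# Illustrative sizes: 64 EEG channels, 128 feature maps, 63-sample kernel.
standard = conv_params(64, 128, 63)             # 516,096 weights
separable = separable_conv_params(64, 128, 63)  # 4,032 + 8,192 = 12,224 weights
print(standard, separable, round(standard / separable, 1))
```

For these example sizes the factorized layer needs roughly 40x fewer weights, which is the kind of reduction that makes real-time, on-device EEG decoding plausible.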
RevDate: 2025-12-02
Do we advise as one likes? The alignment bias in social advice giving.
PLoS computational biology, 21(12):e1013732 pii:PCOMPBIOL-D-25-00735 [Epub ahead of print].
We often give advice to influence others, but could our own advice also be shaped by the very individuals we aim to influence (i.e., advisees)? This reverse flow of social influence-from those typically seen as being influenced to those who provide the influence-has been largely neglected, limiting our understanding of the reciprocal nature of human communications. Here, we conducted a series of experiments and applied computational modelling to systematically investigate how advisees' opinions shape the advice-giving process. In an investment game, participants (n = 346, across four studies) provided advice either independently or after observing advisees' opinions (Studies 1 & 2), with feedback on their advice (acceptance or rejection) provided by advisees (Studies 3 & 4). Our findings reveal that advisors tend to adjust their advice to align with the advisees' opinions (we refer to this as the alignment bias) (Study 1). This tendency, which reflects normative conformity, persists even when advisors were directly incentivized to provide accurate advice (Study 2). As feedback is introduced, advisors' behavior shifts in ways best captured by a reinforcement learning model, suggesting that advisees' feedback drives adaptations in advice giving that maximize acceptance and minimize rejection (Study 3). This adaptation persisted even when acceptance is rare, as bolstered by the model-based evidence (Study 4). Collectively, our findings highlight advisors' susceptibility to the consequence of giving advice, which can lead to counterproductive impacts on decision-making processes and misinformation exacerbation in social encounters.
Additional Links: PMID-41329786
@article {pmid41329786,
year = {2025},
author = {Luo, X and Zhang, L and Pan, Y},
title = {Do we advise as one likes? The alignment bias in social advice giving.},
journal = {PLoS computational biology},
volume = {21},
number = {12},
pages = {e1013732},
doi = {10.1371/journal.pcbi.1013732},
pmid = {41329786},
issn = {1553-7358},
abstract = {We often give advice to influence others, but could our own advice also be shaped by the very individuals we aim to influence (i.e., advisees)? This reverse flow of social influence-from those typically seen as being influenced to those who provide the influence-has been largely neglected, limiting our understanding of the reciprocal nature of human communications. Here, we conducted a series of experiments and applied computational modelling to systematically investigate how advisees' opinions shape the advice-giving process. In an investment game, participants (n = 346, across four studies) provided advice either independently or after observing advisees' opinions (Studies 1 & 2), with feedback on their advice (acceptance or rejection) provided by advisees (Studies 3 & 4). Our findings reveal that advisors tend to adjust their advice to align with the advisees' opinions (we refer to this as the alignment bias) (Study 1). This tendency, which reflects normative conformity, persists even when advisors were directly incentivized to provide accurate advice (Study 2). As feedback is introduced, advisors' behavior shifts in ways best captured by a reinforcement learning model, suggesting that advisees' feedback drives adaptations in advice giving that maximize acceptance and minimize rejection (Study 3). This adaptation persisted even when acceptance is rare, as bolstered by the model-based evidence (Study 4). Collectively, our findings highlight advisors' susceptibility to the consequence of giving advice, which can lead to counterproductive impacts on decision-making processes and misinformation exacerbation in social encounters.},
}
RevDate: 2025-12-02
The Secondary Motor Cortex-External Globus Pallidus Pathway Regulates Auditory Feedback of Volitional Control.
Neuroscience bulletin [Epub ahead of print].
Effective use of brain-computer interfaces (BCIs) requires the ability to suppress a planned action (volitional inhibition) for adaptable control in real-world scenarios, but their mechanisms are unclear. Here, we used fiber photometry to monitor external globus pallidus (GPe) and subthalamic nucleus (STN) neurons' activity in mice during a volitional stop-signal task (67% GO, 33% NO-GO). GPe/STN neurons (receiving M2 projections) responded to auditory cues, feedback, and rewards in both trials. Importantly, chemogenetic activation of the M2-GPe pathway enhanced volitional inhibition by modulating auditory feedback response, yet inhibited GPe neurons' feedback response. Furthermore, time-locked optogenetic inhibition of M2-projecting GPe neurons at auditory feedback also enhanced volitional inhibition via prolonged GO trial response times. Collectively, these findings identified the M2-GPe pathway for auditory biofeedback to improve volitional control, offering novel avenues for the advancement of neural interfaces for biofeedback and enhancement of BCI efficacy.
Additional Links: PMID-41329325
@article {pmid41329325,
year = {2025},
author = {Luo, S and Fan, Y and Yu, F and Zhou, X and Hu, K and Yi, H and Zhou, H and Li, T and Chen, JF and Zhang, L},
title = {The Secondary Motor Cortex-External Globus Pallidus Pathway Regulates Auditory Feedback of Volitional Control.},
journal = {Neuroscience bulletin},
volume = {},
number = {},
pages = {},
pmid = {41329325},
issn = {1995-8218},
abstract = {Effective use of brain-computer interfaces (BCIs) requires the ability to suppress a planned action (volitional inhibition) for adaptable control in real-world scenarios, but their mechanisms are unclear. Here, we used fiber photometry to monitor external globus pallidus (GPe) and subthalamic nucleus (STN) neurons' activity in mice during a volitional stop-signal task (67% GO, 33% NO-GO). GPe/STN neurons (receiving M2 projections) responded to auditory cues, feedback, and rewards in both trials. Importantly, chemogenetic activation of the M2-GPe pathway enhanced volitional inhibition by modulating auditory feedback response, yet inhibited GPe neurons' feedback response. Furthermore, time-locked optogenetic inhibition of M2-projecting GPe neurons at auditory feedback also enhanced volitional inhibition via prolonged GO trial response times. Collectively, these findings identified the M2-GPe pathway for auditory biofeedback to improve volitional control, offering novel avenues for the advancement of neural interfaces for biofeedback and enhancement of BCI efficacy.},
}
RevDate: 2025-12-02
CmpDate: 2025-12-02
Comorbidity of undiagnosed mood symptoms with dementia risk in multi-regional multi-ethnic adults: evidence from epidemiological findings and plasma metabolites.
Epidemiology and psychiatric sciences, 34:e58 pii:S2045796025100346.
AIMS: To investigate the association of midlife and late-life undiagnosed mood symptoms, especially their comorbidity, with long-term dementia risk among multi-regional and ethnic adults.
METHODS: The prospective study used data from the UK Biobank (N = 142,670; mean follow-up 11.0 years) and three Asian studies (N = 1,610; mean follow-up 4.4 years). Undiagnosed mood symptoms (manic symptoms, depressive symptoms and comorbidity of depressive and manic symptoms) and diagnosed mood disorders (depression, mania and bipolar disorders) were classified. Plasma levels of 168 metabolites were measured. The association between undiagnosed mood symptoms and 12-year dementia (including subtypes) risk and domain-specific cognitive function was examined. The contribution of metabolites in explaining the association between symptom comorbidity and dementia risk was estimated.
RESULTS: Undiagnosed mood symptoms were prevalent (11.4% in the UK cohort and 31.2% in Asian cohorts) among 1,462 (1.0%) and 74 (19.4%) participants who developed dementia. Comorbidity of undiagnosed mood symptoms was associated with higher dementia risk (sub-distribution hazard ratios = 9.46; 95% confidence interval = 4.07-21.97), especially Alzheimer's disease, and with worse reasoning ability, poorer numeric memory and metabolic dysfunction. Glucose and total Esterified Cholesterol explained 9.1% of the association between symptom comorbidity and dementia, with most of the contribution being from glucose (6.8%).
CONCLUSIONS: Comorbidity of undiagnosed mood symptoms was associated with a higher cumulative risk of dementia in the long term. Glucose metabolism could be implicated in the development of mood disorders and dementia. The distinctive pathophysiological mechanism between psychiatric and neurodegenerative disorders warrants further exploration.
Additional Links: PMID-41328607
@article {pmid41328607,
year = {2025},
author = {Zhang, H and Liao, Y and Lin, Z and Wen, H and Pang, T and Zhao, X and Zhang, W and Lou, X and Chen, C and Hu, S and Liu, Z and Xu, X},
title = {Comorbidity of undiagnosed mood symptoms with dementia risk in multi-regional multi-ethnic adults: evidence from epidemiological findings and plasma metabolites.},
journal = {Epidemiology and psychiatric sciences},
volume = {34},
number = {},
pages = {e58},
doi = {10.1017/S2045796025100346},
pmid = {41328607},
issn = {2045-7979},
mesh = {Humans ; Female ; Male ; United Kingdom/epidemiology ; *Dementia/epidemiology/ethnology/blood ; Middle Aged ; Aged ; Comorbidity ; Prospective Studies ; *Mood Disorders/epidemiology/ethnology/blood ; *Bipolar Disorder/epidemiology/ethnology ; Risk Factors ; Ethnicity/statistics & numerical data ; Prevalence ; *Depression/epidemiology/ethnology ; },
abstract = {AIMS: To investigate the association of midlife and late-life undiagnosed mood symptoms, especially their comorbidity, with long-term dementia risk among multi-regional and ethnic adults.
METHODS: The prospective study used data from the UK Biobank (N = 142,670; mean follow-up 11.0 years) and three Asian studies (N = 1,610; mean follow-up 4.4 years). Undiagnosed mood symptoms (manic symptoms, depressive symptoms and comorbidity of depressive and manic symptoms) and diagnosed mood disorders (depression, mania and bipolar disorders) were classified. Plasma levels of 168 metabolites were measured. The association between undiagnosed mood symptoms and 12-year dementia (including subtypes) risk and domain-specific cognitive function was examined. The contribution of metabolites in explaining the association between symptom comorbidity and dementia risk was estimated.
RESULTS: Undiagnosed mood symptoms were prevalent (11.4% in the UK cohort and 31.2% in Asian cohorts) among 1,462 (1.0%) and 74 (19.4%) participants who developed dementia. Comorbidity of undiagnosed mood symptoms was associated with higher dementia risk (sub-distribution hazard ratios = 9.46; 95% confidence interval = 4.07-21.97), especially Alzheimer's disease, and with worse reasoning ability, poorer numeric memory and metabolic dysfunction. Glucose and total Esterified Cholesterol explained 9.1% of the association between symptom comorbidity and dementia, with most of the contribution being from glucose (6.8%).
CONCLUSIONS: Comorbidity of undiagnosed mood symptoms was associated with a higher cumulative risk of dementia in the long term. Glucose metabolism could be implicated in the development of mood disorders and dementia. The distinctive pathophysiological mechanism between psychiatric and neurodegenerative disorders warrants further exploration.},
}
RevDate: 2025-12-02
CmpDate: 2025-12-02
Paradigm Shift in Global Governance of Medical Brain-Computer Interface: Addressing Practical Challenges Through Institutional Innovation.
Risk management and healthcare policy, 18:3755-3768.
The rapid advancement of medical brain-computer interface (BCI) technology necessitates the transformation and upgrading of traditional governance paradigms urgently. China, the United States, and the European Union hold prominent positions in the global medical BCI landscape and have developed three highly representative governance models. Existing research on medical BCI primarily focuses on specific countries or regions, but it has failed to conduct a comprehensive comparison of governance frameworks across different jurisdictions from a horizontal perspective. In this study, a horizontal policy text analysis was employed to comprehensively compare the divergent approaches of China, the United States, and the European Union in regulating medical BCI, focusing on regulatory frameworks, approval procedures, neural data governance, and ethical governance. China's medical BCI governance is state-led, prioritizing safety; the United States features innovation-driven flexibility; the European Union uses an empowerment model to strictly mitigate risks. Yet these three models have inherent drawbacks. To ensure the healthy development of medical BCI, we suggest China, the United States, the European Union and other jurisdictions establish a lifecycle regulatory mechanism, introduce the regulatory sandbox, promote collaborative governance among multiple subjects, build hierarchical informed consent rules, endow users with neurorights and refine BCI ethical governance.
Additional Links: PMID-41328405
@article {pmid41328405,
year = {2025},
author = {Zhu, R and Zhao, Y and Li, Y},
title = {Paradigm Shift in Global Governance of Medical Brain-Computer Interface: Addressing Practical Challenges Through Institutional Innovation.},
journal = {Risk management and healthcare policy},
volume = {18},
number = {},
pages = {3755-3768},
pmid = {41328405},
issn = {1179-1594},
abstract = {The rapid advancement of medical brain-computer interface (BCI) technology necessitates the transformation and upgrading of traditional governance paradigms urgently. China, the United States, and the European Union hold prominent positions in the global medical BCI landscape and have developed three highly representative governance models. Existing research on medical BCI primarily focuses on specific countries or regions, but it has failed to conduct a comprehensive comparison of governance frameworks across different jurisdictions from a horizontal perspective. In this study, a horizontal policy text analysis was employed to comprehensively compare the divergent approaches of China, the United States, and the European Union in regulating medical BCI, focusing on regulatory frameworks, approval procedures, neural data governance, and ethical governance. China's medical BCI governance is state-led, prioritizing safety; the United States features innovation-driven flexibility; the European Union uses an empowerment model to strictly mitigate risks. Yet these three models have inherent drawbacks. To ensure the healthy development of medical BCI, we suggest China, the United States, the European Union and other jurisdictions establish a lifecycle regulatory mechanism, introduce the regulatory sandbox, promote collaborative governance among multiple subjects, build hierarchical informed consent rules, endow users with neurorights and refine BCI ethical governance.},
}
RevDate: 2025-12-02
CmpDate: 2025-12-02
Decoding multi-joint hand movements from brain signals by learning a synergy-based neural manifold.
Patterns (New York, N.Y.), 6(11):101394.
Brain-computer interfaces have shown great potential in the reconstruction of motor functions. However, decoding complex and natural movements, such as hand movements, remains challenging. Traditional approaches primarily decode the movement of multiple joints in the hand independently, while the inherent synergies underlying these movements have not been well explored. Here, we demonstrate that complex hand movements can be decomposed into a set of motor primitives, each involving a synergy of multi-joint movements. Motor cortical neural activities recruit the motor synergies through spatiotemporal parameters to accomplish the complex motor targets. By learning a joint neural-motor representation of these motor synergies and decoding spatiotemporal parameters rather than the joint-level kinematics, significant improvement could be obtained in hand movement decoding. We propose a neural decoding framework, SynergyNet, to effectively learn the neural-motor synergies for hand movement control. The proposed approach significantly outperforms benchmark methods and provides high interpretability with the hand movement neural decoding task.
Additional Links: PMID-41328166
@article {pmid41328166,
year = {2025},
author = {Sun, H and Wang, Z and Qi, Y and Wang, Y},
title = {Decoding multi-joint hand movements from brain signals by learning a synergy-based neural manifold.},
journal = {Patterns (New York, N.Y.)},
volume = {6},
number = {11},
pages = {101394},
pmid = {41328166},
issn = {2666-3899},
abstract = {Brain-computer interfaces have shown great potential in the reconstruction of motor functions. However, decoding complex and natural movements, such as hand movements, remains challenging. Traditional approaches primarily decode the movement of multiple joints in the hand independently, while the inherent synergies underlying these movements have not been well explored. Here, we demonstrate that complex hand movements can be decomposed into a set of motor primitives, each involving a synergy of multi-joint movements. Motor cortical neural activities recruit the motor synergies through spatiotemporal parameters to accomplish the complex motor targets. By learning a joint neural-motor representation of these motor synergies and decoding spatiotemporal parameters rather than the joint-level kinematics, significant improvement could be obtained in hand movement decoding. We propose a neural decoding framework, SynergyNet, to effectively learn the neural-motor synergies for hand movement control. The proposed approach significantly outperforms benchmark methods and provides high interpretability with the hand movement neural decoding task.},
}
RevDate: 2025-12-01
Autism spectrum disorder disrupts brain network connectivity maturation during childhood development.
Scientific reports pii:10.1038/s41598-025-30971-w [Epub ahead of print].
Understanding the developmental trajectory of autism spectrum disorder (ASD) remains a critical barrier for timely intervention in children. Here, we investigated the deficit brain maturation trajectory during childhood development in 35 ASD level 1 and 35 neurotypical children through an electroencephalography (EEG) approach. An empirical study of the potential EEG biomarkers was demonstrated in a comprehensive view of group difference and age-related group comparison using alpha power, peak alpha frequency and transfer entropy during resting. We found a significant disruption of directional brain network communication between regions in children with ASD compared to neurotypical children. Our results also suggested that the children with ASD had altered occipital alpha power and peak alpha frequency development. The present study revealed promising findings that underpinned the developmental disruption of autism spectrum disorder, which may provide a prevailing insight into the disease pathology mechanisms, paving the way for future intervention advancement.
Additional Links: PMID-41326740
@article {pmid41326740,
year = {2025},
author = {Tiawongsuwan, L and Klomchitcharoen, S and Chumanee, W and Tangwattanasirikun, T and Saksittikorn, S and Chawaruechai, S and Jatupornpoonsub, T and Wongsawat, Y},
title = {Autism spectrum disorder disrupts brain network connectivity maturation during childhood development.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-30971-w},
pmid = {41326740},
issn = {2045-2322},
support = {B42G670043//National Higher Education Science Research and Innovation Policy Council (PMU B)/ ; },
abstract = {Understanding the developmental trajectory of autism spectrum disorder (ASD) remains a critical barrier for timely intervention in children. Here, we investigated the deficit brain maturation trajectory during childhood development in 35 ASD level 1 and 35 neurotypical children through an electroencephalography (EEG) approach. An empirical study of the potential EEG biomarkers was demonstrated in a comprehensive view of group difference and age-related group comparison using alpha power, peak alpha frequency and transfer entropy during resting. We found a significant disruption of directional brain network communication between regions in children with ASD compared to neurotypical children. Our results also suggested that the children with ASD had altered occipital alpha power and peak alpha frequency development. The present study revealed promising findings that underpinned the developmental disruption of autism spectrum disorder, which may provide a prevailing insight into the disease pathology mechanisms, paving the way for future intervention advancement.},
}
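One of the biomarkers in the ASD study above is peak alpha frequency: the frequency of maximal power in the 8-13 Hz band of resting EEG. The snippet below is a minimal, self-contained illustration of that measure on a synthetic signal (the sampling rate, duration, and noise level are assumed example values; real pipelines would use a windowed spectral estimate rather than a raw FFT):

```python
import numpy as np

fs = 250.0                      # sampling rate in Hz (assumed example value)
t = np.arange(0, 4, 1 / fs)     # 4 s of synthetic "resting EEG"
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.normal(size=t.size)

# Peak alpha frequency: frequency of maximal power within 8-13 Hz.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
band = (freqs >= 8) & (freqs <= 13)
paf = freqs[band][np.argmax(spectrum[band])]
print(paf)   # 10.0 for this synthetic 10 Hz oscillation
```

With a 4 s window the frequency resolution is 0.25 Hz, enough to track the developmental alpha-peak shifts the study reports.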
RevDate: 2025-12-01
Evaluating EEG-to-text models through noise-based performance analysis.
Scientific reports pii:10.1038/s41598-025-29587-x [Epub ahead of print].
Brain-computer interfaces (BCIs) have the potential to revolutionize communication for individuals with severe disabilities. EEG-to-text models, which translate brain signals into written language, offer a promising avenue for restoring communication abilities. Recent advancements in machine learning have improved the accuracy and speed of these models, but their true capabilities remain unclear due to limitations in evaluation methodologies. This study critically examines the performance of EEG-to-text models, focusing on their ability to learn from EEG signals rather than simply memorizing patterns. We introduce a novel methodology that compares model performance on EEG data with that on noise inputs. Our findings reveal that many EEG-to-text models perform similarly or even better on noise, suggesting that they may be memorizing patterns rather than truly learning from EEG signals. These results highlight the need for more rigorous benchmarking and evaluation practices in the field of EEG-to-text translation. By addressing the limitations of current methodologies, we can develop more reliable and trustworthy systems that truly harness the potential of brain-computer interfaces for communication.
Additional Links: PMID-41326639
@article {pmid41326639,
year = {2025},
author = {Jo, H and Yang, Y and Han, J and Duan, Y and Xiong, H and Lee, WH},
title = {Evaluating EEG-to-text models through noise-based performance analysis.},
journal = {Scientific reports},
volume = {},
number = {},
pages = {},
doi = {10.1038/s41598-025-29587-x},
pmid = {41326639},
issn = {2045-2322},
support = {RS-2023-00226263//Korea Creative Content Agency/ ; RS-2024-00509257//Institute for Information and Communications Technology Promotion/ ; },
abstract = {Brain-computer interfaces (BCIs) have the potential to revolutionize communication for individuals with severe disabilities. EEG-to-text models, which translate brain signals into written language, offer a promising avenue for restoring communication abilities. Recent advancements in machine learning have improved the accuracy and speed of these models, but their true capabilities remain unclear due to limitations in evaluation methodologies. This study critically examines the performance of EEG-to-text models, focusing on their ability to learn from EEG signals rather than simply memorizing patterns. We introduce a novel methodology that compares model performance on EEG data with that on noise inputs. Our findings reveal that many EEG-to-text models perform similarly or even better on noise, suggesting that they may be memorizing patterns rather than truly learning from EEG signals. These results highlight the need for more rigorous benchmarking and evaluation practices in the field of EEG-to-text translation. By addressing the limitations of current methodologies, we can develop more reliable and trustworthy systems that truly harness the potential of brain-computer interfaces for communication.},
}
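The noise-control logic of the EEG-to-text evaluation above can be illustrated with a toy experiment: a decoder that genuinely uses its input should collapse to chance accuracy when the input is replaced by noise, whereas a model that has memorized output patterns will not degrade. The sketch below is a simplified analogue (nearest-centroid classifier, synthetic data; all names and sizes are invented for illustration), not the paper's benchmark:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(informative):
    """Toy 'EEG' with class structure, or matched pure noise."""
    X = rng.normal(size=(400, 32))
    y = rng.integers(0, 2, size=400)
    if informative:
        X[y == 1] += 1.0          # class signal exists only in the "EEG" case
    return X[:300], y[:300], X[300:], y[300:]

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

acc_eeg = nearest_centroid_acc(*make_data(informative=True))
acc_noise = nearest_centroid_acc(*make_data(informative=False))
print(acc_eeg, acc_noise)   # noise accuracy stays near chance (0.5)
```

The study's warning is precisely that some published EEG-to-text models fail this kind of control: their scores on noise inputs match or exceed their scores on real EEG.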
RevDate: 2025-12-02
Automated ladder rung test for evaluating motor coordination in Parkinson's disease mouse models.
Journal of neuroscience methods, 426:110642 pii:S0165-0270(25)00286-9 [Epub ahead of print].
BACKGROUND: The ladder rung walking test assesses fine motor coordination in Parkinson's disease (PD) mouse models but relies on labor-intensive, subjective manual scoring, necessitating an automated, objective system.
NEW METHOD: We developed a cost-effective automated ladder rung test system with a ladder featuring regular and irregular rung patterns, array through-beam optical sensors for foot-error detection, and an Arduino microcontroller. Custom Python software enables intuitive control, real-time visualization, dynamic sensor mapping, adjustable debounce, and CSV data export.
RESULTS: In an MPTP-induced PD mouse model, the system detected increased foot errors on irregular rungs (5.13 ± 1.04 vs. 1.78 ± 0.69 in controls, p < 0.0001) and longer traversal times (18.04 ± 2.64 s vs. 13.38 ± 1.95 s, p = 0.001), corroborated by open field and rotarod tests and a 68.7 % reduction in substantia nigra neurons.
COMPARISON WITH EXISTING METHODS: Unlike costly camera-based systems requiring complex algorithms, our system uses simple photoelectric sensors and costs approximately 127 USD for all components, achieving 96.4 % precision and 99.3 % recall, making it accessible and user-friendly.
CONCLUSIONS: This automated system offers a reproducible, high-throughput tool for objective motor assessment in PD and neurological models, enhancing preclinical research.
Additional Links: PMID-41325805
@article {pmid41325805,
year = {2025},
author = {Zhang, P and Xu, W and Jiang, W and Jin, X and Lou, Y and Yang, T and Li, W and Gao, K and Gao, F and Qian, Z},
title = {Automated ladder rung test for evaluating motor coordination in Parkinson's disease mouse models.},
journal = {Journal of neuroscience methods},
volume = {426},
number = {},
pages = {110642},
doi = {10.1016/j.jneumeth.2025.110642},
pmid = {41325805},
issn = {1872-678X},
abstract = {BACKGROUND: The ladder rung walking test assesses fine motor coordination in Parkinson's disease (PD) mouse models but relies on labor-intensive, subjective manual scoring, necessitating an automated, objective system.
NEW METHOD: We developed a cost-effective automated ladder rung test system with a ladder featuring regular and irregular rung patterns, array through-beam optical sensors for foot-error detection, and an Arduino microcontroller. Custom Python software enables intuitive control, real-time visualization, dynamic sensor mapping, adjustable debounce, and CSV data export.
RESULTS: In an MPTP-induced PD mouse model, the system detected increased foot errors on irregular rungs (5.13 ± 1.04 vs. 1.78 ± 0.69 in controls, p < 0.0001) and longer traversal times (18.04 ± 2.64 s vs. 13.38 ± 1.95 s, p = 0.001), corroborated by open field and rotarod tests and a 68.7 % reduction in substantia nigra neurons.
COMPARISON WITH EXISTING METHODS: Unlike costly camera-based systems requiring complex algorithms, our system uses simple photoelectric sensors and costs approximately 127 USD for all components, achieving 96.4 % precision and 99.3 % recall, making it accessible and user-friendly.
CONCLUSIONS: This automated system offers a reproducible, high-throughput tool for objective motor assessment in PD and neurological models, enhancing preclinical research.},
}
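The ladder rung system above detects foot errors with beam-break sensors and an adjustable debounce, so one slip that interrupts a beam several times in quick succession is counted once. The helper below is a hypothetical sketch of that debounce step (the function name and the 50 ms window are assumed example values, not taken from the paper's software):

```python
def count_foot_errors(event_times, debounce_s=0.05):
    """Collapse sensor triggers closer together than the debounce window
    into a single foot error; the window restarts at each trigger."""
    errors = 0
    last = None
    for t in sorted(event_times):
        if last is None or t - last > debounce_s:
            errors += 1
        last = t
    return errors

# One slip breaking the beam twice in quick succession, then a second slip.
print(count_foot_errors([0.000, 0.012, 0.480]))   # 2
```

Making the window adjustable, as the authors describe, lets the same hardware accommodate different gait speeds without inflating or merging error counts.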
RJR Experience and Expertise
Researcher
Robbins holds BS, MS, and PhD degrees in the life sciences. He served as a tenured faculty member in the Zoology and Biological Science departments at Michigan State University. He is currently exploring the intersection between genomics, microbial ecology, and biodiversity — an area that promises to transform our understanding of the biosphere.
Educator
Robbins has extensive experience in college-level education: At MSU he taught introductory biology, genetics, and population genetics. At JHU, he was an instructor for a special course on biological database design. At FHCRC, he team-taught a graduate-level course on the history of genetics. At Bellevue College he taught medical informatics.
Administrator
Robbins has been involved in science administration at both the federal and the institutional levels. At NSF he was a program officer for database activities in the life sciences, at DOE he was a program officer for information infrastructure in the human genome project. At the Fred Hutchinson Cancer Research Center, he served as a vice president for fifteen years.
Technologist
Robbins has been involved with information technology since writing his first Fortran program as a college student. At NSF he was the first program officer for database activities in the life sciences. At JHU he held an appointment in the CS department and served as director of the informatics core for the Genome Data Base. At the FHCRC he was VP for Information Technology.
Publisher
While still at Michigan State, Robbins started his first publishing venture, founding a small company that addressed the short-run publishing needs of instructors in very large undergraduate classes. For more than 20 years, Robbins has been operating The Electronic Scholarly Publishing Project, a web site dedicated to the digital publishing of critical works in science, especially classical genetics.
Speaker
Robbins is well-known for his speaking abilities and is often called upon to provide keynote or plenary addresses at international meetings. For example, in July, 2012, he gave a well-received keynote address at the Global Biodiversity Informatics Congress, sponsored by GBIF and held in Copenhagen. The slides from that talk can be seen HERE.
Facilitator
Robbins is a skilled meeting facilitator. He prefers a participatory approach, with part of the meeting involving dynamic breakout groups, created by the participants in real time: (1) individuals propose breakout groups; (2) everyone signs up for one (or more) groups; (3) the groups with the most interested parties then meet, with reports from each group presented and discussed in a subsequent plenary session.
Designer
Robbins has been engaged with photography and design since the 1960s, when he worked for a professional photography laboratory. He now prefers digital photography and tools for their precision and reproducibility. He designed his first web site more than 20 years ago and he personally designed and implemented this web site. He engages in graphic design as a hobby.
RJR Picks from Around the Web (updated 11 MAY 2018 )
Old Science
Weird Science
Treating Disease with Fecal Transplantation
Fossils of miniature humans (hobbits) discovered in Indonesia
Paleontology
Dinosaur tail, complete with feathers, found preserved in amber.
Astronomy
Mysterious fast radio burst (FRB) detected in the distant universe.
Big Data & Informatics
Big Data: Buzzword or Big Deal?
Hacking the genome: Identifying anonymized human subjects using publicly available data.