Title:
Optimized XGBoost for Multimodal Affective State Classification Using In-Ear PPG and Behind-the-Ear EEG Signals.
Authors:
Source:
IEEE journal of biomedical and health informatics [IEEE J Biomed Health Inform] 2026 Mar; Vol. 30 (3), pp. 2139-2152.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Institute of Electrical and Electronics Engineers Country of Publication: United States NLM ID: 101604520 Publication Model: Print Cited Medium: Internet ISSN: 2168-2208 (Electronic) Linking ISSN: 21682194 NLM ISO Abbreviation: IEEE J Biomed Health Inform Subsets: MEDLINE
Imprint Name(s):
Original Publication: New York, NY : Institute of Electrical and Electronics Engineers, 2013-
Entry Date(s):
Date Created: 20250813 Date Completed: 20260307 Latest Revision: 20260309
Update Code:
20260309
DOI:
10.1109/JBHI.2025.3598354
PMID:
40802630
Database:
MEDLINE

Abstract:

*Automated emotion identification from physiological data captured by wearable devices is a growing field, yet traditional electroencephalography (EEG) and photoplethysmography (PPG) collection methods can be uncomfortable. This research introduces a novel ear-worn device that simultaneously captures in-ear PPG and behind-the-ear EEG signals, improving user comfort for emotion recognition. Data were collected from 21 individuals experiencing four emotional states (fear, happiness, calmness, sadness) induced by video stimuli. After signal preprocessing, time- and frequency-domain features were extracted and selected using the ReliefF approach. Classification accuracy was assessed for PPG features, EEG features, and their combination, with the combined feature set yielding superior results. An XGBoost classifier, optimized with Bayesian hyperparameter tuning, achieved 97.58% accuracy, 97.57% precision, 97.57% recall, and a 97.58% F1 score, outperforming support vector machine, decision tree, random forest, and K-nearest-neighbor classifiers. These findings highlight the benefits of multimodal physiological sensing and optimized machine learning for reliable emotion characterization, with implications for mental health monitoring and human-computer interaction.*
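The feature-selection step described in the abstract uses the ReliefF approach, which scores each feature by how well it separates an instance from its nearest neighbors of other classes ("misses") relative to neighbors of the same class ("hits"). The following is a minimal pure-Python sketch of that idea on toy synthetic data, not the authors' implementation; the function name, parameters, and the two-feature example are illustrative assumptions only:

```python
import random

def relieff(X, y, n_neighbors=3):
    """Minimal ReliefF sketch: weight each feature by average
    nearest-miss distance minus average nearest-hit distance."""
    n, d = len(X), len(X[0])
    # Per-feature value ranges, used to normalise differences to [0, 1].
    ranges = []
    for f in range(d):
        vals = [row[f] for row in X]
        ranges.append((max(vals) - min(vals)) or 1.0)
    classes = sorted(set(y))
    priors = {c: y.count(c) / n for c in classes}
    w = [0.0] * d
    for i in range(n):
        # All other instances, sorted by normalised Manhattan distance.
        dists = sorted(
            (sum(abs(X[i][f] - X[j][f]) / ranges[f] for f in range(d)), j)
            for j in range(n) if j != i)
        hits = [j for _, j in dists if y[j] == y[i]][:n_neighbors]
        for f in range(d):
            for j in hits:  # nearby same-class instances penalise the feature
                w[f] -= abs(X[i][f] - X[j][f]) / ranges[f] / (n * n_neighbors)
        for c in classes:
            if c == y[i]:
                continue
            misses = [j for _, j in dists if y[j] == c][:n_neighbors]
            scale = priors[c] / (1.0 - priors[y[i]])  # class-prior weighting
            for f in range(d):
                for j in misses:  # distant other-class instances reward it
                    w[f] += scale * abs(X[i][f] - X[j][f]) / ranges[f] / (n * n_neighbors)
    return w

# Toy data: feature 0 tracks the class label, feature 1 is pure noise.
random.seed(1)
X, y = [], []
for label in (0, 1):
    for _ in range(30):
        X.append([label + random.gauss(0, 0.1), random.random()])
        y.append(label)

scores = relieff(X, y)
```

Under these assumptions, the informative feature (index 0) receives a clearly higher ReliefF score than the noise feature; in the paper's pipeline the top-scoring time- and frequency-domain features would then be passed to the Bayesian-tuned XGBoost classifier, a step omitted here for brevity.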