*Result*: Optimized XGBoost for Multimodal Affective State Classification Using In-Ear PPG and Behind-the-Ear EEG Signals.
*Further Information*
*Automated emotion recognition from physiological signals captured by wearable devices is a growing field, yet conventional electroencephalography (EEG) and photoplethysmography (PPG) acquisition setups can be uncomfortable to wear. This research introduces a novel wearable device that captures in-ear PPG and behind-the-ear EEG signals to improve user comfort during emotion recognition. Data were collected from 21 participants experiencing four emotional states (fear, happiness, calmness, sadness) induced by video stimuli. After signal preprocessing, time- and frequency-domain features were extracted and selected using the ReliefF approach. Classification accuracy was assessed for PPG features, EEG features, and their combination, with the combined feature set yielding superior results. An XGBoost classifier, optimized with Bayesian hyperparameter tuning, achieved 97.58% accuracy, 97.57% precision, 97.57% recall, and a 97.58% F1 score, outperforming support vector machine, decision tree, random forest, and K-nearest neighbor classifiers. These findings highlight the benefits of multimodal physiological sensing and optimized machine learning for reliable emotion characterization, with implications for mental health monitoring and human-computer interaction.*
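The pipeline described above (feature fusion, ReliefF selection, tuned gradient-boosted classification) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it uses synthetic data in place of the fused PPG+EEG features, scikit-learn's `mutual_info_classif` as a stand-in for ReliefF, `GradientBoostingClassifier` as a stand-in for XGBoost, and randomized search in place of Bayesian hyperparameter tuning.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.pipeline import Pipeline

# Synthetic stand-in for fused PPG+EEG feature vectors with 4 emotion classes
# (fear, happiness, calmness, sadness).
X, y = make_classification(n_samples=400, n_features=40, n_informative=12,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Feature selection (mutual information as a ReliefF stand-in) followed by a
# gradient-boosted classifier (stand-in for XGBoost).
pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=20)),
    ("clf", GradientBoostingClassifier(random_state=0)),
])

# Hyperparameter search over the classifier (randomized here; the paper uses
# Bayesian optimization).
search = RandomizedSearchCV(
    pipe,
    param_distributions={
        "clf__n_estimators": [50, 100, 200],
        "clf__learning_rate": [0.05, 0.1, 0.2],
        "clf__max_depth": [2, 3, 4],
    },
    n_iter=5, cv=3, random_state=0,
)
search.fit(X_tr, y_tr)
acc = accuracy_score(y_te, search.predict(X_te))
print(f"held-out accuracy: {acc:.3f}")
```

Swapping in `xgboost.XGBClassifier` and a Bayesian optimizer (e.g., Optuna or scikit-optimize) recovers the configuration reported in the paper; the pipeline structure is otherwise unchanged.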