Emotion classification

Name: emotion
Category: mental state classification
Dataset: Chen2023 (FACED)
Objective: Multiclass classification
Split: Leave-subjects-out

Usage

neuralbench eeg emotion
config.yaml:
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the license found in the
# LICENSE file in the root directory of this source tree.

data:
  study:
    source:
      name: Chen2023Large
    keep_last_30s_of_each_clip:
      name: OffsetEvents
      query: "type == 'Stimulus'"
      start_offset_from_end: -30.0
      end_offset: 0.0
    split:
      name: SklearnSplit
      split_by: subject
      valid_split_ratio: 0.2
      test_split_ratio: 0.2
      valid_random_state: 33
      test_random_state: 33
  target:
    =replace=: true
    name: LabelEncoder
    event_types: Stimulus
    event_field: description
    return_one_hot: true
    aggregation: trigger
  trigger_event_type: Stimulus
  start: 0.0
  duration: 5.0
  stride: 5.0
  summary_columns: [description, video_title, valence]
compute_class_weights: true
brain_model_output_size: &brain_model_output_size 9
trainer_config.monitor: val/bal_acc
trainer_config.mode: max
loss:
  name: CrossEntropyLoss
  kwargs:
    label_smoothing: 0.1
metrics: !!python/object/apply:neuralbench.defaults.metrics.get_classification_metric_configs
  - *brain_model_output_size
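For intuition, the epoching parameters above (`start: 0.0`, `duration: 5.0`, `stride: 5.0`, applied to the 30 s kept from the end of each clip) can be sketched as follows. This is an illustrative helper, not the library's actual windowing code; `window_starts` and `clip_len` are hypothetical names:

```python
def window_starts(clip_len: float, start: float, duration: float, stride: float):
    """Return the onset time of each fixed-length window that fits in the clip."""
    starts = []
    t = start
    while t + duration <= clip_len:
        starts.append(t)
        t += stride
    return starts

# With duration == stride == 5.0, the last 30 s of each stimulus clip
# yield six non-overlapping 5 s windows.
print(window_starts(30.0, 0.0, 5.0, 5.0))  # [0.0, 5.0, 10.0, 15.0, 20.0, 25.0]
```

Because stride equals duration here, windows tile the clip without overlap; a smaller stride would produce overlapping windows.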

Description

This task classifies emotional states from EEG signals. It uses the FACED dataset, which contains EEG recordings from subjects watching videos chosen to elicit specific emotions [Chen2023]. The goal is to assign each EEG segment to one of 9 emotional states: anger, fear, disgust, sadness, neutral, amusement, tenderness, inspiration, and joy.
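The `target` block in the config encodes the stimulus description into a one-hot vector over these 9 classes (`return_one_hot: true`, matching `brain_model_output_size: 9`). A minimal sketch of that mapping, assuming this class list and ordering (the actual `LabelEncoder` may order classes differently):

```python
# Hypothetical class list; the library may derive and order these from the data.
EMOTIONS = ["anger", "fear", "disgust", "sadness", "neutral",
            "amusement", "tenderness", "inspiration", "joy"]

def one_hot(label: str, classes=EMOTIONS):
    """Map an emotion label to a one-hot vector over the 9 classes."""
    vec = [0] * len(classes)
    vec[classes.index(label)] = 1
    return vec

print(one_hot("neutral"))  # a single 1 at the index of "neutral", zeros elsewhere
```

The one-hot targets feed the `CrossEntropyLoss` above, with `label_smoothing: 0.1` softening them and `compute_class_weights: true` compensating for any class imbalance.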

Dataset Notes

  • As in [Ye2024], [Wang2025] and [Ding2025], we perform a leave-subjects-out split of the subjects into train, validation, and test splits. Because every subject saw the same video stimuli, models can learn to recognize specific video stimuli from the EEG data rather than genuinely predicting the corresponding emotion label.
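The leave-subjects-out idea can be sketched as follows: whole subjects (not individual trials) are shuffled with a fixed seed and partitioned, so no subject contributes data to more than one split. This is an illustrative stdlib sketch, not the `SklearnSplit` implementation; the library may, for example, draw the validation ratio from the remainder after the test split rather than from the full set, and `leave_subjects_out` is a hypothetical name:

```python
import random

def leave_subjects_out(subjects, valid_ratio=0.2, test_ratio=0.2, seed=33):
    """Partition subject IDs into disjoint train/valid/test sets."""
    pool = sorted(subjects)
    random.Random(seed).shuffle(pool)  # fixed seed mirrors random_state: 33
    n = len(pool)
    n_test = round(n * test_ratio)
    n_valid = round(n * valid_ratio)
    test = pool[:n_test]
    valid = pool[n_test:n_test + n_valid]
    train = pool[n_test + n_valid:]
    return train, valid, test

# Hypothetical subject IDs for illustration.
train, valid, test = leave_subjects_out([f"sub-{i:03d}" for i in range(100)])
```

Splitting by subject tests cross-subject generalization, which is harder than within-subject splits because EEG varies strongly across individuals.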

References

[Chen2023]

Chen, Jingjing, et al. “A large finer-grained affective computing EEG dataset.” Scientific Data 10.1 (2023): 740.

[Wang2025]

Wang, Jiquan, et al. “CBraMod: A criss-cross brain foundation model for EEG decoding.” arXiv preprint arXiv:2412.07236 (2024).

[Ye2024]

Ye, Weishan, et al. “Semi-supervised dual-stream self-attentive adversarial graph contrastive learning for cross-subject EEG-based emotion recognition.” IEEE Transactions on Affective Computing (2024).

[Ding2025]

Ding, Yi, et al. “EmT: A novel transformer for generalized cross-subject EEG emotion recognition.” IEEE Transactions on Neural Networks and Learning Systems (2025).