94.3% F1-Score: CSAE Boosts Movement Classification from sEMG Signals

Movement classification from surface electromyography (sEMG) signals remains a significant challenge in myoelectric prosthesis control, often limited by individual differences and complex sensor setups. Blagoj Hristov, Zoran Hadzi-Velkov, Katerina Hadzi-Velkova Saneva, Gorjan Nadzinski, and Vesna Ojleska Latkoska, all from University “Ss. Cyril and Methodius”, present a novel deep learning framework utilising Convolutional Sparse Autoencoders (CSAEs) to achieve robust gesture recognition with only two sEMG channels. Their research is significant because it bypasses the need for manual feature engineering and demonstrates exceptional multi-subject accuracy, reaching a 94.3% F1-score on a six-class gesture set. Crucially, a transfer protocol dramatically improves performance on new users, and an incremental learning strategy allows for easy expansion to more complex movement sets, paving the way for more affordable and adaptable prosthetic limbs.

The research team achieved a multi-subject F1-score of 94.3% ±0.3% on a 6-class gesture set, demonstrating the potential for robust and reliable prosthetic control with minimal sensor input.

This breakthrough relies on a Convolutional Sparse Autoencoder (CSAE) which extracts temporal features directly from raw sEMG signals, bypassing the need for complex, manually engineered feature selection. The study introduces a novel transfer learning protocol to overcome challenges posed by inter-subject variability, a common obstacle in myoelectric systems.
Performance on unseen subjects improved dramatically from a baseline of 35.1% ±3.1% to 92.3% ±0.9% using minimal calibration data, a significant advance in adaptability and personalised control. This few-shot learning approach allows the system to quickly adapt to new users, reducing the time and effort required for individual calibration.
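The paper does not publish its calibration code, but the few-shot idea — keep the pretrained feature extractor fixed and fit only a lightweight classifier on a handful of labelled windows from the new user — can be sketched in NumPy. Everything here (the Gaussian stand-in features, the 32-dimensional latent size, five shots per gesture, the gradient-descent softmax head) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for frozen CSAE features: one Gaussian cluster per gesture.
# (Hypothetical: real features would come from the pretrained encoder.)
n_classes, dim, shots = 6, 32, 5
centers = rng.normal(scale=2.0, size=(n_classes, dim))
X_cal = np.concatenate([centers[c] + rng.normal(size=(shots, dim))
                        for c in range(n_classes)])
y_cal = np.repeat(np.arange(n_classes), shots)

# Few-shot calibration: train only a softmax head on 5 windows per
# gesture, leaving the (frozen) feature extractor untouched.
W = np.zeros((dim, n_classes))
one_hot = np.eye(n_classes)[y_cal]
for _ in range(200):                        # a few gradient steps
    logits = X_cal @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.5 * (X_cal.T @ (p - one_hot)) / len(y_cal)

acc = (np.argmax(X_cal @ W, axis=1) == y_cal).mean()
```

Because only the small head is updated, calibration on a new user is fast and needs very little data, which is the essence of the reported few-shot protocol.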

Furthermore, the system exhibits functional extensibility through an incremental learning strategy, expanding to a 10-class gesture set with a 90.0% ±0.2% F1-score without complete model retraining. By leveraging a convolutional architecture and LASSO regularization, the CSAE learns a sparse and robust representation of neuromuscular activity, enabling efficient feature extraction and improved generalisation.

This approach avoids the computational burden of recurrent neural networks, making it suitable for real-time applications and resource-constrained hardware. The research establishes a scalable and efficient pathway towards affordable and adaptive prosthetic systems, combining high precision with minimal computational and sensor requirements.

By learning directly from raw signals, the framework promises to unlock more intuitive and performant control, potentially transforming the lives of individuals relying on prosthetic limbs. This work opens new avenues for developing next-generation prosthetics that are both accessible and responsive to the user’s needs.

Convolutional Sparse Autoencoder training and few-shot transfer learning for gesture decoding show promising results

Scientists developed a deep learning framework for accurate gesture recognition utilising only two surface electromyography (sEMG) channels to address limitations in myoelectric prosthesis control. The research team engineered a Convolutional Sparse Autoencoder (CSAE) to directly extract temporal feature representations from raw sEMG signals, circumventing the need for manual feature engineering.

This innovative approach allows the system to learn directly from the signal, potentially uncovering more robust and expressive features for improved control. Experiments employed a 6-class gesture set to evaluate the model’s performance, achieving a multi-subject F1-score of 94.3% ±0.3%. To mitigate the impact of inter-subject variability, the study pioneered a few-shot transfer learning protocol, improving performance on unseen subjects from a baseline of 35.1% ±3.1% to 92.3% ±0.9% using minimal calibration data.

This transfer protocol enables rapid adaptation to new users, reducing the time and effort required for personalised calibration. The system further demonstrates functional extensibility through an incremental learning strategy, expanding to a 10-class gesture set while maintaining a 90.0% ±0.2% F1-score without complete model retraining.

Researchers harnessed this technique to allow the system to learn new gestures without forgetting previously learned ones, enhancing its adaptability and long-term usability. The CSAE architecture was specifically chosen for its ability to preserve temporal structure within the sEMG signals, unlike standard Fully-Connected Autoencoders which often require complex preprocessing. This method achieves high performance by processing the raw time-series data directly, avoiding information loss associated with transformations into spectrograms or wavelet representations.
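One common way to realise this kind of incremental expansion — and, to be clear, an assumption rather than the authors' documented mechanism — is to keep the trained classifier weights for the original gestures and append freshly initialised output units for the new ones, so earlier knowledge is preserved without full retraining:

```python
import numpy as np

rng = np.random.default_rng(3)

dim, old_classes, new_classes = 32, 6, 4

# Trained softmax head for the original 6-gesture set (the weights
# here are a random stand-in for illustration).
W_old = rng.normal(size=(dim, old_classes))

# Incremental expansion to 10 gestures: keep the learned columns
# intact and append small freshly initialised columns for the new
# gestures, so previously learned mappings are not overwritten.
W_new = np.concatenate(
    [W_old, rng.normal(scale=0.01, size=(dim, new_classes))], axis=1)
```

Only the appended columns (and, optionally, a brief fine-tune) need training on the new gestures, which matches the article's claim of extension without complete model retraining.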

Deep learning and transfer learning enhance multi-class myoelectric prosthesis control by improving accuracy and robustness

Scientists achieved a multi-subject F1-score of 94.3% ±0.3% on a 6-class gesture set using a deep learning framework for myoelectric prosthesis control. The research team developed a Convolutional Sparse Autoencoder (CSAE) to extract temporal features directly from raw surface electromyography (sEMG) signals, bypassing the need for manual feature engineering.

Experiments revealed that the CSAE effectively learns a compact and meaningful representation of the sEMG signal, crucial for real-time multi-movement prosthesis control. To address inter-subject variability, the team presented a transfer learning protocol that improved performance on unseen subjects from a baseline of 35.1% ±3.1% to 92.3% ±0.9% with minimal calibration data.

This adaptation process involved a leave-one-subject-out strategy, dividing data into training, validation, and test sets across eight subjects. Data standardization, performed prior to model training, ensured no information leakage between sets and mimicked realistic prosthesis control conditions. The system demonstrated functional extensibility through an incremental strategy, achieving a 90.0% ±0.2% F1-score on a 10-class gesture set without complete model retraining.
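The leave-one-subject-out split and leakage-free standardization described above can be sketched as follows; the subject count (eight) matches the study, but the window counts and random data are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 8 subjects, 30 windows each, with
# 1000 time samples x 2 channels per window (250 ms at 4 kHz).
n_subjects, n_windows = 8, 30
data = {s: rng.normal(loc=s * 0.1, scale=1.0,
                      size=(n_windows, 1000, 2))
        for s in range(n_subjects)}

def loso_split(data, test_subject):
    """Leave-one-subject-out: train on all subjects except one."""
    train = np.concatenate(
        [x for s, x in data.items() if s != test_subject])
    return train, data[test_subject]

train, test = loso_split(data, test_subject=7)

# Standardize with statistics from the TRAINING subjects only, so no
# information from the held-out subject leaks into the scaler.
mu = train.mean(axis=(0, 1), keepdims=True)
sigma = train.std(axis=(0, 1), keepdims=True)
train_std = (train - mu) / sigma
test_std = (test - mu) / sigma   # same train-derived transform
```

Applying the train-derived mean and scale to the held-out subject mimics deployment, where a prosthesis cannot peek at a new user's full data distribution in advance.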

The CSAE architecture, comprising a symmetrical encoder-decoder pair, processes sEMG input windows represented as a matrix X ∈ ℝ^(T×C), where T = 1000 time samples are collected over 250 ms at a 4 kHz sampling rate using only two channels (C = 2). The encoder compresses the signal into a latent representation Z, aiming for information preservation, unsupervised learnability, and sparsity.

Researchers utilized strided convolutions for learnable downsampling within the encoder, preserving temporal structure and improving computational efficiency. The objective function encourages sparsity by applying an L1 penalty to activations in the bottleneck layer, promoting a disentangled and efficient representation of the input data. Measurements confirm the model’s ability to learn a robust feature representation from just two sEMG channels, paving the way for affordable and adaptive prosthetic systems.
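A minimal NumPy illustration of the two ingredients named here — strided convolution as learnable downsampling and an L1 penalty on bottleneck activations — is given below. The kernel size, stride, filter count, ReLU nonlinearity, and placeholder decoder output are all assumptions for the sketch, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def strided_conv1d(x, w, stride):
    """Valid 1D convolution with stride: learnable downsampling.
    x: (T, C_in), w: (K, C_in, C_out) -> output (T_out, C_out)."""
    K, C_in, C_out = w.shape
    T_out = (x.shape[0] - K) // stride + 1
    out = np.empty((T_out, C_out))
    for t in range(T_out):
        window = x[t * stride : t * stride + K]       # (K, C_in)
        out[t] = np.tensordot(window, w, axes=([0, 1], [0, 1]))
    return out

# Toy window matching the paper's input shape: 1000 samples, 2 channels.
x = rng.normal(size=(1000, 2))
w = rng.normal(scale=0.1, size=(8, 2, 16))   # kernel 8, 16 filters (assumed)
z = np.maximum(strided_conv1d(x, w, stride=4), 0.0)   # ReLU bottleneck

# Sparsity-regularized objective: reconstruction error plus an L1
# penalty on the bottleneck activations. The decoder output x_hat is
# a random placeholder here; a real CSAE would reconstruct x from z.
lam = 1e-3
x_hat = rng.normal(size=x.shape)
loss = np.mean((x - x_hat) ** 2) + lam * np.abs(z).sum()
```

The stride replaces pooling with a learned downsampling step, and the L1 term drives many bottleneck activations toward zero, which is what yields the sparse, disentangled representation the article describes.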

Two-channel sEMG control achieves high accuracy and adaptability via deep learning, offering intuitive prosthetic control

Scientists have developed a deep learning framework for accurate gesture recognition using only two surface electromyography (sEMG) channels, addressing a key limitation in myoelectric prosthesis control. The proposed method employs a Convolutional Sparse Autoencoder (CSAE) to directly extract temporal features from raw signals, removing the need for manual feature engineering and simplifying the process.

This framework achieved a multi-subject F1-score of 94.3% on a six-class gesture set, demonstrating high precision with minimal sensor requirements. Researchers also introduced a transfer protocol that significantly improved performance on new users, raising the F1-score from 35.1% to 92.3% with limited calibration data.

The system’s functional extensibility was confirmed through an incremental learning strategy, successfully expanding to a ten-class gesture set with a 90.0% F1-score without complete model retraining. These findings challenge the current industry trend of relying on high-density sensor arrays, suggesting a viable path towards more affordable and adaptive prosthetic systems.

The authors acknowledge that the study was initially conducted with able-bodied subjects, and further validation with a larger and more diverse population, including amputees, is necessary to confirm clinical relevance. Future research will focus on improving the system’s robustness against real-world challenges like muscle fatigue and electrode shift, factors known to impact performance.

Additionally, extending the system’s capabilities to enable continuous, proportional control represents a significant step towards creating more natural and intuitive neuroprosthetic devices. The publicly available dataset used in the study facilitates further investigation and development in this field.

👉 More information
🗞 Leveraging Convolutional Sparse Autoencoders for Robust Movement Classification from Low-Density sEMG
🧠 ArXiv: https://arxiv.org/abs/2601.23011

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
