A Comprehensive Review of Sign Language Translation for Inclusive Education Systems

Authors

  • Krishna Parmar, Department of Computer Engineering (Software Engineering), Ipcowala Institute of Engineering & Technology, Dharmaj, Gujarat Technological University, Gujarat, India
  • Prof. Zeel Nakum, Assistant Professor, Department of Computer Engineering, Ipcowala Institute of Engineering & Technology, Dharmaj, Gujarat Technological University, Gujarat, India
  • Dr. Padiya Swity, Professor, Department of Computer Engineering, Ipcowala Institute of Engineering & Technology, Dharmaj, Gujarat Technological University, Gujarat, India

DOI:

https://doi.org/10.32628/CSEIT261219

Keywords:

Sign language translation, Inclusive education, Deep learning, Computer vision, Assistive technology

Abstract

Sign Language Translation (SLT) has emerged as a critical technological enabler for inclusive education systems, aiming to bridge communication gaps between hearing-impaired learners and mainstream educational environments. Recent advancements in computer vision, deep learning, and artificial intelligence have significantly improved the accuracy, robustness, and real-time feasibility of sign language recognition and translation systems. These systems increasingly support isolated signs, continuous sign language, multilingual alphabets, and sentence-level translation, making them suitable for classroom integration, e-learning platforms, and assistive educational tools. This review paper presents a comprehensive analysis of recent research on sign language translation, with a particular focus on methods applicable to inclusive education systems. The paper systematically examines state-of-the-art techniques, including YOLO-based detection, CNN–LSTM hybrids, graph convolutional networks, attention mechanisms, and optimization-driven feature reduction approaches. Key research findings, challenges, and limitations are critically discussed, highlighting gaps related to scalability, dataset diversity, real-time deployment, and educational usability. By synthesizing current trends and insights, this review aims to guide researchers and educators toward the development of more effective, accessible, and learner-centered sign language translation systems for inclusive education.
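The CNN–LSTM hybrids surveyed in this review pair a convolutional front end with recurrent temporal modelling. As a rough illustration only (not code from any reviewed paper), the sketch below runs a single NumPy LSTM cell over a sequence of per-frame feature vectors — stand-ins for CNN or hand-landmark embeddings — and classifies the final hidden state with a softmax layer; all dimensions and random weights are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(frames, W, U, b, hidden):
    """Run one LSTM layer over a (T, D) sequence of frame features."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in frames:
        z = W @ x + U @ h + b                  # all four gates in one affine map
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                      # update cell state
        h = o * np.tanh(c)                     # new hidden state
    return h

T, D, H, CLASSES = 30, 64, 32, 10              # 30 frames, 64-dim features, 10 signs
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
Wc = rng.normal(0, 0.1, (CLASSES, H))          # softmax classifier weights

frames = rng.normal(size=(T, D))               # stand-in for per-frame CNN features
h_last = lstm_forward(frames, W, U, b, H)
logits = Wc @ h_last
probs = np.exp(logits - logits.max())
probs /= probs.sum()                           # class probabilities over sign labels
```

In a full recognizer, the random `frames` would be replaced by learned per-frame embeddings and the weights trained end-to-end; attention mechanisms, as discussed in the review, typically replace the "last hidden state" readout with a weighted combination over all time steps.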

References

X. Li, C. Jettanasen, and P. Chiradeja, “Exploration of Sign Language Recognition Methods Based on Improved YOLOv5s,” Computation, vol. 13, no. 3, 2025, doi: 10.3390/computation13030059.

J. Wang, I. Ivrissimtzis, Z. Li, and L. Shi, “Hand gesture recognition for user-defined textual inputs and gestures,” Universal Access in the Information Society, vol. 24, no. 2, pp. 1315–1329, Jun. 2025, doi: 10.1007/s10209-024-01139-6.

Y. Han, Y. Han, and Q. Jiang, “A Study on the STGCN-LSTM Sign Language Recognition Model Based on Phonological Features of Sign Language,” IEEE Access, vol. 13, pp. 74807–74816, 2025, doi: 10.1109/ACCESS.2025.3560779.

R. Goel, S. Bansal, and K. Gupta, “Improved feature reduction framework for sign language recognition using autoencoders and adaptive Grey Wolf Optimization,” Scientific Reports, vol. 15, no. 1, pp. 1–16, 2025, doi: 10.1038/s41598-024-82785-x.

N. Navin et al., “Bilingual Sign Language Recognition: A YOLOv11-Based Model for Bangla and English Alphabets,” Journal of Imaging, vol. 11, no. 5, pp. 1–22, 2025, doi: 10.3390/jimaging11050134.

P. Chiradeja, Y. Liang, and C. Jettanasen, “Sign Language Sentence Recognition Using Hybrid Graph Embedding and Adaptive Convolutional Networks,” Applied Sciences, vol. 15, no. 6, 2025, doi: 10.3390/app15062957.

Z. Wang, D. Li, R. Jiang, and M. Okumura, “Continuous Sign Language Recognition With Multi-Scale Spatial-Temporal Feature Enhancement,” IEEE Access, vol. 13, pp. 5491–5506, 2025, doi: 10.1109/ACCESS.2025.3526330.

N. Fox, B. Woll, and K. Cormier, “Best practices for sign language technology research,” Universal Access in the Information Society, vol. 24, no. 1, pp. 69–77, Mar. 2025, doi: 10.1007/s10209-023-01039-1.

F. M. Najib, “Sign language interpretation using machine learning and artificial intelligence,” Neural Computing and Applications, vol. 37, no. 2, pp. 841–857, 2025, doi: 10.1007/s00521-024-10395-9.

A. Khan et al., “Deep Learning Approaches for Continuous Sign Language Recognition: A Comprehensive Review,” IEEE Access, vol. 13, pp. 55524–55544, 2025, doi: 10.1109/ACCESS.2025.3554046.

M. Alaftekin, I. Pacal, and K. Cicek, “Real-time sign language recognition based on YOLO algorithm,” Neural Computing and Applications, vol. 36, no. 14, pp. 7609–7624, 2024, doi: 10.1007/s00521-024-09503-6.

D. R. Kothadiya, C. M. Bhatt, H. Kharwa, and F. Albu, “Hybrid InceptionNet Based Enhanced Architecture for Isolated Sign Language Recognition,” IEEE Access, vol. 12, pp. 90889–90899, 2024, doi: 10.1109/ACCESS.2024.3420776.

Madyanto, R. Kurniawan, and Y. A. Wijaya, “YOLOv8 Algorithm to Improve the Sign Language Letter Detection System Model,” Journal of Artificial Intelligence and Engineering Applications (JAIEA), vol. 4, no. 2, pp. 1379–1385, Feb. 2025, doi: 10.59934/jaiea.v4i2.912.

T. H. Noor et al., “Real-Time Arabic Sign Language Recognition Using a Hybrid Deep Learning Model,” Sensors, vol. 24, no. 11, p. 3683, Jun. 2024, doi: 10.3390/s24113683.

D. Kumari and R. S. Anand, “Isolated Video-Based Sign Language Recognition Using a Hybrid CNN-LSTM Framework Based on Attention Mechanism,” Electronics, vol. 13, no. 7, p. 1229, Mar. 2024, doi: 10.3390/electronics13071229.

S. T. Abd Al-Latief, S. Yussof, A. Ahmad, S. M. Khadim, and R. A. Abdulhasan, “Instant Sign Language Recognition by WAR Strategy Algorithm Based Tuned Machine Learning,” International Journal of Networked and Distributed Computing, vol. 12, no. 2, pp. 344–361, 2024, doi: 10.1007/s44227-024-00039-8.

A. S. M. Miah, M. A. M. Hasan, S. Nishimura, and J. Shin, “Sign Language Recognition Using Graph and General Deep Neural Network Based on Large Scale Dataset,” IEEE Access, vol. 12, pp. 34553–34569, 2024, doi: 10.1109/ACCESS.2024.3372425.

A. Baihan, A. I. Alutaibi, M. Alshehri, and S. K. Sharma, “Sign language recognition using modified deep learning network and hybrid optimization: a hybrid optimizer (HO) based optimized CNNSa-LSTM approach,” Scientific Reports, vol. 14, no. 1, 2024, doi: 10.1038/s41598-024-76174-7.

J. Zhang, X. Bu, Y. Wang, H. Dong, Y. Zhang, and H. Wu, “Sign language recognition based on dual-path background erasure convolutional neural network,” Scientific Reports, vol. 14, no. 1, pp. 1–12, 2024, doi: 10.1038/s41598-024-62008-z.

T. Dongare, G. Nawani, A. Deshpande, A. Shaikh, and D. Javale, “ISL Fingerspelling Image Dataset,” IEEE Dataport, Jul. 28, 2025, doi: 10.21227/796w-a432.

Published

15-01-2026

Issue

Vol. 12 No. 1 (2026)

Section

Research Articles

How to Cite

[1] Krishna Parmar, Prof. Zeel Nakum, and Dr. Padiya Swity, “A Comprehensive Review of Sign Language Translation for Inclusive Education Systems”, Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol, vol. 12, no. 1, pp. 147–152, Jan. 2026, doi: 10.32628/CSEIT261219.