Advancements in Indian Sign Language Recognition Systems: Enhancing Communication and Accessibility for the Deaf and Hearing Impaired

Authors

  • Arashta Hussain Student, Power Electronics and Instrumentation, Jorhat Institute of Science and Technology, Assam, India
  • Nimakhi Saikia Student, Power Electronics and Instrumentation, Jorhat Institute of Science and Technology, Assam, India
  • Chandana Dev Assistant Professor, Power Electronics and Instrumentation, Jorhat Institute of Science and Technology, Assam, India

DOI:

https://doi.org/10.51983/ajes-2023.12.2.4132

Keywords:

Indian Sign Language, Recognition, Dataset, Techniques

Abstract

Sign language is a visual-gestural language used by deaf and hard-of-hearing individuals. It conveys meaning through a combination of handshapes, motions, facial expressions, and body postures, and it is an essential form of communication with its own grammar and syntax. The communication barrier between those who use sign language and those who do not is substantial: India's deaf and hard-of-hearing (DHH) community numbers approximately 63 million people. Research in sign language recognition (SLR) therefore shows great potential for enhancing the quality of life of individuals with hearing disabilities and for promoting better communication and integration into society. In recent years, sign language recognition has attracted considerable attention owing to its significant role in human-machine interaction, accessibility, real-time interpretation, educational tools, and communication aids. This study reviews the most recent developments in Indian Sign Language Recognition Systems (ISLRS). It discusses the commonly used algorithms, standard datasets, and performance characteristics of these systems in detail. Lastly, it highlights the challenges and future perspectives of these emerging technologies.

References

K. Nimisha and A. Jacob, “A Brief Review of the Recent Trends in Sign Language Recognition,” in 2020 International Conference on Communication and Signal Processing (ICCSP), Chennai, India: IEEE, Jul. 2020, pp. 186-190, doi: 10.1109/ICCSP48568.2020.9182351.

K. Shenoy, T. Dastane, V. Rao, and D. Vyavaharkar, “Real-time Indian Sign Language (ISL) Recognition,” in 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Bangalore: IEEE, Jul. 2018, pp. 1-9, doi: 10.1109/ICCCNT.2018.8493808.

R. R. Verma, A. Konkimalla, A. Thakar, K. Sikka, A. C. Singh, and T. Khanna, “Prevalence of hearing loss in India,” NMJI, vol. 34, pp. 216-222, Jan. 2022, doi: 10.25259/NMJI_66_21.

“With 260 new terms ‘Indian Sign language’ enables communication on banking, bonds and trade,” The Times of India, Sep. 24, 2023. Accessed: Feb. 18, 2024. [Online]. Available: https://timesofindia.indiatimes.com/india/with-260-new-terms-indian-sign-language-enables-communication-on-banking-bonds-and-trade/articleshow/103895684.cms

“Indian Sign Language Research and Training Center (ISLRTC), Government of India”. Accessed: Feb. 18, 2024. [Online]. Available: https://islrtc.nic.in/

N. M. Kakoty and M. D. Sharma, “Recognition of Sign Language Alphabets and Numbers based on Hand Kinematics using A Data Glove,” Procedia Computer Science, vol. 133, pp. 55-62, 2018, doi: 10.1016/j.procs.2018.07.008.

F. Keskin, F. Kıraç, Y. E. Kara, and L. Akarun, “Real Time Hand Pose Estimation Using Depth Sensors,” in Consumer Depth Cameras for Computer Vision: Research Topics and Applications, A. Fossati, J. Gall, H. Grabner, X. Ren, and K. Konolige, Eds., in Advances in Computer Vision and Pattern Recognition., London: Springer, 2013, pp. 119-137, doi: 10.1007/978-1-4471-4640-7_7.

K. Sahoo, “Indian Sign Language Recognition Using Machine Learning Techniques,” Macromolecular Symposia, vol. 397, no. 1, p. 2000241, Jun. 2021, doi: 10.1002/masy.202000241.

J. Bora, S. Dehingia, A. Boruah, A. A. Chetia, and D. Gogoi, “Real-time Assamese Sign Language Recognition using MediaPipe and Deep Learning,” Procedia Computer Science, vol. 218, pp. 1384-1393, 2023, doi: 10.1016/j.procs.2023.01.117.

Z. Ren, J. Yuan, and Z. Zhang, “Robust hand gesture recognition based on finger-earth mover’s distance with a commodity depth camera,” in Proceedings of the 19th ACM international conference on Multimedia, Scottsdale Arizona USA: ACM, Nov. 2011, pp. 1093-1096, doi: 10.1145/2072298.2071946.

Sundar and T. Bagyammal, “American Sign Language Recognition for Alphabets Using MediaPipe and LSTM,” Procedia Computer Science, vol. 215, pp. 642-651, 2022, doi: 10.1016/j.procs.2022.12.066.

K. Li, J. Cheng, Q. Zhang, and J. Liu, “Hand Gesture Tracking and Recognition based Human-Computer Interaction System and Its Applications,” in 2018 IEEE International Conference on Information and Automation (ICIA), Wuyishan, China: IEEE, Aug. 2018, pp. 667-672, doi: 10.1109/ICInfA.2018.8812508.

S. Das, S. Gawde, K. Suratwala, and D. Kalbande, “Sign Language Recognition Using Deep Learning on Custom Processed Static Gesture Images,” in 2018 International Conference on Smart City and Emerging Technology (ICSCET), Mumbai: IEEE, Jan. 2018, pp. 1-6, doi: 10.1109/ICSCET.2018.8537248.

Kamble, “Conversion of Sign Language to Text,” IJRASET, vol. 11, no. 5, pp. 1963-1968, May 2023, doi: 10.22214/ijraset.2023.51981.

U. Bharathi, G. Ragavi, and K. Karthika, “Signtalk: Sign Language to Text and Speech Conversion,” in 2021 International Conference on Advancements in Electrical, Electronics, Communication, Computing and Automation (ICAECA), Coimbatore, India: IEEE, Oct. 2021, pp. 1-4, doi: 10.1109/ICAECA52838.2021.9675751.

T. Kemkar, V. Rai, and B. Verma, “Sign Language to Text Conversion using Hand Gesture Recognition,” in 2023 8th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India: IEEE, Jun. 2023, pp. 1580-1587, doi: 10.1109/ICCES57224.2023.10192820.

Bhat, V. Yadav, V. Dargan, and Yash, “Sign Language to Text Conversion using Deep Learning,” in 2022 3rd International Conference for Emerging Technology (INCET), Belgaum, India: IEEE, May 2022, pp. 1-7, doi: 10.1109/INCET54531.2022.9824885.

V. Adewale and A. Olamiti, “Conversion of Sign Language To Text And Speech Using Machine Learning Techniques,” JRRS, vol. 5, no. 1, Dec. 2018, doi: 10.36108/jrrslasu/8102/50(0170).

M. M. Chandra, S. Rajkumar, and L. S. Kumar, “Sign Languages to Speech Conversion Prototype using the SVM Classifier,” in TENCON 2019 - 2019 IEEE Region 10 Conference (TENCON), Kochi, India: IEEE, Oct. 2019, pp. 1803-1807, doi: 10.1109/TENCON.2019.8929356.

R. A. -, A. N. S. -, M. S. -, and K. A. S. -, “An Efficient Approach for Interpretation of Indian Sign Language using Machine Learning,” IJIRMPS, vol. 11, no. 1, p. 230316, Jan. 2023, doi: 10.37082/IJIRMPS.v11.i1.230316.

A. Sridhar, R. G. Ganesan, P. Kumar, and M. Khapra, “INCLUDE: A Large Scale Dataset for Indian Sign Language Recognition,” in Proceedings of the 28th ACM International Conference on Multimedia, Seattle WA USA: ACM, Oct. 2020, pp. 1366-1375, doi: 10.1145/3394171.3413528.

W. W. Kong and S. Ranganath, “Towards subject independent continuous sign language recognition: A segment and merge approach,” Pattern Recognition, vol. 47, no. 3, pp. 1294-1308, Mar. 2014, doi: 10.1016/j.patcog.2013.09.014.

H.-D. Yang, “Sign Language Recognition with the Kinect Sensor Based on Conditional Random Fields,” Sensors, vol. 15, no. 1, pp. 135-147, Dec. 2014, doi: 10.3390/s150100135.

“Data Gloves for Sign Language Recognition System.” Accessed: Feb. 18, 2024. [Online]. Available: https://www.ijcaonline.org/proceedings/ncetact2015/number1/20979-2009/

P. Kumar, H. Gauba, P. P. Roy, and D. P. Dogra, “Coupled HMM-based multi-sensor data fusion for sign language recognition,” Pattern Recognition Letters, vol. 86, pp. 1-8, Jan. 2017, doi: 10.1016/j.patrec.2016.12.004.

L. Li, S. Jiang, P. B. Shull, and G. Gu, “SkinGest: artificial skin for gesture recognition via filmy stretchable strain sensors,” Advanced Robotics, vol. 32, no. 21, pp. 1112-1121, Nov. 2018, doi: 10.1080/01691864.2018.1490666.

S. Kim, J. Kim, S. Ahn, and Y. Kim, “Finger language recognition based on ensemble artificial neural network learning using armband EMG sensors,” THC, vol. 26, pp. 249-258, May 2018, doi: 10.3233/THC-174602.

R. Gupta and A. Kumar, “Indian sign language recognition using wearable sensors and multi-label classification,” Computers & Electrical Engineering, vol. 90, p. 106898, Mar. 2021, doi: 10.1016/j.compeleceng.2020.106898.

F. S. Botros, A. Phinyomark, and E. J. Scheme, “Electromyography-Based Gesture Recognition: Is It Time to Change Focus From the Forearm to the Wrist?,” IEEE Trans. Ind. Inf., vol. 18, no. 1, pp. 174-184, Jan. 2022, doi: 10.1109/TII.2020.3041618.

R. Wu, S. Seo, L. Ma, J. Bae, and T. Kim, “Full-Fiber Auxetic-Interlaced Yarn Sensor for Sign-Language Translation Glove Assisted by Artificial Neural Network,” Nano-Micro Lett., vol. 14, no. 1, p. 139, Dec. 2022, doi: 10.1007/s40820-022-00887-5.

I. Infantino, R. Rizzo, and S. Gaglio, “A Framework for Sign Language Sentence Recognition by Commonsense Context,” IEEE Trans. Syst., Man, Cybern. C, vol. 37, no. 5, pp. 1034-1039, Sep. 2007, doi: 10.1109/TSMCC.2007.900624.

E.-J. Ong, H. Cooper, N. Pugeault, and R. Bowden, “Sign Language Recognition using Sequential Pattern Trees,” in 2012 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2012, pp. 2200-2207, doi: 10.1109/CVPR.2012.6247928.

A. Sabyrov, M. Mukushev, and V. Kimmelman, “Towards Real-time Sign Language Interpreting Robot: Evaluation of Non-manual Components on Recognition Accuracy,” 2019, pp. 75-82. Accessed: Feb. 18, 2024. [Online]. Available: https://openaccess.thecvf.com/content_CVPRW_2019/html/Augmented_Human_Humancentric_Understanding_and_2D3D_Synthesis/Sabyrov_Towards_Real-time_Sign_Language_Interpreting_Robot_Evaluation_of_Non-manual_Components_CVPRW_2019_paper.html

H. Brock, I. Farag, and K. Nakadai, “Recognition of Non-Manual Content in Continuous Japanese Sign Language,” Sensors, vol. 20, no. 19, p. 5621, Oct. 2020, doi: 10.3390/s20195621.

V. T. Hoang, “HGM-4: A new multi-cameras dataset for hand gesture recognition,” Data in Brief, vol. 30, p. 105676, Jun. 2020, doi: 10.1016/j.dib.2020.105676.

S. Aly and W. Aly, “DeepArSLR: A Novel Signer-Independent Deep Learning Framework for Isolated Arabic Sign Language Gestures Recognition,” IEEE Access, vol. 8, pp. 83199-83212, 2020, doi: 10.1109/ACCESS.2020.2990699.

O. M. Sincan and H. Y. Keles, “AUTSL: A Large Scale Multi-Modal Turkish Sign Language Dataset and Baseline Methods,” IEEE Access, vol. 8, pp. 181340-181355, 2020, doi: 10.1109/ACCESS.2020.3028072.

K. Mistree, D. Thakor, and B. Bhatt, “Towards Indian Sign Language Sentence Recognition using INSIGNVID: Indian Sign Language Video Dataset,” IJACSA, vol. 12, no. 8, 2021, doi: 10.14569/IJACSA.2021.0120881.

“Papers with Code - ISLTranslate: Dataset for Translating Indian Sign Language.” Accessed: Feb. 18, 2024. [Online]. Available: https://paperswithcode.com/paper/isltranslate-dataset-for-translating-indian/

“Hugging Face - The AI community building the future.” Accessed: Feb. 18, 2024. [Online]. Available: https://huggingface.co/datasets

R. Elakkiya and B. Natarajan, “ISL-CSLTR: Indian Sign Language Dataset for Continuous Sign Language Translation and Recognition,” Mendeley Data, vol. 1, Jan. 2021, doi: 10.17632/kcmpdxky7p.1.

S. Teja Mangamuri, L. Jain, and A. Sharmay, “Two Hand Indian Sign Language dataset for benchmarking classification models of Machine Learning,” in 2019 International Conference on Issues and Challenges in Intelligent Computing Techniques (ICICT), Ghaziabad, India: IEEE, Sep. 2019, pp. 1-5, doi: 10.1109/ICICT46931.2019.8977713.

N. Manivasagam, “Gesture Recognition System.” [Online]. Available: https://github.com/nmanivas/Gesture-Recognition-System/tree/master/data%20set/

J. Rekha, J. Bhattacharya, and S. Majumder, “Shape, texture and local movement hand gesture features for Indian Sign Language recognition,” in 3rd International Conference on Trendz in Information Sciences & Computing (TISC2011), Dec. 2011, pp. 30-35, doi: 10.1109/TISC.2011.6169079.

A. Nandy, S. Mondal, J. S. Prasad, P. Chakraborty, and G. C. Nandi, “Recognizing & interpreting Indian Sign Language gesture for Human Robot Interaction,” in 2010 International Conference on Computer and Communication Technology (ICCCT), Sep. 2010, pp. 712-717, doi: 10.1109/ICCCT.2010.5640434.

I. H. Sarker, “Machine Learning: Algorithms, Real-World Applications and Research Directions,” SN COMPUT. SCI., vol. 2, no. 3, p. 160, Mar. 2021, doi: 10.1007/s42979-021-00592-x.

S. Ray, “A Quick Review of Machine Learning Algorithms,” in 2019 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COMITCon), Faridabad, India: IEEE, Feb. 2019, pp. 35-39, doi: 10.1109/COMITCon.2019.8862451.

Jain, “Ridge and Lasso Regression in Python | Complete Tutorial (Updated 2024),” Analytics Vidhya. Accessed: Feb. 18, 2024. [Online]. Available: https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/

Su, S. Ju, Y. Liu, and Z. Yu, “Improving Random Forest and Rotation Forest for highly imbalanced datasets,” IDA, vol. 19, no. 6, pp. 1409-1432, Nov. 2015, doi: 10.3233/IDA-150789.

Atik, R. A. Kut, R. Yilmaz, and D. Birant, “Support Vector Machine Chains with a Novel Tournament Voting,” Electronics, vol. 12, no. 11, p. 2485, May 2023, doi: 10.3390/electronics12112485.

“Top 10 Deep Learning Algorithms in Machine Learning [2024],” ProjectPro. Accessed: Feb. 18, 2024. [Online]. Available: https://www.projectpro.io/article/deep-learning-algorithms/443

A. Wani, I. Joshi, S. Khandve, V. Wagh, and R. Joshi, “Evaluating Deep Learning Approaches for Covid19 Fake News Detection,” vol. 1402, 2021, pp. 153-163. Accessed: Feb. 18, 2024. [Online]. Available: http://arxiv.org/abs/2101.04012

K. Kumar, “Transfer learning with VGG16 and VGG19, the simpler way!,” Medium. Accessed: Feb. 18, 2024. [Online]. Available: https://koushik1102.medium.com/transfer-learning-with-vgg16-and-vgg19-the-simpler-way-ad4eec1e2997

V. Lendave, “A Comparison of 4 Popular Transfer Learning Models,” Analytics India Magazine. Accessed: Feb. 18, 2024. [Online]. Available: https://analyticsindiamag.com/a-comparison-of-4-popular-transfer-learning-models/

S. M. Ahmed, D. S. Raychaudhuri, S. Oymak, and A. K. Roy-Chowdhury, “Chapter 5 - Source distribution weighted multisource domain adaptation without access to source data,” in Handbook of Statistics, vol. 48, V. Govindaraju, A. S. R. Srinivasa Rao, and C. R. Rao, Eds., in Deep Learning, Elsevier, 2023, pp. 81-105, doi: 10.1016/bs.host.2022.12.001.

Published

05-12-2023

How to Cite

Hussain, A., Saikia, N., & Dev, C. (2023). Advancements in Indian Sign Language Recognition Systems: Enhancing Communication and Accessibility for the Deaf and Hearing Impaired. Asian Journal of Electrical Sciences, 12(2), 37–49. https://doi.org/10.51983/ajes-2023.12.2.4132