PSL SignBank: A Multimodal Machine Readable Dictionary for Pakistan Sign Language
DOI: https://doi.org/10.21015/vtse.v14i1.2246

Abstract
The deaf community in Pakistan faces significant communication barriers due to the absence of standardized, machine-readable resources for Pakistani Sign Language (PSL). To address this challenge, SignBank for Pakistani Sign Language (PSL SignBank) has been developed as a machine-readable dictionary to preserve and promote PSL. The corpus covers 300 commonly used English words; each word is translated into Urdu and encoded in HamNoSys, a language-independent phonetic transcription system. Each dictionary entry integrates multiple modalities: the English word, its Urdu translation, a HamNoSys vector representation, a human signer video, and an avatar-generated animation rendered via SiGML (Signing Gesture Markup Language). The corpus was developed through systematic video recording with deaf participants from multiple institutions, after which a team of three sign language experts and two interpreters verified gestural accuracy in the hand-shape, movement, and location parameters of each sign. Compared to traditional video-based dictionaries, PSL SignBank achieves approximately 95% storage reduction, with HamNoSys notation requiring around 1 KB per sign versus roughly 1 MB for video, and it supports scalable sentence-level translation through the concatenation of machine-readable notations. The avatar-based rendering system was validated against human signer videos, confirming accurate gesture reproduction for both static and dynamic signs. This work establishes foundational infrastructure for computational PSL applications, including text-to-sign translation systems, sign language recognition models, and educational platforms. PSL SignBank represents a critical advance toward accessibility, digital inclusion, and empowerment of Pakistan's deaf community, and it provides a replicable framework for documenting under-resourced sign languages globally.
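The multimodal entry structure and the concatenation-based sentence translation described above can be sketched as follows. This is a minimal illustrative sketch, not the published schema: the field names, the lookup function, and the placeholder HamNoSys strings are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# Hypothetical shape of one PSL SignBank entry; field names are
# illustrative assumptions, not the paper's actual data model.
@dataclass
class SignEntry:
    english: str      # English headword
    urdu: str         # Urdu translation
    hamnosys: str     # HamNoSys transcription (~1 KB vs ~1 MB for video)
    video_path: str   # human signer recording
    sigml_path: str   # SiGML file driving the avatar animation

def concatenate_signs(lexicon: dict[str, SignEntry], words: list[str]) -> list[str]:
    """Sentence-level translation by concatenating per-word notations,
    skipping words absent from the lexicon."""
    return [lexicon[w].hamnosys for w in words if w in lexicon]

# Toy lexicon with placeholder (not real) HamNoSys strings.
lexicon = {
    "water": SignEntry("water", "پانی", "hamflathand,hammoveu", "water.mp4", "water.sigml"),
    "drink": SignEntry("drink", "پینا", "hamfist,hammovei", "drink.mp4", "drink.sigml"),
}
print(concatenate_signs(lexicon, ["drink", "water"]))
```

Because each notation is a short text string rather than a video clip, assembling a sentence is a cheap list operation, which is what makes the reported storage savings and scalable sentence-level rendering possible.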