Deep CNN and SimpleRNN Architectures for Bi-Handed Arabic Alphabet Hand-Gesture Recognition
This research introduces ArASL-EduNet for bi-handed static Arabic alphabet hand-gesture recognition from RGB images. The framework is built on a curated dataset of 11,200 labeled images covering 56 balanced gesture classes derived from the 28 Arabic letters, with distinct left- and right-hand realizations for each letter. Four deep learning models were evaluated under the same protocol: CNN, MobileNetV2, EfficientNetB0, and SimpleRNN. Results show that the CNN achieves the best performance with 99
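As a concrete illustration of the 56-class classification setup described above, the sketch below builds a small Keras CNN over RGB gesture images. The input resolution (64x64), layer widths, dropout rate, and optimizer are illustrative assumptions, not the paper's exact ArASL-EduNet configuration.

```python
# Hypothetical sketch of a plain CNN for the 56-class bi-handed task.
# Input size, layer widths, and training settings are assumptions; the
# paper does not specify the exact ArASL-EduNet architecture here.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 56  # 28 Arabic letters x {left-hand, right-hand} realizations

def build_cnn(input_shape=(64, 64, 3), num_classes=NUM_CLASSES):
    """Builds a small convolutional classifier for static gesture images."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # regularization; rate is an assumption
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
model.summary()
```

Under the same protocol, MobileNetV2 and EfficientNetB0 could be swapped in via tf.keras.applications as feature extractors topped with the same 56-way softmax head, keeping data splits and metrics identical across all four models.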
