Abstract:
The classification of holy pictures, particularly within the Ethiopian Orthodox Tewahedo Church
(EOTC), presents a significant challenge due to the diverse cultural influences and intricate historical
backgrounds shaping these images. This research addresses the pressing need for an accurate and
efficient classification framework capable of distinguishing between authentic and manipulated holy
pictures. Previous methodologies have struggled to capture the nuanced variations in color, object
types, positions, orientations, and border characteristics inherent in these images, leading to
suboptimal classification accuracy. To bridge this gap, we propose a deep learning-based
approach that combines object detection, color feature extraction, and advanced neural
network architectures. Our experimental evaluation, conducted on a
comprehensive dataset of 8,208 holy pictures, demonstrates the efficacy of the proposed
framework. By integrating YOLOv8 object detection with the XceptionV3 model and incorporating
channel attention and color space features, we achieved significant improvements in classification
accuracy. Specifically, our model attained an accuracy of 96.5%, a precision of 97.3%, a recall of
95.7%, and an F1-score of 96.4%. These results underscore the effectiveness of our approach in
accurately classifying holy pictures and distinguishing between authentic and manipulated images.
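To make the two auxiliary feature types named above concrete, the following minimal NumPy sketch illustrates (a) a squeeze-and-excitation-style channel attention gate and (b) a simple per-channel color histogram as a color-space feature. This is an illustrative toy, not the paper's implementation: the random weights, the reduction ratio, and the 8-bin histogram are assumptions for demonstration only.

```python
import numpy as np

def channel_attention(fmap, reduction=4):
    """Squeeze-and-excitation-style gate over a (C, H, W) feature map (toy weights)."""
    c = fmap.shape[0]
    z = fmap.mean(axis=(1, 2))                       # squeeze: global average pool, (C,)
    rng = np.random.default_rng(0)                   # fixed seed: deterministic sketch
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # hypothetical FC weights
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))  # ReLU then sigmoid, (C,)
    return fmap * s[:, None, None]                   # excitation: reweight channels

def color_histogram(rgb, bins=8):
    """Per-channel normalized histogram of a (H, W, 3) uint8 image as a color feature."""
    feats = [np.histogram(rgb[..., k], bins=bins, range=(0, 256), density=True)[0]
             for k in range(3)]
    return np.concatenate(feats)                     # (3 * bins,) feature vector

fmap = np.random.default_rng(1).random((16, 8, 8))
img = (np.random.default_rng(2).random((32, 32, 3)) * 255).astype(np.uint8)
att = channel_attention(fmap)
hist = color_histogram(img)
print(att.shape, hist.shape)   # (16, 8, 8) (24,)
```

In the full framework described above, gates like `channel_attention` would sit inside the classification backbone, and color features like `color_histogram` would be computed on regions cropped by the object detector before fusion; both details here are assumed for illustration.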
Keywords: channel attention; color space; YOLOv8; object detection