dc.description.abstract |
Research has been done on automatically classifying music genres from their sound for
music information retrieval (MIR) systems, music data analysis, and music transcription.
As the amount of music data grows, indexing and retrieving it becomes more
challenging. Previous studies in this area concentrated on classifying, identifying,
predicting, and distinguishing the genres of songs for modern music services. Aquaquam
zema classification is one such music information retrieval task.
Aquaquam zema is one of the traditional forms of education in the Ethiopian Orthodox
Tewahdo Church (EOTC), in which priests perform with a measured sound while dressed
in secular clothes. Because it is a secular art, it is closely related to music and rhythm.
The knowledge gap between modern and traditional education on the zema genre is what
primarily motivates this work, because the majority of students in this traditional school
do not have a complete understanding of the zema genres. To help with this constraint,
we create a model that categorizes aquaquam zema sound signals into their genres.
Aquaquam zema can be categorized into five major kinds: Zimame (ዝማሜ), Qum
(Tsinatsel) (ቁም), Meregd (መረግድ), Tsifat (ጽፋት), and Amelales (አመላለስ).
To build this classifier, we obtained audio data from the Aquaquam bet, recording
performances with smartphones and collecting further material from websites. After data
collection, we preprocess the audio, segment it into predetermined lengths, and convert
each segment into a visual representation, producing spectrogram images. We then build
a model that extracts features from these images and classifies them using a deep
learning approach: a full-featured CNN model with a Softmax classifier. The proposed
model achieves 97.5% training accuracy and 91.76% test accuracy.
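As a rough, non-authoritative sketch of the pipeline described above, the Python code
below segments a recording into fixed-length pieces, converts each piece into a log-mel
spectrogram, and defines a small CNN ending in a Softmax output over the five genres.
The segment length, sampling rate, mel parameters, and layer sizes are illustrative
assumptions, not the exact configuration used in this work.

    # Sketch of the audio -> spectrogram -> CNN pipeline.
    # Assumptions: 22.05 kHz sampling, 5 s segments, 128 mel bands,
    # and a small illustrative CNN (not the thesis's exact architecture).
    import numpy as np
    import librosa
    from tensorflow import keras
    from tensorflow.keras import layers

    GENRES = ["Zimame", "Qum", "Meregd", "Tsifat", "Amelales"]
    SR = 22050          # assumed sampling rate
    SEGMENT_SEC = 5     # assumed fixed segment length

    def audio_to_mel_segments(path):
        """Load a recording, cut it into fixed-length segments, and
        convert each segment into a log-scaled mel spectrogram."""
        y, _ = librosa.load(path, sr=SR)
        seg_len = SR * SEGMENT_SEC
        segments = [y[i:i + seg_len]
                    for i in range(0, len(y) - seg_len + 1, seg_len)]
        mels = []
        for seg in segments:
            m = librosa.feature.melspectrogram(y=seg, sr=SR, n_mels=128)
            mels.append(librosa.power_to_db(m, ref=np.max))
        # Shape: (n_segments, 128 mel bands, time frames, 1 channel)
        return np.stack(mels)[..., np.newaxis]

    def build_cnn(input_shape, n_classes=len(GENRES)):
        """Small CNN with a Softmax output over the five zema genres."""
        model = keras.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dense(n_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

In use, each recording would be expanded into several labeled spectrogram segments,
and the model would be trained on those image-like inputs with integer genre labels.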
Keywords: Aquaquam Zema, Deep Learning, Spectrogram, Feature Extraction,
Classification |
en_US |