
Simple item record

dc.contributor.author: Tanko, Dahiru
dc.contributor.author: Doğan, Şengül
dc.contributor.author: Demir, Fahrettin Burak
dc.contributor.author: Baygın, Mehmet
dc.contributor.author: Şahin, Şakir Engin
dc.contributor.author: Tuncer, Turker
dc.date.accessioned: 2022-03-15T12:22:49Z
dc.date.available: 2022-03-15T12:22:49Z
dc.date.issued: 2022 [en_US]
dc.identifier.citation: Tanko, D., Dogan, S., Demir, F. B., Baygin, M., Sahin, S. E., & Tuncer, T. (2022). Shoelace pattern-based speech emotion recognition of the lecturers in distance education: ShoePat23. Applied Acoustics, 190, 108637. [en_US]
dc.identifier.issn: 0003-682X [en_US]
dc.identifier.uri: https://hdl.handle.net/20.500.12899/643
dc.description.abstract: Background and objective: We are living in the pandemic age, and many educational institutions have shifted to distance education to ensure learning continuity while curtailing the spread of the Covid-19 virus. Automated speech emotion classification models can be used to measure a lecturer's performance during a lecture. Material and method: In this work, we collected a new lecturer speech dataset to detect three emotions: positive, neutral, and negative. The dataset is divided into five-second segments; each segment is treated as one observation, yielding 9541 observations. To classify these emotions automatically, a hand-modeled learning approach with a comprehensive feature extraction method is presented. In the feature extraction, a shoelace-based local feature generator, called the Shoelace Pattern, is introduced. The suggested feature extractor generates features at a low level. To further improve the feature generation capability of the Shoelace Pattern, the tunable Q wavelet transform (TQWT) is used to create sub-bands. The Shoelace Pattern generates features from the raw speech and the sub-bands, and the proposed feature extraction method selects the most suitable feature vectors. The top four feature vectors are selected and merged to obtain the final feature vector. By deploying neighborhood component analysis (NCA), the 512 most informative features are chosen, and these features are classified with a support vector machine (SVM) classifier using 10-fold cross-validation. Results: The proposed learning model based on the Shoelace Pattern (ShoePat23) attained 94.97% and 96.41% classification accuracies on the collected speech databases, respectively. Conclusions: The findings demonstrate the success of ShoePat23 for speech emotion recognition. Moreover, this model has been used in the distance education system to assess the performance of lecturers. [en_US]
dc.language.iso: en [en_US]
dc.publisher: Elsevier Ltd [en_US]
dc.relation.ispartof: Applied Acoustics [en_US]
dc.rights: info:eu-repo/semantics/closedAccess [en_US]
dc.subject: Distance education [en_US]
dc.subject: NCA [en_US]
dc.subject: Shoelace Pattern [en_US]
dc.subject: Speech emotion recognition [en_US]
dc.subject: SVM [en_US]
dc.title: Shoelace pattern-based speech emotion recognition of the lecturers in distance education: ShoePat23 [en_US]
dc.title.alternative: Uzaktan eğitimde öğretim elemanlarının ayakkabı bağı desenine dayalı konuşma duygu tanıma: ShoePat23 [en_US]
dc.type: Article [en_US]
dc.authorid: 0000-0002-3298-0109 [en_US]
dc.department: MTÖ Üniversitesi, Akçadağ Meslek Yüksekokulu, Bilgisayar Teknolojileri Bölümü [en_US]
dc.institutionauthor: Şahin, Şakir Engin
dc.identifier.doi: 10.1016/j.apacoust.2022.108637
dc.identifier.volume: 190 [en_US]
dc.identifier.issue: 108637 [en_US]
dc.identifier.startpage: 1 [en_US]
dc.identifier.endpage: 9 [en_US]
dc.relation.publicationcategory: Makale - Uluslararası Hakemli Dergi - Kurum Öğretim Elemanı (Article - International Refereed Journal - Institutional Faculty Member) [en_US]
dc.identifier.scopus: 2-s2.0-85123631772 [en_US]
dc.identifier.wos: WOS:000807404100009 [en_US]
dc.identifier.wosquality: Q1 [en_US]
dc.indekslendigikaynak: Web of Science [en_US]
dc.indekslendigikaynak: Scopus [en_US]
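
A minimal, hypothetical sketch of the classification back end described in the abstract above: features are reduced with neighborhood component analysis (NCA) and classified with an SVM under 10-fold cross-validation. The original ShoePat23 pipeline (TQWT sub-bands plus Shoelace Pattern features) is not reproduced here; the feature matrix, its dimensions, and all parameter values below are illustrative placeholders, and scikit-learn's NCA projection only approximates the paper's NCA-based selection of the 512 most informative features.

# Sketch only: assumes a precomputed feature matrix X (one row per
# five-second segment) and labels y in {negative, neutral, positive}.
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 64))   # placeholder features (paper: 9541 observations)
y = rng.integers(0, 3, size=400)     # 0 = negative, 1 = neutral, 2 = positive

model = make_pipeline(
    StandardScaler(),
    # Reduce dimensionality with NCA; the paper instead keeps the 512
    # highest-weighted features, so this step is only an approximation.
    NeighborhoodComponentsAnalysis(n_components=32, random_state=0),
    SVC(kernel="rbf", C=1.0),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv)
print(f"10-fold CV accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")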

