TY - GEN
T1 - Audio interval retrieval using convolutional neural networks
AU - Kuzminykh, Ievgeniia
AU - Shevchuk, Dan
AU - Shiaeles, Stavros
AU - Ghita, Bogdan
N1 - Funding Information:
This project has received funding from the European Union Horizon 2020 research and innovation programme under grant agreement no. 833673 and no. 786698.
Publisher Copyright:
© Springer Nature Switzerland AG 2020.
PY - 2020/12/22
Y1 - 2020/12/22
N2 - Modern streaming services are increasingly labeling videos based on their visual or audio content. This typically augments the use of technologies such as AI and ML by allowing the use of natural speech for searching by keywords and video descriptions. Prior research has successfully provided a number of solutions for speech to text, in the case of human speech, but this article aims to investigate possible solutions to retrieve sound events based on a natural language query, and to estimate how effective and accurate they are. In this study, we specifically focus on the YamNet, AlexNet, and ResNet-50 pre-trained models to automatically classify audio samples, using their respective mel spectrograms, into a number of predefined classes. The predefined classes can represent sounds associated with actions within a video fragment. Two tests are conducted to evaluate the performance of the models on two separate problems: audio classification and intervals retrieval based on a natural language query. Results show that the benchmarked models are comparable in terms of performance, with YamNet slightly outperforming the other two models. YamNet was able to classify single fixed-size audio samples with 92.7% accuracy and 68.75% precision, while its average accuracy on intervals retrieval was 71.62% and its precision was 41.95%. The investigated method may be embedded into an automated event marking architecture for streaming services.
AB - Modern streaming services are increasingly labeling videos based on their visual or audio content. This typically augments the use of technologies such as AI and ML by allowing the use of natural speech for searching by keywords and video descriptions. Prior research has successfully provided a number of solutions for speech to text, in the case of human speech, but this article aims to investigate possible solutions to retrieve sound events based on a natural language query, and to estimate how effective and accurate they are. In this study, we specifically focus on the YamNet, AlexNet, and ResNet-50 pre-trained models to automatically classify audio samples, using their respective mel spectrograms, into a number of predefined classes. The predefined classes can represent sounds associated with actions within a video fragment. Two tests are conducted to evaluate the performance of the models on two separate problems: audio classification and intervals retrieval based on a natural language query. Results show that the benchmarked models are comparable in terms of performance, with YamNet slightly outperforming the other two models. YamNet was able to classify single fixed-size audio samples with 92.7% accuracy and 68.75% precision, while its average accuracy on intervals retrieval was 71.62% and its precision was 41.95%. The investigated method may be embedded into an automated event marking architecture for streaming services.
KW - Audio classification
KW - Convolutional neural network
KW - Deep learning
KW - Intervals retrieval
KW - Natural language query
UR - http://www.scopus.com/inward/record.url?scp=85101999585&partnerID=8YFLogxK
UR - http://www.new2an.org/#/
U2 - 10.1007/978-3-030-65726-0_21
DO - 10.1007/978-3-030-65726-0_21
M3 - Conference contribution
AN - SCOPUS:85101999585
SN - 9783030657253
SN - 9783030657260
T3 - Lecture Notes in Computer Science
SP - 229
EP - 240
BT - Internet of Things, Smart Spaces, and Next Generation Networks and Systems - 20th International Conference, NEW2AN 2020 and 13th Conference, ruSMART 2020, Proceedings
A2 - Galinina, Olga
A2 - Andreev, Sergey
A2 - Balandin, Sergey
A2 - Koucheryavy, Yevgeni
PB - Springer
T2 - 20th International Conference on Next Generation Teletraffic and Wired/Wireless Advanced Networks and Systems, NEW2AN 2020 and 13th Conference on the Internet of Things and Smart Spaces, ruSMART 2020
Y2 - 26 August 2020 through 28 August 2020
ER -