Detection using mask adaptive transformers in unmanned aerial vehicle imagery

Huibiao Ye*, Weiming Fan, Yuping Guo, Xuna Wang, Dalin Zhou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Drone photography is an essential building block of intelligent transportation, enabling wide-ranging monitoring, precise positioning, and rapid transmission. However, the high computational cost of transformer-based object detection methods hinders real-time result transmission in drone target detection applications. We therefore propose the mask adaptive transformer (MAT), tailored for such scenarios. Specifically, we introduce a structure that supports collaborative token sparsification within support windows, enhancing fault tolerance and reducing computational overhead. This structure comprises two modules: a binary mask strategy and adaptive window self-attention (A-WSA). The binary mask strategy focuses attention on significant objects across varied complex scenes. The A-WSA mechanism self-attends to the selected objects and isolates all contextual leakage, balancing performance against computational cost. Extensive experiments on the challenging CarPK and VisDrone datasets demonstrate the effectiveness and superiority of the proposed method. Specifically, it achieves a mean average precision (mAP@0.5) improvement of 1.25% over the car detector based on you only look once version 5 (CD-YOLOv5) on the CarPK dataset and a 3.75% average precision (AP@0.5) improvement over the cascaded zoom-in detector (CZ Det) on the VisDrone dataset.
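
The abstract describes the two modules only at a high level. As a rough illustration of how a binary keep-mask can sparsify tokens inside window self-attention while blocking contextual leakage from dropped tokens, the sketch below (PyTorch) shows one plausible reading; the class name, shapes, and hyperparameters are assumptions for illustration, not the published MAT/A-WSA implementation.

```python
# Minimal, hypothetical sketch: window self-attention gated by a binary token
# mask, in the spirit of the binary mask strategy + A-WSA sketched in the
# abstract. All names and shapes are illustrative assumptions.
import torch
import torch.nn as nn


class MaskedWindowSelfAttention(nn.Module):
    """Self-attention within pre-partitioned windows; masked-out tokens
    neither attend nor are attended to, so they contribute no context."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, keep_mask: torch.Tensor) -> torch.Tensor:
        # x:         (num_windows, tokens_per_window, dim)
        # keep_mask: (num_windows, tokens_per_window), 1 = keep token, 0 = drop
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)           # each (B, heads, N, head_dim)

        attn = (q @ k.transpose(-2, -1)) * self.scale   # (B, heads, N, N)
        # Block attention toward dropped tokens so no context leaks from them.
        attn = attn.masked_fill(keep_mask[:, None, None, :] == 0,
                                torch.finfo(attn.dtype).min)
        attn = attn.softmax(dim=-1)

        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        # Zero the outputs at dropped positions so they propagate nothing.
        out = out * keep_mask[..., None].to(out.dtype)
        return self.proj(out)


# Toy usage: 8 windows of 49 tokens, with roughly half the tokens kept.
if __name__ == "__main__":
    attn = MaskedWindowSelfAttention(dim=96, num_heads=4)
    tokens = torch.randn(8, 49, 96)
    keep = (torch.rand(8, 49) > 0.5).long()
    keep[:, 0] = 1  # guard against fully empty windows in this toy example
    print(attn(tokens, keep).shape)  # torch.Size([8, 49, 96])
```

Here the mask serves both roles attributed to the two modules: it concentrates computation on the retained (significant) tokens and isolates the discarded ones so they leak no context into the attended windows.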

Original language: English
Pages (from-to): 113-120
Number of pages: 8
Journal: Optoelectronics Letters
Volume: 21
Issue number: 2
Early online date: 26 Dec 2024
DOIs
Publication status: Published - 1 Feb 2025
