Abstract
Video anomaly detection is an important task in the field of intelligent security. However, existing methods mainly detect and analyze videos along a single temporal direction, ignoring the semantic information of the video context, which harms detection accuracy. To address this issue, we design a multi-branch generative adversarial network with context learning (MGAN-CL) to detect abnormal events. In particular, we exploit video context information to generate predicted frames and determine whether an anomaly occurs by comparing each predicted frame with the actual frame. Unlike existing GAN-based methods, we also employ the discriminator at the anomaly detection stage to judge the frames produced by the generator, which further improves detection accuracy. To strengthen the discriminator, a pseudo-anomaly module is added to it for data augmentation, improving the robustness of the model. An extensive set of experiments on public datasets demonstrates the method’s superior performance.
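As a rough illustration of the detection idea described above (not the authors' implementation), the sketch below fuses a prediction-error cue with a discriminator judgment into a single anomaly score. It assumes a PyTorch setting; the weighting `alpha`, the PSNR normalization constant, and the fusion rule are illustrative assumptions rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def anomaly_score(pred_frame, real_frame, discriminator, alpha=0.5):
    """Hypothetical fused anomaly score for one frame.

    pred_frame, real_frame: tensors of shape (1, C, H, W) with values in [0, 1].
    discriminator: any module mapping a frame to a realness probability in [0, 1].
    alpha: weight between the prediction-error cue and the discriminator cue
           (an assumption for this sketch, not a value from the paper).
    """
    # Prediction-error cue: PSNR between the predicted and the actual frame.
    mse = F.mse_loss(pred_frame, real_frame)
    psnr = 10.0 * torch.log10(1.0 / (mse + 1e-8))
    # Map PSNR to a rough [0, 1] "normality" score (higher = more normal).
    normality_from_psnr = torch.clamp(psnr / 40.0, 0.0, 1.0)
    # Discriminator cue: how plausible (normal-looking) the predicted frame is.
    normality_from_disc = discriminator(pred_frame).mean()
    # Fuse the two cues; a low fused normality marks the frame as anomalous.
    normality = alpha * normality_from_psnr + (1.0 - alpha) * normality_from_disc
    return 1.0 - normality
```

In this sketch a higher score flags a more anomalous frame; in practice per-video normalization of the scores would typically precede thresholding.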
| Original language | English |
|---|---|
| Number of pages | 12 |
| Journal | IEEE Transactions on Circuits and Systems for Video Technology |
| Volume | 14 |
| Issue number | 8 |
| Early online date | 18 Oct 2023 |
| DOIs | |
| Publication status | Early online - 18 Oct 2023 |
Keywords
- Anomaly detection
- bidirectional prediction
- Feature extraction
- generative adversarial network
- Generative adversarial networks
- Generators
- pseudo-anomaly module
- Task analysis
- Training
- video anomaly detection
- video context information
- Videos