Abstract
An uncontrolled fire is a disastrous phenomenon that can cause property and environmental damage and poses a significant risk to human safety; early fire detection is therefore critical. However, conventional smoke detectors are unsuitable for open spaces because detection may be delayed until smoke particles reach the sensors, and wind can prevent detection altogether. One appropriate solution for early warning in open spaces is vision-based fire detection technology incorporated into surveillance cameras, which can rapidly process any area within a camera’s coverage. Modelling the chaotic behaviour and varying appearance of fire under different environmental conditions is, however, a difficult task, so many current vision-based fire detection systems are vulnerable to false alarms and may fail in some situations. These systems vary considerably in complexity; collectively, they employ algorithms that analyse features such as colour, spatial texture, motion, temporal intensity variation and dynamic patterns to detect fire in a video sequence. Their structures nonetheless remain relatively rigid, applying the same processing stages regardless of the inspected scene. The most sophisticated systems cascade many blocks that are necessary only in challenging fire situations, making them more prone to missed detections; the least complex systems achieve higher detection rates but are more likely to raise false alarms on fire-coloured objects that mimic the spatio-temporal behaviour of fire. This thesis therefore focuses on incorporating contextual information into existing computer vision-based fire detection algorithms to improve their detection accuracy in complex environments.
Furthermore, this thesis exploits the concept of a set of fire detection solutions bound together by rules based on the complexity of the site under surveillance to enhance the performance and reliability of the proposed vision-based fire detection system. In particular, this thesis’ novel contributions include: (1) a new colour space for fire pixel representation; (2) a novel colour model that adapts to the illumination complexity of the scene; and (3) an adaptive vision-based fire detection system that automatically adjusts to the complexities of the scene under surveillance.
Following a critical and comprehensive analysis of research literature, a new colour space with a fire colour differentiating property is introduced. The significant advantage of this colour space is its ability to separate fire and non-fire pixels into two classes of intensity. Furthermore, fire and non-fire pixels are clustered into two distinct regions in the new colour space; this provides a robust foundation for fast and efficient fire pixel detection. A qualitative comparison with other colour spaces in the literature suggests that the proposed colour space performs better regarding fire pixel representation.
In addition, this study explored how to identify fire pixels in the new colour space effectively. An adaptive colour model is proposed to address the shortcomings of fixed thresholds and alleviate the problems associated with illumination change. The proposed model automatically classifies an image as low, medium, or high intensity, and the fire colour segmentation threshold is derived dynamically from this classification. Experimental results on benchmark datasets suggest that the proposed colour model outperforms the state-of-the-art colour models.
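The intensity-class idea above can be sketched as follows. This is a minimal illustration only: the abstract does not specify the class boundaries or the per-class thresholds, so the numeric values and function names here are hypothetical placeholders, not the thesis’ actual model.

```python
import numpy as np

def classify_intensity(gray):
    """Classify a grayscale frame as low, medium, or high intensity.

    The band boundaries (85, 170) are illustrative placeholders; the
    thesis derives its own classification, which the abstract omits.
    """
    mean = float(gray.mean())
    if mean < 85:
        return "low"
    elif mean < 170:
        return "medium"
    return "high"

def fire_segmentation_threshold(gray):
    """Derive a segmentation threshold from the intensity class.

    The per-class thresholds below are hypothetical values chosen only
    to show the dynamic-threshold mechanism.
    """
    thresholds = {"low": 120, "medium": 160, "high": 200}
    return thresholds[classify_intensity(gray)]

# Usage: a bright synthetic frame selects the high-intensity threshold.
frame = np.full((4, 4), 200, dtype=np.uint8)
t = fire_segmentation_threshold(frame)
```

The key point is that the threshold is a function of the observed illumination rather than a fixed constant, which is what allows the model to cope with lighting changes.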
Besides colour processing, this thesis captures the fire’s features in the spatial and temporal domains to form an integrated fire detection framework. The model operates with five different process states bound together by embedded rules, which track the background’s colour complexity, the illumination condition, the percentage of moving pixels in the video scene, and the dominant colour of the candidate fire region. These processing blocks are activated according to the proposed embedded rules so that only the relevant set of fire detection solutions is selected. Another contribution of this study is the use of dynamic thresholds in the frame differencing algorithm to detect moving regions; the proposed algorithm sets the threshold for detecting moving pixels dynamically, depending on the noise level in each video frame. Experimental results on the benchmark videos suggest that the proposed system outperforms the state-of-the-art systems.
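The noise-adaptive frame differencing described above might look like the following sketch. The abstract does not state how the noise level is estimated or how the threshold is computed from it, so the median-based estimate and the scale factor `k` here are assumptions for illustration, not the thesis’ exact rule.

```python
import numpy as np

def moving_pixel_mask(prev, curr, k=3.0):
    """Frame differencing with a noise-adaptive threshold.

    The per-frame threshold is k times the median absolute difference
    (a simple, assumed noise estimate); a floor of 1 keeps a noiseless
    frame from flagging every pixel. Returns a boolean motion mask.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    noise = float(np.median(diff))   # frame-level noise estimate
    threshold = max(k * noise, 1.0)  # adapts to each frame's noise
    return diff > threshold

# Usage: two mostly identical frames with one changed pixel.
prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[3, 3] = 100
mask = moving_pixel_mask(prev, curr)
```

Because the threshold is recomputed per frame, a noisy sensor raises the bar for what counts as motion, while a clean frame keeps it low.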
This study advances computer vision-based fire detection knowledge through multiple novel contributions that enhance detection accuracy and reliability. The developed work, experimentation and deductions give comprehensive information about the main techniques used in computer vision-based fire detection systems. This work can be developed further by finding new ways to capture the complexities of video scenes and injecting more contextual information into vision-based fire detection algorithms, to consolidate the detection of real fires and increase the reliability of fire detection systems.
| Date of Award | Mar 2022 |
| --- | --- |
| Original language | English |
| Awarding Institution | |
| Supervisor | Abdsamad Benkrid (Supervisor) & Branislav Vuksanovic (Supervisor) |