Many automation systems must operate under poor visual conditions, such as autonomous navigation in foggy or underwater environments. Low visibility poses challenges for traditional feature modelling methods, which commonly serve as a key component of such systems, and can thus degrade their overall performance. For example, both the matching precision (matching quality) and the number of successfully identified matches (matching quantity) can drop dramatically in low visibility. The human visual system, on the other hand, robustly identifies visual features despite variations in lighting conditions. Inspired by human knowledge of perceiving visual features, this paper presents a novel feature modelling solution for poor visual conditions. Building on a color-constancy-enhanced illumination alignment, a new concept called Superpixel Flow (SPF) is proposed to model visual features in images. SPF is generated by considering content motion across frame pairs, which makes the resulting superpixels easier to track across frames than classic superpixels. Matching is achieved by a cycle-labelling strategy using a Markov Random Field (MRF) whose energy functions are composed according to human knowledge of comparing visual features. An outlier removal step follows to further improve matching accuracy. Experiments demonstrate competitive performance compared with state-of-the-art approaches.
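As a rough illustration of the general idea (not the paper's exact SPF formulation or cycle-labelling strategy), MRF-based superpixel matching can be sketched as a unary appearance cost plus a pairwise motion-smoothness cost, minimized here with simple iterated conditional modes. All names and the toy energy terms below are assumptions for illustration only:

```python
import numpy as np

def match_superpixels(feat_a, cent_a, feat_b, cent_b, neighbors, lam=1.0, iters=10):
    """Toy MRF matching of superpixels between two frames via ICM.

    Illustrative sketch only. Each superpixel in frame A is assigned a label
    (an index of a superpixel in frame B). The energy combines:
      - unary term: appearance (feature) distance between matched superpixels,
      - pairwise term: neighboring superpixels should move coherently.
    """
    n, m = len(feat_a), len(feat_b)
    # Unary term: appearance distance between every (source, candidate) pair.
    unary = np.linalg.norm(feat_a[:, None, :] - feat_b[None, :, :], axis=2)
    labels = unary.argmin(axis=1)            # initialise by nearest appearance
    disp = cent_b[labels] - cent_a           # displacement implied by labels
    for _ in range(iters):
        changed = False
        for i in range(n):
            # Pairwise term: penalise displacements that disagree with neighbors.
            pair = np.zeros(m)
            for j in neighbors[i]:
                cand_disp = cent_b - cent_a[i]              # (m, 2) candidates
                pair += np.linalg.norm(cand_disp - disp[j], axis=1)
            cost = unary[i] + lam * pair
            best = int(cost.argmin())
            if best != labels[i]:
                labels[i] = best
                disp[i] = cent_b[best] - cent_a[i]
                changed = True
        if not changed:                      # converged: no label updated
            break
    return labels
```

On a toy pair of frames where frame B contains the same superpixels shifted by a constant offset, the smoothness term keeps the recovered correspondences consistent with that common motion.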