Recognizing the activities being undertaken in the home is a key component of enhancing the home environment for assisted living. Our previous work achieved some success with conventional machine learning approaches, but the large volume of data involved in recognizing activities in video, together with the more accurate deep learning approaches, demands substantially greater computing resources to devise and train appropriate architectures. For video content analysis, convolutional neural networks (CNNs) typically have in excess of 100 million parameters to be learned. Given the time and space complexity involved, modern high-performance GPUs are required to train such models, and we have quickly outgrown our own available computing resources.
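To give a sense of where those parameter counts come from, the following is an illustrative sketch only: a hypothetical 3D CNN for short video clips, with layer shapes we have assumed for illustration (loosely in the style of early video CNNs such as C3D), not any specific architecture from our work. Counting the weights shows how the fully connected layers in particular push the total well past 100 million.

```python
# Illustrative sketch (assumed layer shapes, not our actual architecture):
# parameter counts for a hypothetical 3D CNN on short video clips.

def conv3d_params(in_ch, out_ch, k):
    """Weights + biases for a 3D convolution with a k x k x k kernel."""
    return out_ch * (in_ch * k ** 3 + 1)

def dense_params(in_units, out_units):
    """Weights + biases for a fully connected layer."""
    return out_units * (in_units + 1)

# A small stack of 3D convolutions followed by large dense layers.
total = (
    conv3d_params(3, 64, 3)        # RGB clip input -> 64 channels
    + conv3d_params(64, 128, 3)
    + conv3d_params(128, 256, 3)
    + conv3d_params(256, 512, 3)
    + dense_params(512 * 4 * 7 * 7, 4096)  # flattened features -> 4096 units
    + dense_params(4096, 4096)
    + dense_params(4096, 10)       # e.g. 10 activity classes
)
print(f"total parameters: {total:,}")
```

Even in this modest sketch, the first fully connected layer alone contributes over 400 million parameters, which is why training such models is impractical without high-performance GPUs.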
To further investigate new and more accurate ways of recognizing activities in a home setting, we plan to make use of our specialist research facility, the Port Echo house, which includes multi-view video in each room. We aim to develop more accurate multi-dimensional CNN models that process live video feeds from these multiple cameras to help recognize day-to-day living activities and to detect potentially dangerous events such as falls.
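One simple way to combine predictions from multiple cameras viewing the same room is late fusion: averaging each camera's per-class scores before selecting the most likely activity. The sketch below is an assumption for illustration only (the activity labels, score values, and fusion rule are hypothetical, not our chosen design), showing how a clear view can outvote an occluded one when detecting a fall.

```python
# Hedged sketch (hypothetical labels and fusion rule, not our final design):
# late fusion of per-camera activity scores by simple averaging.

ACTIVITIES = ["sitting", "walking", "cooking", "fall"]  # illustrative labels

def fuse_views(per_camera_scores):
    """Average class-score vectors from several cameras viewing the same room."""
    n_views = len(per_camera_scores)
    return [sum(scores[i] for scores in per_camera_scores) / n_views
            for i in range(len(ACTIVITIES))]

def predict(per_camera_scores):
    """Return the activity label with the highest fused score."""
    fused = fuse_views(per_camera_scores)
    return ACTIVITIES[max(range(len(fused)), key=fused.__getitem__)]

# Two cameras: one view is occluded and uncertain, the other clearly sees a fall.
view_a = [0.30, 0.30, 0.25, 0.15]   # occluded, indecisive view
view_b = [0.05, 0.05, 0.05, 0.85]   # clear view of the fall
print(predict([view_a, view_b]))    # fused prediction across both views
```

More sophisticated fusion (e.g. learning view weights, or fusing features rather than scores inside the CNN) is one of the directions such multi-view models could take; the averaging rule here is just the simplest baseline.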
Many nations are facing an explosion of long-term health conditions, many of which must be managed, often for years, outside a hospital setting. Recognizing human daily living activities in a home environment could therefore have one of the biggest impacts on improving health provision in the home.