Many techniques in computer vision and elsewhere rely on features that are defined at a particular scale. Furthermore, scale invariance is a well-known desirable property of machine learning and other intelligent systems. This work is concerned with the development of techniques that are scale invariant, primarily through the use of multiscale representations. Features can be computed at a number of scales and then combined in a way that makes further processing possible, such as feature extraction and/or machine learning. This work has investigated multiple approaches to multiscale feature extraction, including in 3D voxel spaces, both to describe volumetric imaging data and to enable ready comparison between imaging modalities with highly different image scales, namely Scanning Electron Microscopy (SEM) and X-ray Computed Tomography (XCT). Another approach investigated automated feature learning and extraction from video sequences depicting actions, using deep learning techniques. This has improved the recognition of actions in these sequences.
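
As a minimal illustration of the general idea (not the specific pipelines developed in this work), features can be computed over a Gaussian scale space and concatenated per pixel before being passed to a learner. The sketch below assumes a 2D greyscale image and uses SciPy's gaussian_filter; the feature choices (smoothed intensity and gradient magnitude) and the set of scales are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_features(image, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Compute simple per-pixel features at several Gaussian scales
    and concatenate them into one feature vector per pixel."""
    features = []
    for sigma in sigmas:
        # Smooth the image at this scale.
        smoothed = gaussian_filter(image.astype(float), sigma=sigma)
        # Gradient magnitude at this scale as an example second feature.
        gy, gx = np.gradient(smoothed)
        grad_mag = np.sqrt(gx ** 2 + gy ** 2)
        features.append(smoothed)
        features.append(grad_mag)
    # Stack along the last axis: shape (H, W, 2 * len(sigmas)).
    return np.stack(features, axis=-1)

if __name__ == "__main__":
    img = np.random.rand(64, 64)          # placeholder image
    feats = multiscale_features(img)
    print(feats.shape)                    # (64, 64, 8)
```

The stacked multiscale feature volume can then be fed to any downstream stage, for example a per-pixel classifier or a further feature-extraction step; the same pattern extends to 3D voxel data by smoothing and differentiating along three axes.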