Correlation filter (CF)-based tracking methods have demonstrated excellent performance owing to their dense sampling strategy and computational efficiency. However, CF trackers suffer from several drawbacks. First, the training samples are generated by circular shifts of a single image patch captured from a fixed viewpoint, which makes the tracker less robust to target appearance variation. Second, a CF tracker derives its solution from an image patch centered at the previous target position without considering context information, which is prone to suboptimal solutions. Finally, CF-based trackers cannot handle model degradation caused by false updates and error accumulation. In this paper, we propose a new tracking method based on two calibrated Kinect sensors. We exploit the target appearance from two viewpoints, together with the background context, to reformulate the CF tracker so that it is robust to target appearance variation during tracking. Meanwhile, our tracker maximizes the margin between target and background in a unified CF framework. To prevent model degradation caused by false updates, we propose an adaptive model update strategy that exploits the response distribution and prior tracking results. Extensive experiments demonstrate the effectiveness and robustness of the proposed method against state-of-the-art trackers.
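For readers unfamiliar with the CF formulation referenced above (dense sampling via circular shifts and a closed-form frequency-domain solution), the following is a minimal, hedged sketch of a single-channel MOSSE-style correlation filter in NumPy. It is purely illustrative: the function names, regularization value, and Gaussian response width are assumptions for exposition, not the authors' implementation, which additionally uses two viewpoints, background context, and an adaptive update strategy.

```python
import numpy as np

def gaussian_response(h, w, sigma=2.0):
    # Desired correlation output: a 2-D Gaussian peaked at the patch center.
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_filter(patch, response, lam=1e-2):
    # Closed-form ridge-regression solution in the Fourier domain:
    #   H* = (G ⊙ conj(F)) / (F ⊙ conj(F) + λ)
    # lam is an assumed regularization constant.
    F = np.fft.fft2(patch)
    G = np.fft.fft2(response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, patch):
    # Correlate the stored filter with a new patch and return the peak
    # offset (dy, dx) relative to the patch center.
    resp = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    dy, dx = np.unravel_index(np.argmax(resp), resp.shape)
    h, w = patch.shape
    return dy - h // 2, dx - w // 2

# Toy usage: train on a random patch, then detect a circularly shifted copy.
rng = np.random.default_rng(0)
patch = rng.standard_normal((64, 64))
H = train_filter(patch, gaussian_response(64, 64))
shifted = np.roll(patch, shift=(3, 5), axis=(0, 1))
print(detect(H, shifted))  # peak offset equals the applied shift (3, 5)
```

Because circular shifts in the spatial domain correspond exactly to phase factors in the frequency domain, the response map shifts with the target, which is what makes the dense circulant sampling strategy computationally efficient.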