Let's now see how we can enhance our application by leveraging the Kinect sensor's Natural User Interface (NUI) capabilities.
We will implement a manager that uses the skeleton data to interpret a body motion or posture and translate it into an action such as "click". Similarly, we could define other actions such as "zoom in". Unfortunately, the Kinect for Windows SDK does not provide APIs for recognizing gestures, so we need to develop our own gesture recognition engine.
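As an illustration of what such a manager could look like, the following minimal sketch tracks one joint of the skeleton across recent frames and fires a "click" action when it detects a push toward the sensor. The joint name, the frame format, the thresholds, and the class itself are assumptions made for the example; they are not part of the Kinect for Windows SDK.

```python
from collections import deque

# Hypothetical gesture manager: it watches the right-hand joint across recent
# skeleton frames and emits a "click" action when the hand pushes toward the
# sensor (its z distance shrinks quickly). Joint names, thresholds, and the
# skeleton format are illustrative, not actual Kinect SDK types.

PUSH_DISTANCE = 0.15   # metres the hand must travel toward the sensor
WINDOW_FRAMES = 10     # recent frames to inspect (about 1/3 s at 30 fps)

class GestureManager:
    def __init__(self):
        self.hand_z = deque(maxlen=WINDOW_FRAMES)
        self.actions = []          # recognized actions, newest last

    def update(self, skeleton):
        """skeleton: dict mapping joint name -> (x, y, z) in metres."""
        self.hand_z.append(skeleton["HandRight"][2])
        if len(self.hand_z) == WINDOW_FRAMES:
            # If the hand ended the window much closer to the sensor than
            # it started, interpret the motion as a push, i.e. a "click".
            if self.hand_z[0] - self.hand_z[-1] >= PUSH_DISTANCE:
                self.actions.append("click")
                self.hand_z.clear()  # avoid firing twice for one push

# Feed synthetic frames: the right hand moves from 2.0 m to 1.73 m.
mgr = GestureManager()
for i in range(10):
    mgr.update({"HandRight": (0.1, 0.5, 2.0 - 0.03 * i)})
print(mgr.actions)  # -> ['click']
```

A real engine would of course smooth the joint positions, handle multiple tracked users, and constrain the motion path, but the same update-per-frame pattern applies.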
Gesture detection can be relatively simple or intensely complex depending on the gesture and the environment (image noise, scenes with multiple users, and so on).
In the literature there are many approaches for implementing gesture recognition, the most common ...