The H.264 macro-block data that the GPU spits out can’t be used as an accurate, long-term motion-tracking sensor to complement the integrated accelerometer’s short-term accuracy and high resolution. Period 🙁
It works perfectly for detecting which parts of a video frame have moved since the previous frame, which allows compression of the new frame to be optimized by encoding only the changes from the previous one. That’s what it’s designed to do, and it does it well.
But what my horizontal motion tracking requires is an accurate measure of how much, and in which direction, every pixel has moved from one frame to the next. The macro-block data comes very close to providing this, but two problems completely rule out its use for this purpose:
- periodically, the macro-blocks are reset for a frame – for that frame there is no measure of how the pixels have moved since the previous frame.
- each macro-block motion vector is accompanied by a sum of absolute differences (SAD) – a numerical measure of how well the block matches the previous frame, and hence of confidence in the vector’s accuracy (lower means a better match). Testing yesterday revealed that these values are large and fluctuate smoothly across a wide range within a single pair of frames, with no obvious split between high- and low-confidence blocks. And that means it’s not possible to combine the vectors and the SAD values to get an accurate motion vector for the whole frame.
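For the curious, the kind of combination I was attempting looks roughly like this – a sketch, not the actual flight code. It assumes one (x, y, SAD) record per macro-block, as the Pi’s GPU provides, and weights each vector by the inverse of its SAD so that better-matched blocks count for more. The toy arrays are made-up stand-ins for one frame’s data; with real data the smoothly fluctuating SADs give every block a similar weight, so outliers aren’t suppressed.

```python
import numpy as np

def frame_motion(vx, vy, sad):
    """SAD-weighted average motion vector for one frame.

    Lower SAD means a better block match, so each macro-block vector
    is weighted by 1 / (sad + 1); the +1 avoids division by zero.
    """
    w = 1.0 / (sad.astype(np.float64) + 1.0)
    return (vx * w).sum() / w.sum(), (vy * w).sum() / w.sum()

# Toy data: 4 macro-blocks, mostly moving ~3 px right; the third block
# has a high SAD (poor match), so its vector is down-weighted.
vx  = np.array([3, 3, 2, 4], dtype=np.int8)
vy  = np.array([0, 0, 1, 0], dtype=np.int8)
sad = np.array([120, 150, 900, 130], dtype=np.uint16)

dx, dy = frame_motion(vx, vy, sad)
print(dx, dy)
```

The scheme only works if SAD cleanly separates trustworthy vectors from junk ones – which, per the testing above, it doesn’t.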
So it’s back to laser dot tracking as the only viable solution until the Kickstarter LiDAR I backed is launched.