This is a summary of the state of play and what options are available for where to go next.
There are two possible causes for the drift shown in Phoebe's and Chloe's videos.
- occasional I2C errors / data misreads – the protective code shows these affect only about 0.01% of attempted reads, and interpolation removes any significant error, making this irrelevant for such short flights
- the 3 data samples missed during motion processing – 23% of the data is interpolated (3 missed reads in an elapsed 13ms) – that's a lot of missing data!
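For context, the interpolation in question is just a linear fill between the neighbouring good samples. Here's a minimal sketch of the idea – the `fill_missed` name and the one-axis sample layout are mine, not the flight code's:

```python
def fill_missed(samples):
    """Replace None entries (missed / misread samples) with values
    linearly interpolated from the nearest good samples either side.

    Assumes the first and last entries are good reads; in the flight
    code each sample would be a full (ax, ay, az, gx, gy, gz) tuple,
    but a single axis is enough to show the idea."""
    filled = list(samples)
    for i, s in enumerate(filled):
        if s is not None:
            continue
        lo = i - 1                   # nearest good (or already filled) sample before
        while filled[lo] is None:
            lo -= 1
        hi = i + 1                   # nearest good sample after
        while samples[hi] is None:
            hi += 1
        frac = (i - lo) / (hi - lo)
        filled[i] = filled[lo] + frac * (samples[hi] - filled[lo])
    return filled

print(fill_missed([0.0, None, 2.0, None, 4.0]))  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

One missed read in a hundred barely moves the result; three missed reads in thirteen is a different story.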
Clearly, doing something about motion processing is necessary.
- Use the MPU-6050 FIFO: this would be the perfect solution if only the I2C read errors didn't mean the data read from the FIFO can't safely be assumed to be aligned bundles of (ax, ay, az, gx, gy, gz) every 1ms – a single misread byte shifts the alignment of everything after it
- Speed up motion processing code to < 1ms
- Run motion processing in parallel with sensor sampling – 10 batches of samples, averaged, takes 10ms, comfortably more than the 3ms of motion processing needed before the next batch becomes available
  - requires an I2C fix
  - requires moving to pypy and changing GPIO / RPIO to CFFI
  - requires multiple threads / processes, but CPython's GIL means it has to be pypy again, or CPython with multiple processes on a multi-core machine – i.e. the quad-core A2
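For reference, option 1's appeal is that the FIFO serialises each 1ms sample as a fixed 12-byte bundle of big-endian signed 16-bit values, so a clean bulk read unpacks trivially. A sketch, assuming the FIFO is configured to store only the accelerometer and gyro registers (the `unpack_fifo` name is mine):

```python
import struct

SAMPLE_LEN = 12  # ax, ay, az, gx, gy, gz as big-endian signed 16-bit values

def unpack_fifo(raw):
    """Split a bulk FIFO read into (ax, ay, az, gx, gy, gz) tuples.

    Only trustworthy while every I2C read succeeds: one misread byte
    and the 12-byte alignment of every subsequent bundle is lost."""
    count = len(raw) // SAMPLE_LEN
    return [struct.unpack_from(">6h", raw, i * SAMPLE_LEN)
            for i in range(count)]

# two fabricated 1ms samples standing in for real FIFO contents
raw = struct.pack(">6h", 1, 2, 3, 4, 5, 6) + struct.pack(">6h", -1, -2, -3, -4, -5, -6)
print(unpack_fifo(raw))  # [(1, 2, 3, 4, 5, 6), (-1, -2, -3, -4, -5, -6)]
```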
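Option 3's shape, on a multi-core Pi, would be a sampler feeding averaged batches through a queue to a motion-processing process. A minimal multiprocessing sketch – the batch contents and the stand-in maths are placeholders, not the real flight code:

```python
import multiprocessing as mp

def motion_processor(in_q, out_q):
    """Run the (~3ms) motion maths in its own process so the sampler
    never stalls: it queues each ~10ms averaged batch and carries on."""
    while True:
        batch = in_q.get()
        if batch is None:                       # sentinel: flight over
            break
        out_q.put(sum(batch) / len(batch))      # stand-in for the real maths

if __name__ == "__main__":
    in_q, out_q = mp.Queue(), mp.Queue()
    worker = mp.Process(target=motion_processor, args=(in_q, out_q))
    worker.start()
    in_q.put([1.0, 2.0, 3.0])                   # the sampler would do this every ~10ms
    in_q.put(None)
    worker.join()
    result = out_q.get()
    print(result)  # 2.0
```

Each CPython process gets its own interpreter and its own GIL, which is why this route genuinely needs the quad-core A2 rather than threads on a single core.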
There isn't an easy option here until the A2 turns up. I am prototyping option 3 on a spare B2 I have, but even if it works, it can't be deployed onto Phoebe or Chloe: a B2 is too big to fit between their top and bottom plates.
P.S. Sorry about the colo(u)r in both the videos – I’d switched the white balance to ‘shade’ and forgotten to switch it back to auto. Normal service has now resumed.