Another step of fine tuning

With the IMU sample rate set to 250Hz (a 4ms gap between samples), there should be enough time to run motion processing for each sample without missing any; from experience, motion processing takes 2 to 3ms. I’m currently averaging 5 samples before invoking motion processing (50Hz updates to the props). Today I did some testing, setting this to 2 and 1 (i.e. 125Hz and 250Hz updates to the props).
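As a rough illustration (this isn’t the real flight code; read_imu_sample() and process_motion() are placeholders standing in for it), the batching works something like this:

    # Minimal sketch of averaging N IMU samples per motion-processing pass.
    # read_imu_sample() and process_motion() are hypothetical placeholders.

    SAMPLES_PER_MOTION = 2        # 2 samples @ 250Hz => 125Hz prop updates
    SAMPLE_DT = 1.0 / 250         # 4ms between IMU samples

    def flight_loop(read_imu_sample, process_motion, total_samples):
        ax = ay = az = gx = gy = gz = 0.0
        batch = 0
        for _ in range(total_samples):
            sax, say, saz, sgx, sgy, sgz = read_imu_sample()   # blocks ~4ms
            ax += sax; ay += say; az += saz
            gx += sgx; gy += sgy; gz += sgz
            batch += 1
            if batch == SAMPLES_PER_MOTION:
                # Average the batch, then run the 2-3ms motion processing
                # and update the props once per batch.
                n = float(batch)
                process_motion(ax / n, ay / n, az / n,
                               gx / n, gy / n, gz / n,
                               dt=SAMPLES_PER_MOTION * SAMPLE_DT)
                ax = ay = az = gx = gy = gz = 0.0
                batch = 0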

Setting 5 samples per motion processing gave 3430 samples @ 250Hz = 13.72 seconds of flight time. time.time() says it took 13.77s = 99.63% of samples were caught.

Setting 2 samples per motion processing gave 3503 samples @ 250Hz = 14.012 seconds of flight time. time.time() says it took 14.08s = 99.5% of samples were caught.

Setting 1 sample per motion processing gave 3502 samples @ 250Hz = 14.008 seconds of flight time. time.time() says it took 14.22s = 98.5% of samples were caught.
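(For clarity, the “samples caught” figure is just the expected flight time, samples / 250Hz, divided by the elapsed wall-clock time reported by time.time():)

    # How the "samples caught" percentages above fall out:
    # expected time = samples / 250Hz; caught = expected / elapsed.

    def percent_caught(samples, elapsed, rate=250.0):
        return 100.0 * (samples / rate) / elapsed

    print(percent_caught(3430, 13.77))   # ~99.6%
    print(percent_caught(3503, 14.08))   # ~99.5%
    print(percent_caught(3502, 14.22))   # ~98.5%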

I’m guessing the slight decline is due to Linux scheduling; I opted for 2 samples per motion processing, which updates the props at 125Hz, or every 8ms.

And boy, was the flight much smoother with the more frequent, smaller increments to the props.

And I reckoned that with these faster, lower-lag updates to the motors, I might be able to trust the gyro readings for longer, so I changed the complementary filter tau (incrementally) from its previous 0.5s to 5s.
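For reference (this is a generic complementary filter sketch, not lifted from my code), tau sets the crossover between trusting the integrated gyro and the accelerometer angle, so raising it weights the filter much more heavily towards the gyro:

    # Standard complementary filter sketch: tau sets how long the gyro
    # integration is trusted before the accelerometer angle pulls it back.
    def complementary_filter(angle, gyro_rate, accel_angle, dt, tau=5.0):
        a = tau / (tau + dt)      # e.g. tau=5s, dt=8ms => a ~= 0.998
        return a * (angle + gyro_rate * dt) + (1.0 - a) * accel_angle

With dt at 8ms, tau = 5s gives a gyro weighting of about 0.998 per update, compared with roughly 0.984 at the old tau of 0.5s.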

The sharp-sighted among you may have already seen the results in the numbers: I’ve now breached my 10s flight-time target by 2 seconds (the other couple of seconds is warm-up time), with the same level of drift I could only get in 6 second flights a few weeks ago. That 10s target was what I’d set myself before looking at feeding in other motion-processing sensors based upon the RaspiCam, either laser (Kitty) or MPEG macro-block (Kitty++) tracking.

Only downside – it’s rained nearly all day, so no chance of capturing one of these long flights on video. Perhaps tomorrow?
