The need for speed!

My code has been running at about 450 loops per second, and whatever I tried to speed it up seemed to have little effect.  The data from the MPU6050 was being updated 1000 times per second, so surely I could do better than 450?

Eventually, I started to suspect my customized GPIO Python library was the cause.  It waits for a hardware interrupt signalling that fresh data is available, but it calls epoll_wait() twice per loop.  Could that explain it?  If it was only catching every other interrupt, the speed would be capped at a maximum of 500 loops per second.  It seemed plausible, so I changed the code, and sure enough, processing speed has gone up to 760 loops per second.  The missing 240 loops are down to the Python motion processing, so now I can fine-tune that and expect even better results.
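For the curious, the fixed pattern looks roughly like this – a minimal sketch of sysfs GPIO plus epoll with an illustrative pin number, not the actual library internals:

import select

# Assumes the pin has already been exported and its edge set, e.g.
#     echo 22 > /sys/class/gpio/export
#     echo rising > /sys/class/gpio/gpio22/edge
gpio = open("/sys/class/gpio/gpio22/value", "r")
epoll = select.epoll()
epoll.register(gpio.fileno(), select.EPOLLPRI | select.EPOLLET)

# sysfs GPIO reports an event immediately on registration, so consume that
# one up front; thereafter a single epoll_wait() per loop catches every
# data-ready interrupt rather than every other one.
epoll.poll()
gpio.seek(0)
gpio.read()
while True:
    epoll.poll()       # exactly one wait per interrupt
    gpio.seek(0)
    gpio.read()        # re-arm the edge-triggered event
    # ... fetch and process the fresh MPU6050 sample here ...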

Why does this matter?  By nearly doubling the amount of data received in a fixed period, I get better averaging over that period, which means I can raise the dlpf (digital low-pass filter) to a higher frequency, and so reduce the risk of filtering out good data.
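For reference, the dlpf is configured through the MPU6050’s CONFIG register (0x1A in the register map).  A minimal sketch with smbus – the bus number and default 0x68 address are assumptions about the wiring:

import smbus

bus = smbus.SMBus(1)               # I2C bus 1 on later Raspberry Pi boards
MPU6050_ADDRESS = 0x68             # default MPU6050 I2C address
CONFIG = 0x1A                      # CONFIG register holds DLPF_CFG in bits 2:0

# DLPF_CFG accelerometer bandwidths per the register map: 0 = 260Hz,
# 1 = 184Hz, 2 = 94Hz, 3 = 44Hz, 4 = 21Hz, 5 = 10Hz, 6 = 5Hz
dlpf = 2                           # e.g. raise the cut-off from 44Hz to 94Hz
bus.write_byte_data(MPU6050_ADDRESS, CONFIG, dlpf)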

I’ve updated the code on GitHub – you’ll need to remove the current GPIO code before unpacking GPIO.tgz and running the install as follows:

sudo apt-get remove python-rpi.gpio
tar xvf GPIO.tgz
cd GPIO
sudo python setup.py install

The next step was to see what refinements I could make to the Python code to speed up the sensor data processing further.  I moved the calibration and units checking from the main loop into the motion processing, and that upped the speed to 812 loops per second.
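In outline, the restructuring looks something like this – a sketch with made-up names and numbers, not the actual quadcopter code:

offset_x = 512.0                     # illustrative calibration offset
gain_x = 0.001                       # illustrative scale to real-world units

def read_raw_x():
    return 520                       # stand-in for an MPU6050 register read

raw_sum_x = 0
samples = 0
for loop in range(1000):             # the main loop now only accumulates
    raw_sum_x += read_raw_x()
    samples += 1
    if samples == 10:                # motion processing, run once per batch,
        # applies calibration and units conversion in one go; the correction
        # is linear, so averaging first then correcting gives the same answer
        ax = (float(raw_sum_x) / samples - offset_x) * gain_x
        raw_sum_x = 0
        samples = 0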

Now all I need to do is test it live!

4 thoughts on “The need for speed!”

  1. Hi! It’s been a while since I commented on your blog!…

    Another idea to improve speed is to write the critical code in C and call the C functions from Python (it’s actually pretty easy with tools like SWIG), with Python then used only as a higher-level “integrator”, keeping the low-level code where it belongs for performance.
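    For example, a toy wrapping might look like this – the file names, the function, and the include path are all illustrative:

    /* motion.c – a time-critical routine moved into C */
    double complementary_filter(double angle, double rate, double accel, double dt)
    {
        /* blend gyro integration with the accelerometer reading */
        return 0.98 * (angle + rate * dt) + 0.02 * accel;
    }

    /* motion.i – the SWIG interface file */
    %module motion
    %{
    extern double complementary_filter(double angle, double rate, double accel, double dt);
    %}
    extern double complementary_filter(double angle, double rate, double accel, double dt);

    Build and use:

    swig -python motion.i
    gcc -shared -fPIC motion.c motion_wrap.c -I/usr/include/python2.7 -o _motion.so
    python -c "import motion; print motion.complementary_filter(0.0, 1.0, 0.5, 0.001)"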

    • It’s been a while since there’s been much worth commenting on! I’ve missed your insight! Hope you had a great Christmas.

      That’s a fine idea about pushing the time-consuming code out to a ‘C’ library – in this case, it’s now the motion processing that’s filling the gap between the 800 loops and the 1kHz of data, meaning I miss a block of data during each motion-processing pass.  But the code there is very simple to convert to ‘C’, and I’ve written ‘C’ for the last 25 years, day-in, day-out (which is why I chose to do this project in Python!).

      I’ll see how tomorrow’s testing goes – moving the motion processing to ‘C’ is clearly a step in the right direction, thanks.

  2. Very nice work. Would the higher looping speed also make it smoother? Also, I’m not well versed in PID control, but is the average error over time also an issue here with your code?

    • Yes, the higher looping speed should make it smoother, because the averaging over time includes more samples, so noise spikes tend to cancel each other out better.  The average error over time should also improve with more of the available data included – the more data that goes into the averaging, the more accurate the average.
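      A quick standalone illustration of why more samples help (numpy, nothing to do with the flight code – the numbers are arbitrary):

      import numpy

      numpy.random.seed(0)
      true_value = 1.0
      noise = numpy.random.normal(0, 0.1, 1000)   # simulated sensor noise

      # Averaging n samples shrinks the noise by roughly sqrt(n), so nearly
      # doubling the samples per period gives a visibly smoother estimate.
      for n in (450, 800):
          estimate = numpy.mean(true_value + noise[:n])
          print("%d samples -> error %.5f" % (n, abs(estimate - true_value)))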

      I’ve done a couple of test runs, and the graphs look a lot smoother, which is always a very good sign!  They also expose a bug in the code – without the noise, it now stands out very clearly.  Sadly, daylight and weather mean I can’t test the fix today, but it’ll be priority 1 tomorrow (if the weather holds!).
