Sorry it’s quieter than ever – the weather is against me testing the GERMS filter code, so I’ve been twiddling my thumbs. I have found some odd behaviour from the IMU that’s worth sharing.
The SMPLRT_DIV register sets how often the data registers are updated, and therefore how often the data ready interrupt goes high. Normally the ADC sampling frequency is 1kHz, and the data registers are updated at a reduced frequency defined thus:
1kHz / (1 + SMPLRT_DIV)
Setting SMPLRT_DIV to 0 should give a data ready interrupt (DRI) at 1kHz; 1 should produce 500Hz, 2 should give 333Hz and so on.
But what I’m seeing is that with SMPLRT_DIV <= 3, the DRI happens at 250Hz. Values of 3 and above work fine, producing 250Hz (3), 200Hz (4), 166Hz (5), 143Hz (6), 125Hz (7), 111Hz (8), 100Hz (9) and so on. Because I’m using the DRI as the clock for the code, that means 250Hz is the fastest I can use. That’s no real problem – 4ms per sample is plenty of time to run the motion processing – but it’s not a limitation that’s been documented in the specs.
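The arithmetic above, together with the 250Hz ceiling I’m seeing, boils down to a few lines – here’s a little helper (the function name and the cap are mine; the cap is my observation, not anything from the datasheet):

```python
def dri_frequency(smplrt_div, adc_hz=1000, observed_cap_hz=250):
    """Expected data ready interrupt (DRI) frequency for a given SMPLRT_DIV.

    The datasheet formula is adc_hz / (1 + smplrt_div); in practice I'm
    seeing the DRI capped at 250Hz for SMPLRT_DIV values of 0 to 3.
    """
    nominal = adc_hz / (1 + smplrt_div)
    return min(nominal, observed_cap_hz)

print(dri_frequency(0))  # datasheet says 1000Hz; observed 250Hz
print(dri_frequency(4))  # 200Hz, as documented
```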
In passing I’ve fixed a rounding error bug in the pre-flight RTF period. Not important really, but actually fixing it was what exposed the DRI frequency limit.
I’ll upload to GitHub in the next day or so once I’ve had a chance to check I’ve not broken anything.
First the good bit so you can skip the boring explanation if you wish!
Low drift Phoebe
I did some experimentation to test which of the four different ways the IMU hardware interrupt and my GPIO code can work together to get the most efficient and accurate readings.
The IMU data ready interrupt can either generate a 50us pulse each time the data registers are updated, or it can produce a rising edge which latches high, only falling when the data is read over I2C.
My GPIO tweaks can either capture these interrupts continuously, or just capture the next one, and then capture no more until called again.
So I ran four tests to see which pairing worked best, based upon the number of I2C and data corruption errors I got, and how accurate the timing was – which directly relates to how many sensor readings are missed during motion processing.
Other than the latching + continuous option, which just blocked (predictably), it was marginal between the other three in lab testing, but with a slight advantage for the 50us pulse + continuous reading pairing.
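For reference, the pulse-vs-latch choice is a single bit in the MPU6050’s INT_PIN_CFG register (0x37). A sketch of building the two register values (bit positions from the MPU6050 register map; the helper name is mine):

```python
INT_PIN_CFG = 0x37      # MPU6050 interrupt pin configuration register
LATCH_INT_EN = 1 << 5   # 1 = latch high until cleared, 0 = 50us pulse
INT_RD_CLEAR = 1 << 4   # 1 = any register read clears the interrupt

def int_pin_cfg_value(latch, clear_on_any_read=True):
    """Build the INT_PIN_CFG byte for pulse (latch=False) or latched mode."""
    value = 0
    if latch:
        value |= LATCH_INT_EN
    if clear_on_any_read:
        value |= INT_RD_CLEAR
    return value

# On the Pi this would be written over I2C with e.g. smbus:
#   bus.write_byte_data(address, INT_PIN_CFG, int_pin_cfg_value(latch=False))
print(hex(int_pin_cfg_value(latch=False)))  # 0x10: 50us pulse mode
print(hex(int_pin_cfg_value(latch=True)))   # 0x30: latched mode
```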
And a few test flights with Phoebe confirmed this: much reduced drift and improved timing. At 500Hz sampling she missed only 2% of samples, which is the best yet by a very long way – previously it’s been more like 20%.
Two final tweaks: I loosened the tests for duff data and changed the Butterworth filter values:
- duff data can be identified when the sequence of ax, ay, az, temp, gx, gy, gz has some values set to -1 (0xFFFF), starting from the gz end. Previously I’d been checking gy and gz; now I check just az. The reasoning is that a -1 error in the gyro has a very short-term effect on the accuracy of angles, whereas -1 for az (which represents freefall in a vacuum) will plague a flight thereafter due to integration for velocity
- The Butterworth changes were experimental, to see what settings might provide better flights by doing a better job of separating gravity from acceleration; a 0.05Hz (20s), 6th order filter was used in these flights compared to the previous 0.1Hz (10s), 4th order, resulting in much better drift control.
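The duff-data test from the first bullet boils down to one comparison per batch – a minimal sketch (the tuple layout and function name are mine, not the real code):

```python
def is_duff(sample):
    """Reject a sensor batch whose az reads -1 (0xFFFF as a signed 16-bit value).

    Corruption shows up as -1 values spreading from the gz end of the
    (ax, ay, az, temp, gx, gy, gz) sequence; a -1 in az poisons the velocity
    integration for the rest of the flight, so that's the field worth checking.
    """
    ax, ay, az, temp, gx, gy, gz = sample
    return az == -1

print(is_duff((20, -35, 4096, 2900, 3, -7, 1)))   # False: good batch
print(is_duff((20, -35, -1, 2900, -1, -1, -1)))   # True: corruption reached az
```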
I’ve been doing some fine tuning of the HoG code as a result of the FIFO trial I did. The FIFO timing came from the IMU sampling rate – there’s a sample every 1ms and so there’s no need for using time.time() when the IMU hardware clock can do it.
I’ve now applied the same to the hardware interrupt driven code; motion processing now takes less time as it was the only call to time.time(), and as such that means there’s a smaller chance of missing samples too. I’ve also added code so that when there are I2C / data corruption errors, that counts as another 1ms gap since the code then waits for the next hardware interrupt.
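Counting samples (and errors) instead of calling time.time() can be sketched like this (names are mine, not from the real code):

```python
SAMPLE_PERIOD = 0.001  # the IMU updates its data registers every 1ms

def elapsed_time(good_samples, error_count):
    """Elapsed time reconstructed from the IMU's own 1kHz clock.

    Each good sample is one 1ms tick; each I2C / data corruption error also
    counts as a tick, because the code then waits for the next interrupt.
    """
    return (good_samples + error_count) * SAMPLE_PERIOD

print(elapsed_time(498, 2))  # 0.5s of flight time from 500 interrupts
```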
I’ve also removed the zero-g calibration again, as I’m fairly convinced it’s pointless – I _think_ subtracting the (Butterworth-extracted) gravity from acceleration makes that calibration redundant. Removing it also adds a very marginal performance increase to motion processing.
A few test flights showed no regression (nor noticeable improvement), so I’ve uploaded the code to GitHub. You also need to grab the latest version of my GPIO library, as it now imports as GPIO rather than as a clone of RPi.GPIO.
Or actually, calling time.time(); ironically, in Python, this turns out to be very time-consuming. I’d been tinkering with time already in this post – but the intent there was accuracy, not performance – the 20% performance increase to 240 loops per second was just a bonus.
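You can measure the overhead yourself with the standard library’s timeit module – a quick sketch:

```python
import timeit

# Cost of a million time.time() calls, in seconds; at a few hundred loops
# per second, several calls per loop add up to a measurable chunk of time.
cost = timeit.timeit("time.time()", setup="import time", number=1_000_000)
print(f"time.time(): roughly {cost * 1000:.0f}ns per call")
```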
I thought the code must be not far off as fast as possible, but I did a little tinkering today to see if I could squeeze another 5 or 10% out of her. The faster she loops, the better the averaging of sensor readings I can do between PID updates, given there’s little point updating the PIDs faster than about 50Hz.
To cut to the chase, I removed about five lines of code that decided whether she should sleep each loop and for how long (this is where the jitter code lay). That left only a single call to time.time() in the code. In doing so, her loop speed has jumped to not far off 440 loops per second! I nearly fell off my step the first time, and had to run the code several times to believe my eyes.
That now meant I could increase the data register update rate to its maximum of 1kHz, and the I2C bus speed to 400kbps, and as a result have twice as much data to average out noise.
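The averaging itself is nothing fancy – roughly this (a sketch; the function name and batch layout are mine):

```python
def average_batch(batch):
    """Average a batch of per-axis sensor readings accumulated between PID updates.

    At a 1kHz register update rate and ~50Hz PID updates there are ~20
    readings per batch, so doubling the data rate doubles the samples
    averaged per PID update and knocks the noise down further.
    """
    axes = list(zip(*batch))  # regroup readings per axis
    return tuple(sum(axis) / len(axis) for axis in axes)

print(average_batch([(1, 10), (3, 14), (2, 12)]))  # (2.0, 12.0)
```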
And that can only be a good thing 🙂
Have a look at PiStuffing/Quadcopter to see all the changes made relating to:
- reference frames
- timing accuracy
- PID selection
- performance and averaging
I’ve also added a very early alpha draft of the PDF guide about quadcopters in general and Phoebe in particular.
Oh, and I forgot to mention that with the code changes for reference frames, timing, PIDs and averaging, she’s flying much better now – less drift, and much more stable. Not quite worthy of a video yet, but soon I hope.
I’ve seen a few on-line conversations adamantly stating that quadcopter control needs a real-time OS (RTOS) or microcontroller to ensure accurate timing, but with zero solid evidence to back it up. I’ve always maintained this is just viral hearsay, and I’m happy being the heretic by coding in Python on Linux.
Of course the PWM signal to the ESCs needs absolute accuracy, but luckily the Raspberry Pi does provide hardware PWM.
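That hardware PWM can be driven from Python via e.g. the pigpio library, whose hardware_PWM() call takes a duty cycle in the range 0–1,000,000 – so the usual 1–2ms ESC pulses need a conversion along these lines (a sketch; the 400Hz frequency and the names are my assumptions, not the real code):

```python
PWM_FREQUENCY = 400  # Hz: one pulse every 2.5ms

def esc_duty(pulse_us, freq=PWM_FREQUENCY):
    """Convert an ESC pulse width in microseconds to pigpio's 0-1,000,000 duty range."""
    period_us = 1_000_000 / freq
    return round(pulse_us / period_us * 1_000_000)

# On the Pi this would drive one of the hardware PWM pins:
#   import pigpio
#   pi = pigpio.pi()
#   pi.hardware_PWM(18, PWM_FREQUENCY, esc_duty(1500))
print(esc_duty(1000))  # 400000: minimum throttle
print(esc_duty(2000))  # 800000: full throttle
```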
Anyway, yesterday I got to do some testing on the newly restructured code, and the results were underwhelming. Not terrible, but not the radical change I was expecting. The only possible cause I could see in the stats was noise swamping the accelerometer data.
I’m now not sure how exactly that led to a train of thought about fine-tuning time in my code, but it did.
The net result is that there is now a single timestamp in the code, taken immediately after I read a new batch of sensor data. From that point onwards, time effectively stops for the processing of that batch by the rest of the code. All integration, averaging, and PID integral and derivative processing uses that single timestamp, which makes for the best accuracy in all three(ish) areas. It has also sped the code up by another 5% (all 7 PIDs had their own time tracking, which has now been removed). This means I can also reduce noise by including more sensor samples in the integration and averaging code before feeding it to the PIDs at the same rate as before. So beneficial in lots of ways.
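A PID update driven by one shared timestamp rather than per-PID clocks looks roughly like this (a minimal sketch, not the real code):

```python
class PID:
    """Minimal PID whose integral and derivative terms use an externally supplied dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One timestamp per sensor batch; the same dt feeds all 7 PIDs,
# so none of them needs its own call to time.time().
pids = [PID(1.0, 0.1, 0.01) for _ in range(7)]
dt = 0.004  # 250Hz sampling: time stops here for this batch
outputs = [pid.update(0.5, dt) for pid in pids]
```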
Timing inaccuracy probably wasn’t the cause of the accelerometer noise, but at the same time, improving it has to be “a good thing”™. My best guess as to the cause of the noise is a dodgy motor which, if spun manually, sounds like it has muck in its bearings. Also the new high power props spin more slowly, bringing more prop noise in under the MPU6050 DLPF.
I’m now hoping the timing rework, combined with a lower DLPF and a new motor, will cut the noise down significantly, allowing me to see the results of the code rework.