Currently, accelerometer readings are taken 450 times a second and ‘integrated’ over time; 37 times a second, the integrated total is used, divided by the total time spent collecting that batch of samples. That gives a very accurate average of acceleration over the block, but it’s lumpy, and I started wondering whether it could be improved.
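As a sketch (not the real flight code), the integrate-and-dump averaging described above looks something like this, with the sensor reads and their jittery spacing simulated:

```python
import random

def simulated_samples(n):
    """Stand-in for the sensor loop: yields (acceleration, dt) pairs with
    uneven sample spacing, mimicking ~450Hz reads of a 1kHz sensor where
    the code occasionally skips a value."""
    for _ in range(n):
        dt = random.choice((0.002, 0.003))       # sometimes a 1kHz slot is missed
        a = 1.0 + random.uniform(-0.05, 0.05)    # noisy reading (arbitrary units)
        yield a, dt

def block_average(samples):
    """Integrate a * dt over the block, then divide by the total block time
    to get the time-weighted average acceleration."""
    integral = 0.0
    total_time = 0.0
    for a, dt in samples:
        integral += a * dt
        total_time += dt
    return integral / total_time
```

Because each sample is weighted by its own dt, the skipped readings don’t bias the result; the cost is that a fresh average only arrives once per block.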

I’d considered a rolling average in the past, but that doesn’t account for the fact that the data isn’t read at equal intervals – the sensor provides updated accelerometer readings at 1kHz, and the code picks up whichever one is next, therefore sometimes skipping values. It’s important to take into account the time between each pair of samples; that’s what the integration does, and that’s what a rolling average can’t, unless…

A rolling average looks a bit like this: the n-th value is the previous value plus a fraction of the change since last time. Here, α is a fixed constant < 1.

a_{n} = a_{n-1} + α * (a_{now} - a_{n-1})

or

a_{n} = (1 - α) * a_{n-1} + α * a_{now}
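In code, that second form is the classic exponential moving average – and the point about timing is visible immediately: there is no dt anywhere, so every sample is implicitly treated as equally spaced.

```python
def rolling_average(samples, alpha=0.1):
    """Exponential moving average with a fixed gain alpha < 1:
    a_n = (1 - alpha) * a_{n-1} + alpha * a_now.
    Note there is no dt term - every sample carries equal weight
    regardless of how long ago the previous one arrived."""
    avg = None
    for a in samples:
        avg = a if avg is None else (1 - alpha) * avg + alpha * a
    return avg
```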

And this looks awfully similar to both the Kalman and Complementary filters I mentioned the other day.

So what would happen if I used a complementary-filter-style gain based on a time constant τ, which is sensitive to the interval between samples, rather than the fixed rolling-average α gain?
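A minimal sketch of that idea, assuming the usual complementary-filter gain α = dt / (τ + dt) (the time constant τ here is an assumption on my part, not a value from the flight code): a longer gap between samples produces a larger α, pulling the estimate further toward the newest reading.

```python
def timed_average(samples, tau=0.1):
    """Complementary-filter-style smoothing where the gain depends on the
    time since the last sample: alpha = dt / (tau + dt).
    A longer dt gives a larger alpha, so stale estimates yield more
    readily to fresh readings - unlike the fixed-alpha rolling average."""
    avg = None
    for a, dt in samples:
        if avg is None:
            avg = a
        else:
            alpha = dt / (tau + dt)
            avg = (1 - alpha) * avg + alpha * a
    return avg
```

With tau well above the typical sample interval the output is heavily smoothed; shrinking tau toward the sample interval makes it track the raw readings more closely.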

Dunno without testing it, so I did. And it wasn’t bad – she kind of swooped across her take-off point several times, but she was losing height throughout. That problem’s been there for a while, due to discrepancies between calibrated gravity and the gravity measured in flight – I’m still trying to work out how to deal with that.

So a good start, but possibly needing some PID tuning, and also, if possible, some relaxation of the DLPF by raising its cutoff from the current 5Hz to 20 or 40Hz, as hopefully the new filter is doing that job anyway.