On a cold winter’s morning…


Both flights use identical code.  There are two tweaks compared to the previous videos:

  1. I’ve reduced the gyro rate PID P gain from 25 to 20, which has hugely increased the stability.
  2. Zoe is using my refined algorithm for picking out the peaks in the macro-block frames – I think this is working better but there’s one further refinement I can make which should make it better yet.

I’d have liked to show Hermione doing the same, but for some reason she’s getting FIFO overflows.  My best guess is that her A+ overclocked to turbo (1GHz CPU) isn’t as fast as a Zero’s default setting of 1GHz – no idea why.  My first attempt at fixing this has been improved scheduling, splitting the macro-block vector processing into two phases:

  1. build up the dictionary of the set of macro-blocks
  2. process the dictionary to identify the peaks.

Zoe does this in one fell swoop; Hermione schedules each phase independently, checking in between that the FIFO hasn’t filled to a significant level and, if it has, dealing with that first.  This isn’t quite working yet in passive testing, even on Zoe, and I can’t work out why!  More anon.
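
As a sketch of the idea (the function names, data format and FIFO threshold here are my own illustrative inventions, not the real flight code):

 #-------------------------------------------------------------------------------------------
 # Hypothetical two-phase scheduling: service the IMU FIFO between the phases so it never
 # overflows while the code is busy with the camera's macro-block data.
 #-------------------------------------------------------------------------------------------
 FIFO_THRESHOLD = 256                               # illustrative level only

 def build_vector_dictionary(macro_blocks):
     # Phase 1: tally identical (dx, dy) vectors across one frame of (dx, dy, sad) tuples.
     vector_dict = {}
     for dx, dy, sad in macro_blocks:
         vector_dict[(dx, dy)] = vector_dict.get((dx, dy), 0) + 1
     return vector_dict

 def find_peak(vector_dict):
     # Phase 2: the most frequently occurring vector is the consensus lateral motion.
     return max(vector_dict, key=vector_dict.get)

 def process_frame(macro_blocks, fifo_level, drain_fifo):
     vector_dict = build_vector_dictionary(macro_blocks)
     if fifo_level() > FIFO_THRESHOLD:
         drain_fifo()
     return find_peak(vector_dict)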

30s with, <10s without

A few test runs.  In summary, with the LiDAR and Camera fused with the IMU, Zoe stays over her play mat at a controlled height for the length of the 30s flight.  Without the fusion, she lasted just a few seconds before she drifted off the mat, lost her height, or headed towards me with menace (a kill ensued).  I think that’s pretty conclusive that the fusion code works!

With Fusion:

Without Fusion:

Better macro-block interpretation

Currently, getting lateral motion from a frame full of macro-blocks is very simplistic: find the average SAD value for the frame, and then only include those vectors whose SAD is lower than that average.
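
In code terms, the current scheme amounts to something like this (a minimal sketch of the method as described above, not the actual implementation):

 #-------------------------------------------------------------------------------------------
 # Sketch of the current simplistic filter: keep only the vectors whose SAD is below the
 # frame average, then average those to get the frame's lateral motion.
 #-------------------------------------------------------------------------------------------
 def average_motion(macro_blocks):
     # macro_blocks is a list of (dx, dy, sad) tuples for one frame.
     average_sad = sum(sad for _, _, sad in macro_blocks) / len(macro_blocks)
     good = [(dx, dy) for dx, dy, sad in macro_blocks if sad < average_sad]
     if not good:
         good = [(dx, dy) for dx, dy, _ in macro_blocks]
     dx = sum(v[0] for v in good) / len(good)
     dy = sum(v[1] for v in good) / len(good)
     return dx, dy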

I’m quite surprised this works as well as it does, but I’m fairly sure it can be improved.  There are four factors contributing to the content of a frame of macro-blocks:

  • yaw change: all macro-block vectors will circle around the centre of the frame
  • height change: all macro-block vectors will point towards or away from the centre of the frame
  • lateral motion: all macro-block vectors point in the same direction in the frame
  • noise: the whole purpose of macro-blocks is simply to find the best matching blocks between two frames; doing this with a chess board (for example) could well have any block from the first frame matching any one of the 50% of like-coloured blocks in the second.

Given a frame of macro-blocks, the yaw increment between frames can be found from the gyro, and thus its effect can be removed easily.

The same goes for the height change, derived from the LiDAR.

That leaves either noise or lateral motion.  By averaging the remaining vectors, we can pick out those that are similar in distance / direction to the average vector.  SAD doesn’t come into it.
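
Something along these lines (a sketch under my assumed data layout, with the yaw and height effects already subtracted from each vector; the tolerance value is purely illustrative):

 import math

 #-------------------------------------------------------------------------------------------
 # Sketch of the improved filter: average the remaining vectors, then keep only those close
 # to that average - the rest is treated as noise from bad block matches.  SAD is unused.
 #-------------------------------------------------------------------------------------------
 def consensus_motion(vectors, tolerance=2.0):
     avg_dx = sum(dx for dx, _ in vectors) / len(vectors)
     avg_dy = sum(dy for _, dy in vectors) / len(vectors)
     similar = [(dx, dy) for dx, dy in vectors
                if math.hypot(dx - avg_dx, dy - avg_dy) < tolerance]
     if not similar:
         return avg_dx, avg_dy
     dx = sum(v[0] for v in similar) / len(similar)
     dy = sum(v[1] for v in similar) / len(similar)
     return dx, dy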

This won’t be my first step however: that’s to work out why the height of the flight wasn’t anything like as stable as I’d been expecting.

Strutting her stuff

Finally, fusion worth showing.

Yes, height’s a bit variable as she doesn’t get accurate height readings below about 20cm.

Yes, it’s a bit jittery because the scales of the IMU and the other sensors aren’t quite in sync.

But fundamentally, it works – nigh on zero drift for 20s.  With just the IMU, I couldn’t get this minimal level of drift for more than a few seconds.

Next steps: take her out for a longer, higher flight to really prove how well this is working.

Live Cold Fusion Test

I took Zoe outside (temperature 0°C – freezing point) to fly this morning with fusion enabled to see what the effect was – fusion was disabled in the previous flights; they were purely for compare and contrast of the two independent sensor sources.

Fusion was a complementary filter with the -3dB crossover set to 1s.
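
In essence it amounts to this per axis (a sketch with my own names, fusing a velocity estimate): the accelerometer-derived value dominates over periods shorter than tau, and the camera / LiDAR value pulls the estimate back over longer periods.

 #-------------------------------------------------------------------------------------------
 # Complementary filter sketch with tau = 1s (the -3dB crossover).  'fused' is the running
 # velocity estimate, 'accel' the gravity-subtracted acceleration, 'sensor' the camera /
 # LiDAR velocity, and 'dt' the update interval.
 #-------------------------------------------------------------------------------------------
 TAU = 1.0

 def fuse(fused, accel, sensor, dt):
     tau_fraction = TAU / (TAU + dt)
     prediction = fused + accel * dt                # short-term: integrated accelerometer
     return tau_fraction * prediction + (1 - tau_fraction) * sensor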

In all graphs,

  • blue comes from the Garmin LiDAR (Z axis) or Camera (X and Y axes)
  • orange comes from the accelerometer with necessary subtraction of gravity and integration.
  • grey is the fused value – it follows orange in the short term, blending towards blue over the longer term.

Fusion

In general, the shapes match closely, but there are some oddities I need to understand better:

  • horizontal velocity from the Camera is very spiky – the average is right, and the fusion is hiding the spikes well – I’m assuming the problem is my code coping with the change of tilt of the camera compared to the ground.
  • the vertical height is wrong – at 3 seconds, Zoe should be at 90cm and levelling out.

I need to continue dissecting these stats – more anon.

Fusion thoughts

Here’s my approach to handling the tilt for downward facing vertical and horizontal motion tracking.

Vertical compensation

This is what I’d already worked out to take into account any tilt of the quadcopter frame, and therefore of the LEDDAR sensor readings.
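
As a sketch of my understanding of that diagram (assuming the LEDDAR points along the quad frame's Z axis):

 import math

 #-------------------------------------------------------------------------------------------
 # Vertical compensation sketch: tilting the frame lengthens the LEDDAR reading, so project
 # the measured distance through the pitch and roll angles to recover earth-frame height.
 #-------------------------------------------------------------------------------------------
 def earth_frame_height(leddar_distance, pitch, roll):
     return leddar_distance * math.cos(pitch) * math.cos(roll)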

With this in place, this is the equivalent processing for the video motion tracking:

Horizontal compensation
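
My reading of that diagram, as a one-axis sketch ('pixels' is the macro-block motion since the last frame, 'scale' converts pixels to radians of camera field of view, and 'd_tilt' is the gyro-integrated tilt increment over the same period – all names and details are my assumptions):

 import math

 #-------------------------------------------------------------------------------------------
 # Horizontal compensation sketch: a tilt change shifts the image by roughly
 # height * tan(d_tilt) even with no lateral movement at all, so subtract that component
 # before converting the camera's apparent motion into an earth-frame distance increment.
 #-------------------------------------------------------------------------------------------
 def earth_frame_increment(pixels, scale, height, d_tilt):
     apparent = height * math.tan(pixels * scale)   # ground distance the camera appears to have moved
     tilt_shift = height * math.tan(d_tilt)         # the part of that caused purely by tilting
     return apparent - tilt_shift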

The results of both are in the earth frame, as are the flight plan targets, so I think I’ll swap to earth-frame processing until the final step.

One problem: I’m now pushing the limit of the code keeping up with the sensors.  With diagnostics on and 1kHz IMU sampling, the IMU FIFO overflows because the code can’t keep up.  This is with Zoe (1GHz CPU) and without LEDDAR.

LEDDAR has already forced me to drop the IMU sample rate to 500Hz on Chloe; I really hope this is enough to also allow LEDDAR, macro-block processing and diagnostics to work without FIFO overflow.  I really don’t want to drop to the next level of 333Hz if I can help it.

Coding is in progress already.

Complementary sensors

LEDDAR One provides long-term distance and, by differentiation, long-term Z-axis velocity vectors in the quad frame.
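
The differentiation step is trivial – a sketch (names mine):

 def z_velocity(previous_distance, distance, dt):
     # Differentiate successive LEDDAR One distance readings to get quad-frame Z-axis velocity.
     return (distance - previous_distance) / dt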

Scanse Sweep primarily provides 360° range finding; with lots of complex mapping between boundary frames – a frame comes from a join-the-dots plot of the flight zone – it should be possible to get quad-frame distance / direction / velocity movement vectors too.  However…

PX4FLOW provides horizontal velocities and height over I2C, meaning easy software integration and fusing with the current inputs for the velocity PIDs; although this is contrary to my DIY desire, it will fill the gap between now and the arrival of Scanse Sweep.

There is a balancing act however:

  • Both LEDDAR and Scanse Sweep have longer ranges.
  • Scanse Sweep is the only sensor providing object detection and avoidance – and hence the possibility of fulfilling one of my dreams for my drone to wander through a maze.
  • PX4FLOW is open hardware / source – a Chinese equivalent is what Reik Hua uses as a result.  Using PX4FLOW would make LEDDAR redundant temporarily, but once I have Scanse Sweep, PX4FLOW’s open-source software may well provide guidance on how to convert Scanse Sweep boundary distances into velocities, thus making PX4FLOW redundant and swapping back to LEDDAR.  The fact Scanse Sweep is tracking an outline of the flight area rather than sequential photographs like PX4FLOW may mean I can do the processing in Python, which again would keep the processing within my DIY desire.
  • PX4FLOW will work outside in an open area as it’s using a down-facing Z-axis camera to work out X- and Y axis velocities, whereas Scanse Sweep needs an area with boundaries – this makes outdoor testing of PX4FLOW possible which is always a necessity for initial testing of new sensors.

A fusion of all three would make a quad capable of flying anywhere at a couple of meters height.  That would be quite an achievement!

So I think I’ll be getting a PX4FLOW.  Annoyingly, PX4FLOW has gone out of stock just after I started blogging about it.  Is that evidence that people actually do read this blog?

Fusion flaw

I collected some diagnostics to start investigating the wobbles shown in the previous post’s video:

Fused velocities

It’s fairly clear the fusion bias is still strongly towards the accelerometer.  In a way I’m glad as it makes my next steps very clear.

Caught a cold

I flew my new experimental fusion filter several times this morning; Phoebe barely got off the ground.  After several flights reconfiguring the GERMS (Gravity Error Root Mean Square) filter to no avail, I just ran the normally successful code, and that too failed to get off the ground in exactly the same way.  So it’s battery power at cold ambient temperature rather than the GERMS filter.  Phew!

So I now have a couple of 4S GensAce 2300mAh batteries on order; the higher voltage (compared to the 3S version) means that even with the higher internal impedance at low temperatures, there should still be enough power for the motors; at the same time, that higher impedance should help warm the cells, keeping the impedance low enough.  It’ll probably be Tuesday before the batteries arrive, and so Wednesday before GERMS testing can resume.  Weather forecast is looking good for the week ahead.  Onwards and upwards, hopefully!

Akin to Kalman

I had another quick go at understanding Kalman filters yesterday, and mostly failed miserably, with one exception: I think what I currently have is working in a similar way to a Kalman filter:

 #-------------------------------------------------------------------------------------------
 # Collect the latest batch of data from the sensors, and sort out units and calibration
 #-------------------------------------------------------------------------------------------
 qax, qay, qaz, qrx, qry, qrz, i_time, temp = sampling.collect()
 qax, qay, qaz, qrx, qry, qrz = mpu6050.scaleSensors(qax,
                                                     qay,
                                                     qaz,
                                                     qrx,
                                                     qry,
                                                     qrz)


 #------------------------------------------------------------------------------------------
 # Track the number of motion loops and sampling loops; any discrepancy between these is the
 # count of missed samples or sampling errors.
 #------------------------------------------------------------------------------------------
 motion_loops += 1
 sampling_loops += i_time * dri_frequency

 #-------------------------------------------------------------------------------------------
 # Angular prediction: now we know the time since the last batch of samples, update the
 # previous angles with the 'integral' of the previous Euler rotation rates.
 #-------------------------------------------------------------------------------------------
 urp, urr, ury = Body2EulerRates(qry, qrx, qrz, pa, ra)
 pa += urp * i_time
 ra += urr * i_time
 ya += ury * i_time

 #-------------------------------------------------------------------------------------------
 # Based upon the predicted angles, rotate the new accelerometer data to the earth frame.
 # Next, run the earth frame acceleration through the Butterworth LPF to extract gravity.
 # Finally, rotate the revised gravity back to the quad frame.
 # This provides the best guesstimate of the current gravity orientation and value.
 #-------------------------------------------------------------------------------------------
 eax, eay, eaz = RotateQ2E(qax, qay, qaz, pa, ra, ya)

 egx = bfx.filter(eax)
 egy = bfy.filter(eay)
 egz = bfz.filter(eaz)

 qgx, qgy, qgz = RotateE2Q(egx, egy, egz, pa, ra, ya)

 #-------------------------------------------------------------------------------------------
 # To stop integration drift from the gyros, merge the short-term noise-free predicted angles
 # with the long-term accurate accelerometer angles, which in the short term are tainted with
 # acceleration as well as gravity.  tau is the period during which we expect acceleration to
 # average out, leaving only the net gravity value; tau can be small if acceleration is short
 # and sharp.  There is a balance here between the period to trust the integrated,
 # reorientated gyro readings and the period over which accelerometer spikes average out.
 #-------------------------------------------------------------------------------------------
 upa, ura = GetRotationAngles(qax, qay, qaz)
 tau_fraction = tau / (tau + i_time)
 pa = tau_fraction * pa + (1 - tau_fraction) * upa
 ra = tau_fraction * ra + (1 - tau_fraction) * ura

Breaking this down, it:

  • reads the sensors
  • predicts the current pitch, roll and yaw based upon the previous iteration’s angles and the gyro changes since
  • extracts gravity from the accelerometer readings, using these predicted angles to orientate the accelerometer readings to the earth frame and passing the acceleration through the Butterworth IIR filter
  • fuses the accelerometer angles with these latest angles.

I’m not happy with the final section – the complementary filter fusion – as there’s a choice here, and I don’t much like either option:

  1. the version shown uses a complementary filter with a fixed value of tau and a time increment to fuse the accelerometer angles with the best predicted angles: the problem here is that the time increment doesn’t take account of the sharp acceleration peaks and troughs in the flight plan.  Instead it relies on these peaks and troughs evening out overall, but in the short term real acceleration seeps into the measure of gravity, and drift ensues.
  2. the alternate version does away with the complementary filter and instead uses qgx, qgy and qgz in the call to GetRotationAngles to get the latest set of angles: the problem here is that errors accumulate over time, as there’s no feed from real accelerometer readings to curtail the integration errors, which will also ultimately lead to drift.

Which leads to option 3: tau is tuned dynamically based upon the deviation of the accelerometer readings from a stable value, rather than upon incremental time – something along the lines of this:

 upa, ura = GetRotationAngles(qax, qay, qaz)
 tau_fraction = tau / (tau + gain * ((qax - qgx) ** 2 + (qay - qgy) ** 2 + (qaz - qgz) ** 2) ** 0.5)
 pa = tau_fraction * pa + (1 - tau_fraction) * upa
 ra = tau_fraction * ra + (1 - tau_fraction) * ura

More thinking and testing needed to actually work out the right way to dynamically tweak tau on the fly.

The similarity with a Kalman filter is that it’s two-stage, prediction and fusing, where the fused fraction depends on an estimated accuracy of the value to be fused.  Hence the raw acceleration value always gets through, but how much gets through depends on how dependable the value is thought to be – i.e. its deviation from expectations.


ADDENDUM
For clarity, I’ve rewritten the bit of speculative code to remove tau and instead refer to a fusion_fraction:

 upa, ura = GetRotationAngles(qax, qay, qaz)
 fusion_fraction = 1 / (1 + gain * ((qax - qgx) ** 2 + (qay - qgy) ** 2 + (qaz - qgz) ** 2) ** 0.5)
 pa = (1 - fusion_fraction) * pa + fusion_fraction * upa
 ra = (1 - fusion_fraction) * ra + fusion_fraction * ura

So when there is net acceleration in the system, the RMS error

 gain * ((qax - qgx) ** 2 + (qay - qgy) ** 2 + (qaz - qgz) ** 2) ** 0.5

is larger, meaning the fusion fraction is smaller, and a smaller fraction of the raw accelerometer value is used to update the angles. The first step of testing is simply to get an estimate for gain based upon the tau value (0.5 seconds) that’s worked well in the past.
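
As a back-of-envelope starting point (my own numbers, purely illustrative): matching the new fraction 1 / (1 + gain * err) to the old complementary filter's dt / (tau + dt) at a typical error level gives gain ≈ tau / (dt * err).

 #-------------------------------------------------------------------------------------------
 # Rough estimate for gain - all values assumed for illustration only.
 #-------------------------------------------------------------------------------------------
 tau = 0.5            # seconds - the value that's worked well in the past
 dt = 0.01            # seconds - assumed motion loop period
 typical_err = 0.1    # g - assumed typical net-acceleration deviation

 gain = tau / (dt * typical_err)                    # => 500 with these numbers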