More on mapping

It’s been a frustrating week – despite lovely weather, lots broke, and once each thing was fixed, something else would break.  To top it all, an update to the latest version of Jessie yesterday locked up the RPi as soon as I kicked off a passive flight.  I backed this ‘upgrade’ out as a result.  I now have everything back and working, confirmed by hover and 10m lateral flights this morning, although the latter aborted half-way through with an I2C error.  Underlying it all is power to the GLL and RPi 3B – I was seeing lots of brown-out LED flashes from the B3 and lots of I2C and GLL errors.  I’m considering swapping back to a 2B+ overclocked to 1.2GHz as a result.

In the meantime I have been looking at mapping in more detail as it’s complex and it needs breaking down into easy pieces.  Here’s the general idea:

Polystyrene block layout

Each polystyrene block is 2.5m long, 1.25m high and 10cm thick.  They are pinned together with screw-in camping tent pegs.  The plan is to

  • fly 10m at 1m height without the ‘maze’, logging compass and GPS to check the results, in particular to see whether
    • GPS can be gently fused with the RPi’s ground-facing motion tracking to enhance lateral motion distance measurements
    • compass can be fused with the IMU gyro yaw rate to enforce a better linear flight (see the sketch after this list)
  • fly 10m without the ‘maze’ again but with fused compass and GPS (assuming the above is OK)
  • add the ‘maze’ and fly in a straight 10m line from bottom to top again as a sanity check
  • add the Sweep and log its contents when doing the same 10m again
  • build the flight map in Excel based upon GPS, compass and sweep logs – the results should look like the map with the addition of what garden clutter lies beyond the end of each exit from the ‘maze’
  • add a new mapping process to do dynamically what has been done in Excel above
  • add object avoidance from Sweep and repeat – this is the hardest bit as it introduces dynamic updates to preconfigured flight plans
  • add ‘maze’ tracking code to reach a target GPS position, nominally the center of the ‘maze’ – this stage requires further thought to break it down further.
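For the compass / gyro fusion item above, a complementary filter is the obvious candidate; here’s a minimal sketch of the idea (radians throughout, and all the names are mine, not the flight code’s):

    import math

    def fuse_yaw(yaw_fused, gyro_yaw_rate, compass_yaw, dt, tau=1.0):
        # Complementary filter: trust the integrated gyro yaw rate short term,
        # drift slowly towards the absolute compass heading long term.
        alpha = tau / (tau + dt)
        yaw_gyro = yaw_fused + gyro_yaw_rate * dt

        # Apply the compass correction via the shortest way round the circle.
        error = math.atan2(math.sin(compass_yaw - yaw_gyro),
                           math.cos(compass_yaw - yaw_gyro))
        return yaw_gyro + (1 - alpha) * error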

A busy week…

First, the result: an autonomous 10m linear flight forwards:

You can see her stability degrade as she leaves the contrasting shadow area cast by the tree branches in the sunshine.  At the point chaos broke loose, she believed she had reached her 10m target and thus was descending; she’s not far wrong – the start and end points are the two round stones placed a measured 10m apart to within a few centimetres.

So here’s what’s changed in the last week:

  • I’ve added a heatsink to my B3 and changed the base layer of my Pimoroni Tangerine case, as the newer one has extra vents underneath to allow better airflow.  The aim here is to keep a stable temperature, partly to stop the CPU overheating and slowing down, but mostly to avoid IMU drift over temperature.  Note that normally the Pi3 has a shelf above carrying the large LiPo and acting as a sun-shield – you can just about see the poles that support it at the corners of the Pi3 Tangerine case.

    Hermione’s brain exposed

  • I’ve moved the down-facing Raspberry Pi V2.1 camera and Garmin LiDAR-Lite sensor to allow space for the Scanse Sweep sensor which should arrive in the next week or two courtesy of Kickstarter funding.
    Hermione’s delicate underbelly

    The black disk in the middle will seat the Scanse Sweep perfectly; it lifts it above the Garmin LiDAR Lite V3 so their lasers don’t interfere with each other; finally the camera has been moved far away from both to make sure they don’t feature in its video.

  • I’ve changed the fusion code so vertical and lateral values are fused independently; previously, if the camera was struggling to spot motion in dim light, the LiDAR height wasn’t fused either, and on some flights Hermione would climb during hover.
  • I’ve sorted out the compass calibration code so Hermione knows which way she’s pointing.  The code just logs the output currently, but soon it will be both fusing with the gyro yaw rate and interacting with the below…
  • I’ve added a new process tracking GPS position and feeding the results over a shared-memory OS FIFO file, in the same way the video macro-blocks are passed across now.  The reason both run in their own process is that each blocks while reading its sensor – one second for GPS and 0.1 seconds for video – and that degree of blocking must be completely isolated from the core motion processing.  As with the compass, the GPS data is just logged currently, but soon the GPS will be used to set the end-point of a flight; then, when launched from somewhere away from the target end-point, the combination of compass and GPS together will provide the sensor inputs to ensure the flight heads in the right direction and recognises when it’s reached its goal.  There’s a rough sketch of the FIFO plumbing below.
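The FIFO plumbing amounts to something like the sketch below – the names are made up (get_gps_fix() stands in for whatever blocks for a second waiting on the GPS) and it’s not the actual flight code:

    import os
    import struct

    FIFO_NAME = "/dev/shm/gps_fifo"    # assumed location; any path would do

    def make_fifo():
        if not os.path.exists(FIFO_NAME):
            os.mkfifo(FIFO_NAME)

    def gps_process(get_gps_fix):
        # Child process: get_gps_fix() stands in for whatever blocks for ~1s
        # waiting on the GPS; that blocking stays isolated in here.
        make_fifo()
        with open(FIFO_NAME, "wb", buffering=0) as fifo:
            while True:
                lat, lon, alt = get_gps_fix()
                fifo.write(struct.pack("!ddd", lat, lon, alt))

    def open_gps_fifo():
        # Motion-processing side: open non-blocking so an empty FIFO never
        # stalls the main loop.
        make_fifo()
        return os.open(FIFO_NAME, os.O_RDONLY | os.O_NONBLOCK)

    def read_gps_fixes(fd):
        # Drain whatever complete 24-byte fixes have arrived since last call.
        try:
            data = os.read(fd, 4096)
        except BlockingIOError:
            return []
        return [struct.unpack("!ddd", data[i:i + 24])
                for i in range(0, len(data) - len(data) % 24, 24)]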

As a result of all the above, I’ve updated GitHub.

On a cold winter’s morning…


Both flights use identical code.  There are two tweaks compared to the previous videos:

  1. I’ve reduced the gyro rate PID P gain from 25 to 20, which has hugely increased the stability
  2. Zoe is using my refined algorithm for picking out the peaks in the macro-block frames – I think this is working better, but there’s one further refinement I can make which should improve it yet further.

I’d have liked to show Hermione doing the same, but for some reason she’s getting FIFO overflows.  My best guess is that her A+ overclocked to turbo (1GHz CPU) isn’t as fast as a Zero’s default setting of 1GHz – no idea why.  My first attempt at fixing this has been improved scheduling, by splitting the macro-block vector processing into two phases:

  1. build up the dictionary of the set of macro-blocks
  2. process the dictionary to identify the peaks.

Zoe does this in one fell swoop; Hermione schedules each phase independently, checking in between that the FIFO hasn’t filled up to a significant level, and if it has, dealing with that first.  This isn’t quite working yet in passive testing, even on Zoe, and I can’t work out why!  More anon.
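The split looks something like this – the names (fifo_level, drain_fifo and friends) are illustrative rather than what’s in the code:

    def build_vector_dict(frame):
        # Phase 1: bucket identical (dx, dy) vectors and count how often each
        # occurs across the frame of macro-blocks.
        counts = {}
        for dx, dy, sad in frame:
            counts[(dx, dy)] = counts.get((dx, dy), 0) + 1
        return counts

    def find_peak(counts):
        # Phase 2: the most common vector is taken as the frame's lateral motion.
        return max(counts, key=counts.get) if counts else (0, 0)

    def process_frame(frame, fifo_level, drain_fifo, fifo_limit):
        counts = build_vector_dict(frame)

        # Between the two phases, check whether the FIFO has filled up to a
        # significant level; if it has, deal with that first.
        if fifo_level() > fifo_limit:
            drain_fifo()

        return find_peak(counts)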

30s with, <10s without

A few test runs.  In summary, with the LiDAR and camera fused with the IMU, Zoe stays over her play mat at a controlled height for the length of the 30s flight.  Without the fusion, she lasted just a few seconds before she drifted off the mat, lost her height, or headed towards me with menace (a kill ensued).  I think that’s pretty conclusive evidence the fusion code works!

With Fusion:

Without Fusion:

Better macro-block interpretation

Currently, getting lateral motion from a frame full of macro-blocks is very simplistic: find the average SAD value for the frame, and then only include those vectors whose SAD is lower.
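In other words, roughly this (numpy assumed, names mine):

    import numpy as np

    def lateral_motion(vectors, sads):
        # vectors: N x 2 array of macro-block (dx, dy); sads: matching SAD values.
        # Keep only the vectors whose SAD is below the frame average, and use
        # their mean as the lateral motion estimate.
        keep = sads < sads.mean()
        return vectors[keep].mean(axis=0)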

I’m quite surprised this works as well as it does, but I’m fairly sure it can be improved.  There are four factors to the content of a frame of macro-blocks:

  • yaw change: all macro-block vectors will circle around the centre of the frame
  • height change: all macro-block vectors will point towards or away from the centre of the frame
  • lateral motion: all macro-block vectors point in the same direction in the frame
  • noise: the whole purpose of macro-blocks is simply to find the best-matching blocks between two frames; doing this with a chess board (for example) could well have any block from the first frame matching any one of the 50% of similar blocks in the second frame.

Given a frame of macro-blocks, the yaw increment between frames can be found from the gyro, and its contribution thus removed easily.

The same goes for the height change, derived from the LiDAR.

That leaves either noise or lateral motion.  By averaging what’s left, we can pick the vectors that are similar in distance / direction to the average vector and discard the rest.  SAD doesn’t come into it.
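Here’s roughly what I have in mind – yaw_increment would come from the gyro and height_scale from the LiDAR-derived height change; the names and numpy shapes are mine, not the flight code’s:

    import numpy as np

    def lateral_motion(vectors, positions, yaw_increment, height_scale):
        # positions: each macro-block's offset from the frame centre (N x 2);
        # vectors: the macro-block motion vectors (N x 2).
        # Remove the rotational component caused by the yaw increment...
        rotational = yaw_increment * np.stack([-positions[:, 1],
                                               positions[:, 0]], axis=1)
        # ...and the radial component caused by the height change.
        radial = height_scale * positions

        residual = vectors - rotational - radial

        # What's left is lateral motion plus noise: keep the vectors close to
        # the average and re-average those - SAD doesn't come into it.
        average = residual.mean(axis=0)
        distance = np.linalg.norm(residual - average, axis=1)
        good = residual[distance < distance.mean()]
        return good.mean(axis=0) if len(good) else average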

This won’t be my first step however: that’s to work out why the height of the flight wasn’t anything like as stable as I’d been expecting.

Strutting her stuff

Finally, fusion worth showing.

Yes, the height’s a bit variable as she doesn’t get accurate height readings below about 20cm.

Yes, it’s a bit jittery because the scales of the IMU and the other sensors aren’t quite in sync.

But fundamentally, it works – nigh on zero drift for 20s.  With just the IMU, I couldn’t get this minimal level of drift for more than a few seconds.

Next steps: take her out for a longer, higher flight to really prove how well this is working.

Live Cold Fusion Test

I took Zoe outside (temperature 0°C – freezing point) to fly this morning to see what effect fusion would have – the fused values weren’t actually used to control the flights; they were logged purely to compare and contrast the two independent sensor sources.

Fusion was a complementary filter with the -3dB crossover set to 1s.
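In code terms the filter is nothing more than this (a sketch rather than the flight code itself):

    def complementary_filter(fused, short_term_increment, long_term_value,
                             dt, tau=1.0):
        # tau is the crossover time constant (1s here); dt the sample interval.
        # The short-term increment comes from the integrated accelerometer;
        # the long-term value comes from the Garmin LiDAR or the camera.
        alpha = tau / (tau + dt)
        return alpha * (fused + short_term_increment) + (1 - alpha) * long_term_value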

In all graphs,

  • blue comes from the Garmin LiDAR (Z axis) or Camera (X and Y axes)
  • orange comes from the accelerometer, with the necessary subtraction of gravity and integration
  • grey is the fused value – orange dominates short term, blue long term.

Fusion

In general, the shapes match closely, but there are some oddities I need to understand better:

  • horizontal velocity from the camera is very spiky – the average is right, and the fusion is hiding the spikes well – I’m assuming the problem is my code coping with the change of tilt of the camera compared to the ground.
  • the vertical height is wrong – at 3 seconds, Zoe should be at 90cm and levelling out.

I need to continue dissecting these stats – more anon.

Fusion thoughts

Here’s my approach to handling the tilt for downward facing vertical and horizontal motion tracking.

Vertical compensation

This is what I’d already worked out to take into account any tilt of the quadcopter frame and therefore LEDDAR sensor readings.

With this in place, this is the equivalent processing for the video motion tracking:

Horizontal compensation

The results of both are in the earth frame, as are the flight plan targets, so I think I’ll swap to earth-frame processing right up until the last step.
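Reduced to a sketch, the gist of the two diagrams is below; the sign conventions and exactly how the camera axes map to pitch and roll are assumptions on my part, not necessarily what the code will do:

    import math

    def earth_frame_height(distance, pitch, roll):
        # The LEDDAR measures along the (tilted) quad-frame Z axis; project
        # the reading onto the earth-frame vertical.
        return distance * math.cos(pitch) * math.cos(roll)

    def earth_frame_shift(dx, dy, height, d_pitch, d_roll):
        # The camera's apparent ground shift between frames includes a part
        # caused purely by the change of tilt (the ground 'slides' under the
        # lens by roughly height * tan(delta angle)); subtract that to leave
        # genuine lateral motion in the earth frame.
        return (dx - height * math.tan(d_pitch),
                dy - height * math.tan(d_roll))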

One problem: I’m now pushing the limit of the code keeping up with the sensors – with diagnostics on and 1kHz IMU sampling, the IMU FIFO overflows because the code can’t keep up.  This is with Zoe (1GHz CPU speed) and without LEDDAR.

LEDDAR has already forced me to drop the IMU sample rate to 500Hz on Chloe; I really hope this is enough to also allow LEDDAR, macro-block processing and diagnostics to work without FIFO overflow.  I really don’t want to drop to the next level of 333Hz if I can help it.
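To put numbers on it, assuming a 512-byte IMU FIFO and 12 bytes per sample (3 x 16-bit accelerometer plus 3 x 16-bit gyro readings), the head-room between unavoidable FIFO drains works out as:

    # Rough head-room arithmetic under the assumptions above.
    FIFO_BYTES = 512
    BYTES_PER_SAMPLE = 12

    for rate in (1000, 500, 333):
        fill_time_ms = 1000.0 * FIFO_BYTES / BYTES_PER_SAMPLE / rate
        print("%4dHz sampling: FIFO fills in ~%.0fms" % (rate, fill_time_ms))
    # 1000Hz: ~43ms, 500Hz: ~85ms, 333Hz: ~128ms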

Coding is in progress already.

Complementary sensors

LEDDAR One provides long-term distance and, by differentiation, long-term Z-axis velocity in the quad frame.

Scanse Sweep primarily provides 360° range finding; with lots of complex mapping between boundary frames – a frame comes from a join-the-dots plot of the flight zone – it should be possible to get quad-frame distance / direction / velocity movement vectors too.  However…

PX4FLOW provides horizontal velocities and height over I2C, meaning easy software integration and fusing with the current inputs to the velocity PIDs; although this is contrary to my DIY desire, it will fill the gap between now and the arrival of the Scanse Sweep.

There is a balancing act however:

  • Both LEDDAR and Scanse Sweep have longer ranges.
  • Scanse Sweep is the only sensor providing object detection and avoidance – and hence the possibility of fulfilling one of my dreams for my drone: to wander through a maze.
  • PX4FLOW is open hardware / source – as a result, a Chinese equivalent is what Reik Hua uses.  Using PX4FLOW would make LEDDAR redundant temporarily, but once I have the Scanse Sweep, PX4FLOW’s open-source software may well provide guidance on how to convert Scanse Sweep boundary distances into velocities, thus making PX4FLOW redundant and letting me swap back to LEDDAR.  The fact Scanse Sweep tracks an outline of the flight area rather than sequential photographs like PX4FLOW may mean I can do the processing in Python, which again would keep the processing within my DIY desire.
  • PX4FLOW will work outside in an open area as it uses a down-facing Z-axis camera to work out X- and Y-axis velocities, whereas Scanse Sweep needs an area with boundaries – this makes outdoor testing of PX4FLOW possible, which is always a necessity for initial testing of new sensors.

A fusion of all three would make a quad capable of flying anywhere at a couple of metres height.  That would be quite an achievement!

So I think I’ll be getting a PX4FLOW.  Annoyingly, PX4FLOW has gone out of stock just after I started blogging about it.  Is that evidence that people actually do read this blog?

Fusion flaw

I collected some diagnostics to start investigating the wobbles shown in the previous post’s video:

Fused velocities

It’s fairly clear the fusion bias is still strongly towards the accelerometer.  In a way I’m glad as it makes my next steps very clear.