The lady’s not for turning

Around here in the south Cotswolds there are lakes, hundreds of them, left behind once the Cotswold stone and gravel have been mined from the ground.  People swim, sail, canoe, windsurf and powerboat race around the area.  It’d be cool for a piDrone to fly from one side of a lake to the other, tracking the terrain, as shown in this gravel pit just 2 minutes’ walk from my house.  ‘H’ starts at the left, moves over the pond, climbs up and over the gravel and lands on the far side:

Surface gravel mining

But there’s a significant problem: neither the ground-facing video nor the LiDAR works over water.  For the down-facing video, there’s no contrast on the water surface for it to track horizontal movement.  For the LiDAR, the problem comes when moving: the piDrone leans to move, the laser beam doesn’t reflect back to the receiver, and the height readings stop working.

But there is a solution already in hand which I suspect is easy to implement, has little impact on code performance, and makes an amazing difference to survival over water: GPS is currently used in the autopilot process to compare where she is currently located against the target location, passing the required speed and direction through to the motion process; it would be nigh-on trivial to also pass the horizontal distance and altitude difference since takeoff through to the motion process too.

These fuse with the existing ed*_input code values thus:

  • Horizontally, the GPS always fuses with the down-facing PiCamera such that if / when the ground surface doesn’t have enough contrast (or she’s travelling too fast for consecutive video frames to overlap), the GPS will still keep things moving in the right direction and at the right speed.
  • Vertically is more subtle; as mentioned above, the LiDAR fails when the ground surface doesn’t bounce the laser back to the receiver, perhaps due to a surface reflection problem or simply because her maximum range of 40m has been exceeded.  In both cases, the LiDAR returns 1cm as the height to report the problem.  Here’s where the GPS kicks in, reporting the altitude since takeoff until the LiDAR starts getting readings again; there’s a sketch of this after the list.
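Here’s a minimal sketch of that fusion logic; the names are illustrative rather than the real ed*_input code, and the 80/20 blend weighting is purely an assumption:

    # Illustrative sketch only, not the real ed*_input code.
    LIDAR_NO_READING = 0.01  # the 1cm value the LiDAR returns on failure

    def fuse_horizontal(video_velocity, gps_velocity, video_ok, video_weight=0.8):
        # Blend video and GPS horizontal velocity estimates; if the video
        # has lost tracking (no contrast / no frame overlap), GPS alone.
        if not video_ok:
            return gps_velocity
        return video_weight * video_velocity + (1.0 - video_weight) * gps_velocity

    def fuse_vertical(lidar_height, gps_altitude):
        # LiDAR is the primary height source; GPS altitude-since-takeoff
        # is the backup while the LiDAR reports its 1cm error value.
        if lidar_height <= LIDAR_NO_READING:
            return gps_altitude
        return lidar_height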

Like I’ve said, it’s only a few lines of relatively simple code.  The question is whether I have the puppies’ plums to try it out over the local lakes.  I am highly tempted, as it’s a lot more real than the object avoidance code, for which there will never be a suitable maze.  I think my mind is changing direction rapidly.

Refinement time.

Sorry it’s been quiet; the weather’s been awful, so no videos to show.  Instead, I’ve been tinkering to ensure the code is as good as it can be prior to moving on to object avoidance / maze tracking.

Zoe is back to life once more to help test the “face the direction you’re flying” tweaks, as these don’t quite work yet.  She’s back because my first few attempts with ‘H’ kept breaking props.  First job for ‘Z’ was to have her run the same code as Hermione, but with the autopilot moved back inline to reduce the number of processes as much as possible for her single-CPU Pi0W, in comparison with Hermione’s 4-CPU 3B.

Then:

  • I’ve started running the code with ‘sudo python -O ./qc.py’ to enable optimisation.  This disables assertion checking, and hopefully other stuff too, for better performance (see the example after this list).
  • I’ve tweaked the Butterworth filter parameters to track gravity changes faster, as Zoe’s IMU is exposed to the cold winds and her accelerometer values rise rapidly.
  • I’ve refined the Garmin LiDAR-Lite V3 code to cope with an occasional ‘no reading’, caused when no laser reflection is detected; this does happen occasionally (and correctly) if she’s tilted and the surface bouncing the laser points it the wrong way.
  • I’d also hoped to add a “data ready interrupt” to the LiDAR to reduce the number of I2C requests made; however, the interrupts still don’t happen despite trying all 12 config options.  I think the problem is Garmin’s, so I’m awaiting a response from them on whether the flaw is fixed in a new model to be launched in the next few weeks.  In the meantime, I only call the GLL over I2C when there are video results which need the GLL vertical height to convert the video pixel movements into horizontal movement in meters.
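On the first point, ‘-O’ works by setting __debug__ to False and compiling assert statements away entirely; a trivial illustration (the function is made up, not from qc.py):

    # Run as 'python -O example.py' and the assert vanishes at compile time.
    def batches_in_fifo(fifo_bytes):
        assert fifo_bytes % 12 == 0, "partial IMU batch in FIFO"
        return fifo_bytes // 12

    if __debug__:
        print("assertions active: add -O for flights")
    else:
        print("optimised: assertions stripped")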

Having added and tested all the above sequentially, the net result was failure: less bad a failure than previously, but failure nonetheless; the video tracking lagged in order to avoid the IMU FIFO overflowing.  So in the end, I changed Zoe’s video resolution to 240² pixels @ 10fps (‘H’ is at 320² pixels @ 10fps), and she can now hover on the grass, which means I can get on with the “face where you’re going” code.

I do think all the other changes listed are valid and useful, and as a result, I’ve updated the code on GitHub.

In passing, I had also been investigating whether the magnetometer could be used to back up the pitch, roll and yaw angles long term, but that’s an abject failure; with the props on idle prior to takeoff, it works fine, giving the orientation to feed to the GPS tracking process, but once airborne, the magnetometer values shift by ±40°, varying with the direction she’s travelling even while she’s facing the same way.

5 years later…

It was around Christmas 2012 that I started investigating an RPi drone, and the first post was at the end of January ’13.

5 years later, phase ‘one’ is all but done, barring the list below, of which all but the first are minor, mostly optional extras:

  • Track down the GPS tracking instability – best guess is reduced LiPo power as the flight progresses in near-zero temperature conditions.
  • Get Zoe working again – she’s been unused for a while – and perhaps, if possible, add GPS support, although this may not be possible because she’s just a single-CPU Pi0W.
  • Fuse the magnetometer / gyro 3D readings for long-term angle stability, particularly yaw, which has no long-term backup sensor beyond the gyro.
  • Add a level of yaw control such that ‘H’ always points the way she’s flying – currently she always points in the direction she was facing at takeoff.  I’ve tried this several times, and it’s always had a problem I couldn’t solve.  Third time lucky.
  • Upgrade the operating system to Raspbian Stretch, with the corresponding requirements for the I2C fix and the network WAP / udhcpd / dnsmasq setup, which currently mean the OS is stuck on Jessie from the end of February 2017.
  • Upgrade the camera + LiDAR sampling from 10Hz to 20Hz, the camera frames from 320² to 480² pixels, and the IMU sampling from 500Hz to 1kHz.  However, every previous attempt to upgrade one leaves the scheduling unable to process the others – I suspect I’ll need to wait for the Raspberry Pi B 4 or 5 for the increased performance.

Looking into the future…

  • Implement (ironically named) SLAM object mapping and avoidance with Sweep, ultimately aimed at maze navigation – just two problems here: no mazes wide enough for ‘H’ clearance, and the AI required to remember and react so as to explore only unexplored areas in the search for the centre.
  • Fuse the GPS latitude, longitude and altitude, the down-facing LiDAR + video, and the ΣΣ acceleration δt δt integration for vertical + horizontal distance – this requires further connections between the various processes such that GPS talks to the motion process, which does the fusion.  It enables higher-altitude flights where the LiDAR / video can’t ‘see’ the ground – there are subtleties here in swapping between GPS and video / LiDAR depending on which is working best at a given height above the ground, based on some fuzzy logic.
  • Use the down-facing camera for height and yaw as well as lateral motion – this is more a proof of concept, restrained by the need for much higher resolution video, which currently isn’t possible with the RPi B3.
  • Find a cold-fusion nuclear battery bank for flight from the Cotswolds, UK to Paris, France landing in Madrid, Spain or Barcelona, Catalonia!

These future aspirations are dreams unlikely to become reality due to power supply, CPU performance or WiFi reach.  Although the WiFi range may be solvable now, the others need future technology, at least one of which may not be available within my lifetime :-).

Wishing you all a happy New Year and a great 2018!

Speedy Gonzales

Currently, all speeds, both horizontal and vertical, are set to 0.3m/s for the sake of safety in enclosed arenas like indoors and the walled back garden.  The downside is that in the park, with the GPS waypoints perhaps 20m apart, it takes a very long time, often over a minute, between waypoints, wearing out the batteries in a few flights.

The limitation other than safety is to ensure the down-facing video can track the difference between frames, which means there needs to be a significant overlap between consecutive frames.

The video runs at 10Hz*.  The RPi camera’s angle of view (AOV) is 48.8°.  With the camera 1m off the ground (the standard set throughout all flights)**, 48.8° corresponds to about 0.9m of ground per frame (2 × 1m × tan (AOV / 2)).  Assuming there needs to be a 90% overlap between frames to get accurate video macro-block vectors, every 0.1s Hermione can move up to 9cm (10%), or 0.9m/s, compared to the current 0.3m/s.  I’ll be trying this out on the GPS tracking flights in the park tomorrow.
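The same sums as a few lines of Python, with the 90% overlap as the working assumption:

    import math

    AOV = math.radians(48.8)  # RPi camera angle of view
    HEIGHT = 1.0              # standard flight height (m)
    FPS = 10                  # video frame rate (Hz)
    OVERLAP = 0.9             # required overlap between consecutive frames

    footprint = 2 * HEIGHT * math.tan(AOV / 2)   # ground seen per frame: ~0.91m
    max_speed = footprint * (1 - OVERLAP) * FPS  # ~0.91 m/s
    print("footprint %.2fm, max speed %.2fm/s" % (footprint, max_speed))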


*10Hz seems to be about the highest video frequency that the macro-block processing can handle without causing other sensor processing to overflow – specifically the IMU FIFO.

**1 meter height is for the sake of safety, and because at 320² pixels the video macro-blocks can resolve distance accurately on grass and gravel.  Doubling the height requires quadrupling the video frame size to 640² to get the same resolution on grass / gravel, and once again, the processing time required would cause the IMU FIFO to overflow.


P.S. The weather isn’t yet good enough for the GPS tracking flights in the park, but I did take Hermione into the back garden this morning to test her increased horizontal velocity; she happily ran at 1m/s over the grass, so that will be the new speed for the much longer distance GPS flights, reducing Hermione’s flight time and hence both her and the DJI Mavic’s battery drain.

Video Distance + Compass Direction ≈ GPS

Distance + Direction = GPS

By human measurement, the distance was about 7m at about 45° (i.e. NE).  GPS says 8.6m; video camera tracking says 5m, which is the length defined in the flight plan.

It was never going to be perfect due to the difference between magnetic and true north, the GPS resolution of around 1m, and the fact that video distance tracking will always be a best guess, but it’s more than good enough for my cunning plan to work.

However, the plan’s taking a premature diversion; during this test, I was less careful and she ended up (in vertical descent mode) clipping 5 props against the stone wall of the drive.  The next step (after replacing the props!) is now to deploy my Scanse Sweep code, which will trigger an orderly landing if any object is detected less than 1.5m away – Hermione’s radius is 50cm, measured diagonally to a prop tip, so that’s 1m clearance.
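The shape of that trigger is simple enough to sketch; here ‘scan’ stands in for one rotation of (distance, angle) samples, which is not how the real Scanse API presents them:

    PROXIMITY_LIMIT = 1.5  # metres: 0.5m radius plus 1m clearance

    def proximity_breached(scan, limit=PROXIMITY_LIMIT):
        # True if anything in this 360 degree scan is inside the limit,
        # at which point the orderly landing should be triggered.
        return any(distance < limit for distance, angle in scan)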

One interesting point: the compass readings are mostly in a very dense cluster, with just a few (relatively speaking) pointing in very different directions – those are from when Hermione passed the family car!

A difference of opinion.

By lowering the video frame rate to 10Hz, I’ve been able to increase the video resolution to 720² pixels.  In addition, I’ve increased the video contrast to 100%.  Together these now provide enough detail to track lateral motion on the lawn.  Drift in a hover is non-existent, so the next step was to try a flight around a 2m square.  That’s where the disagreement showed itself:

Difference of opinion

  • Top left is the flight plan up to the point I killed the flight: 2 meters forwards and left by 0.35m
  • Top right shows the 90° anticlockwise yaw so she points the way she’s going
  • Bottom left is the track picked up by the PiCamera macro-blocks
  • Bottom right is the track derived by double integrating the accelerometer.

Both agree on the forward motion of about 2 meters, but the disagreement arises at the point she turns left.  The right of the pair is correct, based on my independent third-party view of the flight; although she was pointing left, she flew right from my point of view, i.e. backwards from her point of view.  I’ve clearly got the maths back-to-front in the lateral motion tracking.  I’m pretty sure of the offending line of code, and the fix is trivial, but I’m really struggling to convince myself why what’s there is wrong.
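For what it’s worth, the suspect maths is the rotation between Hermione’s frame and the earth frame; under my own naming, these are the two easily-confused forms:

    import math

    def body_to_earth(x, y, yaw):
        # Rotate a drone-frame vector into the earth frame.
        c, s = math.cos(yaw), math.sin(yaw)
        return c * x - s * y, s * x + c * y

    def earth_to_body(x, y, yaw):
        # The inverse: negating yaw (i.e. transposing the rotation matrix).
        # Using one where the other belongs is exactly the kind of swap
        # that sends 'left' out as 'right'.
        return body_to_earth(x, y, -yaw)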

Luckily, during the flights, there were a number of high-torque landings which ultimately broke the bracket for one of Hermione’s legs.  Until the replacement arrives from Poland, I have plenty of time to kill convincing myself why the existing code is wrong.

Buttercups and daisies…

are lacking as yet this spring, and having mown the lawn yesterday, features are hard to find for the video lateral tracking.  So I think this is a pretty good 37s hover.  In fact, I think it’s as good as it can be until the daisies start sprouting:

This is with a frame size of 640² pixels.  There’s a check in the code which reports whether the code keeps up with the video frame rate.  At 640² it does; I tried 800² and 720², but the code failed to keep up with the video frame rate of 20fps.

As a result, I’ve uploaded the changes to GitHub.  There’s work-in-progress code there for calibrating the compass “calibrateCompass()”, although that’s turning out to be a right PITA.  I’ll explain more another time.

As a side note, my Mavic uses two forward-facing cameras to track horizontal movement stereoscopically, combined with GPS, a corresponding ground-facing pair of cameras and IMU accelerometer integration, yet if you watch the frisbee / baseball bat to the left, even the Mavic drifts.

Chicken poo tracking

If you look at yesterday’s video full screen, from top left to right, you can see a muddy patch and two chicken poos, the second poo of which is close to Hermione’s front left prop on take-off.  I was back out in the dark last night, tracking them down.  Here’s why:

Lateral tracking

Although the graph of the camera’s lateral tracking and the Mavic video are almost facsimiles in direction, the scale is out; the graph shows the distance from take-off to landing to be about 1.7m, whereas a tape measure from chicken poo #2 to the cotoneaster shrubbery landing point reads about 4.2m.  Given how accurate the direction is, I don’t think there’s any improvement needed in the macro-block processing – simply a scale factor change of ≈ 2.5.  I wish I knew more about the video compression method for generating macro-blocks, to understand what this 2.5 represents – I don’t like the idea of adding an arbitrary scale of 2.5.
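As a sketch of where such a scale factor would slot in (the function and its arguments are my own naming; only the 2.5 comes from the measurement above):

    import math

    AOV = math.radians(48.8)
    MACRO_BLOCK_SCALE = 2.5  # empirical: tape measure / macro-block distance

    def vector_to_metres(vx_pixels, vy_pixels, height, frame_pixels):
        # Ground distance spanned by one pixel at this height...
        metres_per_pixel = 2 * height * math.tan(AOV / 2) / frame_pixels
        # ...then the empirical correction applied to the frame's vector.
        return (vx_pixels * metres_per_pixel * MACRO_BLOCK_SCALE,
                vy_pixels * metres_per_pixel * MACRO_BLOCK_SCALE)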

One further point from yesterday’s video: you can see she yaws clockwise by a few degrees on takeoff – Hermione’s always done this, and I think the problem is her yaw rate PID needing more P and less I gain.  Something else for me to try next.


I had tried Zoe first as she’s more indestructible.  However, her Pi0W can only cope with 400 x 400 pixel video, whereas Hermione’s Pi B 2+ can cope with 680 x 680 pixel video (and perhaps higher with the 5-phase motion processing), which seems to work well with the chicken-trashed lawn.

That’s better…

not perfect, but dramatically better.  The flight plan was:

  • take off to 1m
  • move left over 6s at 0.25m/s while simultaneously rotating ACW 90° to point in the direction of travel
  • land.

I’d spent yesterday’s wet weather rewriting the macro-block processing code, breaking it up into 5 phases (sketched in code after the list):

  1. Upload the macro-block vectors into a list
  2. For each macro-block vector in the list, undo the yaw that has happened between this frame and the previous one
  3. Fill a dictionary indexed by the un-yawed macro-block vectors
  4. Scan the dictionary, identifying clusters of vectors and assigning scores, building a list of the highest-scoring vector clusters
  5. Average the few highest-scoring clusters, redo the yaw undone in step 2, and return the resultant vector
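Condensed into code, the five phases look something like this; the naming and the neighbour-scoring rule are my guesses at a sketch, not the real GitHub code:

    import math
    from collections import defaultdict

    def process_frame(vectors, yaw_delta):
        # Phase 1 happens upstream: 'vectors' is the uploaded list of
        # (x, y) macro-block vectors for this frame.

        # Phase 2: undo the yaw since the previous frame.
        c, s = math.cos(-yaw_delta), math.sin(-yaw_delta)
        unyawed = [(c * x - s * y, s * x + c * y) for x, y in vectors]

        # Phase 3: fill a dictionary indexed by the un-yawed vectors.
        buckets = defaultdict(int)
        for x, y in unyawed:
            buckets[(round(x), round(y))] += 1

        # Phase 4: score each entry by its own count plus its neighbours',
        # keeping the highest-scoring clusters.
        def score(key):
            kx, ky = key
            return sum(buckets.get((kx + dx, ky + dy), 0)
                       for dx in (-1, 0, 1) for dy in (-1, 0, 1))
        best = sorted(buckets, key=score, reverse=True)[:3]

        # Phase 5: average the top clusters, redo the yaw from phase 2.
        ax = sum(x for x, y in best) / len(best)
        ay = sum(y for x, y in best) / len(best)
        c, s = math.cos(yaw_delta), math.sin(yaw_delta)
        return c * ax - s * ay, s * ax + c * ay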

Although this is quite a lot more processing, splitting it into five phases, compared to yesterday’s code’s two, means that between each phase the IMU FIFO can be checked, and emptied if it’s filling up, thus avoiding a FIFO overflow.

Two more subtle problems remain:

  1. She should have stayed in frame
  2. She didn’t quite rotate the 90° to head left.

Nonetheless, I once more have a beaming success smile.

Crash back down to earth.

Normality has returned.  I took Hermione out to test what video resolution she could process.  It turns out 640 x 640 pixels (40 x 40 macro-blocks) was yesterday’s video frame size.  800 x 800 pixels (50 x 50 macro-blocks) this morning was a lot less stable, and I think 960 x 960 pixels (60 x 60 macro-blocks) explains why.

At 960 x 960, Hermione leapt up into the sky; the height breach protection killed the props at 1.5m but by then she was accelerating so hard she probably climbed to 3m before dropping back down like a brick onto the hard stone drive upside-down.  Luckily only 2 props got trashed as that’s an exact match for the spares I had left.

Rocketing into the sky appears to be Hermione’s symptom of an IMU FIFO overflow.  For some reason, Hermione doesn’t catch the FIFO overflow interrupt, so she just carries on, but now with gravity reading much less than it should because the FIFO contents have shifted: she’s actually treating gyro readings as accelerometer ones, and so she accelerates hard to compensate.  The shift happens because the FIFO is 512 bytes and I’m filling it with 12-byte batches; 512 % 12 != 0, so an overflow discards part of a batch.
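The arithmetic in miniature (names mine, register handling omitted):

    FIFO_SIZE = 512
    BATCH = 12  # ax, ay, az, gx, gy, gz as six 16-bit values

    print(FIFO_SIZE % BATCH)  # 8: an overflow discards a partial batch

    def readable_bytes(fifo_count):
        # Only ever read whole batches; once the stream has slipped by
        # those 8 bytes, 'accelerometer' fields are really gyro data.
        return (fifo_count // BATCH) * BATCH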

How does this explain the 800 x 800 wobbles?  My best guess is that those 2500 macro-blocks (800² / 16²) are processed just fast enough to avoid the FIFO overflow seen at 960², but have a big impact on the scheduling, such that instead of the desired 100Hz updates to the motors, it’s a lot nearer the 23Hz lower limit imposed by the code.  That means less frequent, larger changes.

So that means I need to find the balance between video frame size and the IMU sampling rate filling the FIFO, to get the best of both.  Luckily, with indestructible Zoe imminently back in the running, I can test with her instead.