10°C and all’s fine

I think the stability problem was a combination of the weight distribution of Hermione’s new long legs and (perhaps) cold LiPo power.  Putting her back to the body layout above, plus running the LiPo through a 33Ω resistor whenever she’s not plugged in (at a 4S voltage of ≈ 16V that’s ≈ 0.5A, or about 8W), has seen normal behaviour resume.  Due to some diagnostics tweaks, the code is updated on GitHub.
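For the record, here’s a quick sanity check on that resistor figure, using the nominal 16V quoted above:

# Dummy-load arithmetic for the 33 ohm resistor across a 4S LiPo at ~16V
V = 16.0            # nominal 4S voltage
R = 33.0            # resistor value
I = V / R           # ~0.48A drawn from the pack
P = V * I           # ~7.8W dissipated - the ~8W quoted above
print("%.2fA, %.1fW" % (I, P))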

SLAM dunk

One last brain dump before next week’s torture in Disneyland Paris: no, not crashing into inanimate objects; quite the opposite: Simultaneous Localisation And Mapping, i.e. mapping obstacles’ locations in space, avoiding them initially through a random choice of change in direction, recording both the obstacle locations and the trial-and-error avoidance, and in doing so feeding back into future less-randomized, more-informed direction changes – i.e. a.i.

My plan here, as always, ignores everything described about standard SLAM processes elsewhere and does it my way based upon the tools and restrictions I have:

  • SLAM processing is carried out by the autopilot process.
  • GPS feeds it at 1Hz as per now.
  • Sweep feeds it every time a near obstacle is spotted within a few meters – perhaps 5?
  • The map is a Python dictionary at 0.5m x 0.5m resolution, indexed by integer (x, y) units (i.e. the GPS distance measurements doubled), with a score as each entry’s value (the resolution is kept low due to GPS accuracy and Hermione’s physical size of 1m tip to tip)
  • GPS takeoff location = 0,0 on the map
  • During the flight, each GPS position is stored in the map location dictionary with a score of +100 points marking out successfully explored locations
  • Sweep object detections are also added to the dictionary, up to a limited distance of say 5m (to limit the feed from the Sweep process and to ignore blockages too far away to matter).  These have a score of say -1 point each, due to the multiple scans per second and the low-resolution conversion from centimetres to 0.5m cells
  • Together these high and low points define clear areas passed through and identified obstructions respectively, with unexplored areas having zero value points in the dictionary.
  • Height and yaw are fixed throughout the flight to keep the local Sweep and GPS orientations in sync.
  • The direction to travel within the map is towards the highest-scoring nearby area not yet visited, as defined by the map (a sketch of this scoring follows the list).
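To make the scoring above concrete, here’s a minimal sketch of that map dictionary; the names are mine for illustration and the real code will differ:

#-------------------------------------------------------------------------------
# Sketch of the SLAM map: 0.5m x 0.5m cells keyed by integer (x, y), takeoff at (0, 0)
#-------------------------------------------------------------------------------
CELL_SIZE = 0.5
slam_map = {}                              # (x, y) -> score

def cell(east_m, north_m):
    # Convert earth-frame metres into integer map units (i.e. double the GPS distances)
    return (int(round(east_m / CELL_SIZE)), int(round(north_m / CELL_SIZE)))

def log_gps(east_m, north_m):
    # +100 marks a successfully explored location
    c = cell(east_m, north_m)
    slam_map[c] = slam_map.get(c, 0) + 100

def log_sweep(east_m, north_m):
    # -1 per Sweep detection; multiple scans per second soon build a strongly negative score
    c = cell(east_m, north_m)
    slam_map[c] = slam_map.get(c, 0) - 1

def next_cell(here):
    # Head for the highest-scoring neighbouring cell not yet visited:
    # unexplored cells (0) beat obstructions (negative); visited cells (>= +100) are skipped
    x, y = here
    neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    unvisited = [c for c in neighbours if slam_map.get(c, 0) < 100]
    return max(unvisited, key=lambda c: slam_map.get(c, 0)) if unvisited else None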

The above processing is very similar to the existing code that processes the down-facing video macro-blocks to guess the most likely direction moved; as such, it shouldn’t be too hard to prototype.  Initially the map will just be dumped to file so the plausibility of this method can be checked in a 3D chart in Excel.


P.S. For the record, despite autonomous GPS testing being very limited, because the file-based flight plan works as well as or better than the previous version, I’ve uploaded the latest code to GitHub.

2m square

While waiting for half-decent weather, I’ve been tinkering and tweaking to make sure everything is as good as possible before moving on to the GPS tracking code. This is the result: a 2m square.

As a result, I’ve updated the code on GitHub.

“To mow, or not to mow, that is the question:

Whether ’tis easier for the macro-blocks to track
The clovers and daisies of contrasting colour…”

The answer is no, I shouldn’t have mown the lawn.  With the kids’ toys moved out of the way, and any ungreen contrasting features slain, there was nothing to distinguish one blade of shorn grass from another and Hermione drifted wildly.  Reinstating the kids’ chaos restored high quality tracking over 5m.

The point of the flight was to compare GPS versus macro-block lateral tracking.  Certainly over this 5m flight, down-facing video beat GPS hands down:

Lateral tracking

My best-guess interpretation of the GPS graph is that the flight actually ran from roughly 2m to 7m on the diagonal, heading north-west.  The camera POV doesn’t include compass data, so it’s correctly showing her flying forwards by 5m.  The compass code is not working accurately yet – it needs more investigation as to why – it was showing ~90° (i.e. East) rather than the true 45° (i.e. North East) shown by the GPS and a handheld compass.

I’ve done some more refinements to the scheduling of the sensor reads, and to the accuracy of the GPS data streamed from the GPS process.  It’s worth viewing this graph full screen – each spike shows the time in seconds between motion processing loops, i.e. the time spent processing other sensors; 10ms indicates just IMU data was processed.  The fact that no loop takes more than 0.042s*, even with full diagnostics running, means I could put the sampling rate back up to 1kHz – it’s at 500Hz at the moment.  More importantly, it shows the processing is nicely spread out: each sensor is getting its fair share of the processing and nobody is hogging the limelight.

Motion processing

As a result, I’ve updated the code on GitHub.


*42ms is the point where the IMU FIFO overflows at 1kHz sampling: 512-byte FIFO ÷ 12 bytes per sample ÷ 1kHz sampling rate ≈ 42.7ms
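The same arithmetic as a snippet, using the constants quoted above:

# How long the motion loop can spend elsewhere before the IMU FIFO overflows
FIFO_BYTES = 512.0          # FIFO size in bytes
SAMPLE_BYTES = 12.0         # bytes per IMU sample

def overflow_time(sample_rate_hz):
    return (FIFO_BYTES / SAMPLE_BYTES) / sample_rate_hz

print(overflow_time(1000))  # ~0.0427s - the 42ms ceiling at 1kHz sampling
print(overflow_time(500))   # ~0.0853s of headroom at the current 500Hz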

A busy week…

First, the result: an autonomous 10m linear flight forwards:

You can see her stability degrade as she leaves the contrasting shadow area cast by the tree branches in the sunshine.  At the point where chaos broke loose, she believed she had reached her 10m target and so was descending; she’s not far wrong – the start and end points are the two round stones placed a measured 10m apart to within a few centimetres.

So here’s what’s changed in the last week:

  • I’ve added a heatsink to my B3 and changed the base layer of my Pimoroni Tangerine case, as the newer one has extra vents underneath to allow better airflow.  The aim is to keep a stable temperature, partly to stop the CPU overheating and slowing down, but mostly to avoid IMU drift with temperature.  Note that normally the Pi3 has a shelf above carrying the large LiPo and acting as a sun-shield – you can just about see the poles that support it at the corners of the Pi3 Tangerine case.

    Hermione’s brain exposed

  • I’ve moved the down-facing Raspberry Pi V2.1 camera and Garmin LiDAR-Lite sensor to allow space for the Scanse Sweep sensor which should arrive in the next week or two courtesy of Kickstarter funding.
    Hermione’s delicate underbelly

    The black disk in the middle will seat the Scanse Sweep perfectly; it lifts it above the Garmin LiDAR Lite V3 so their lasers don’t interfere with each other; finally the camera has been moved far away from both to make sure they don’t feature in its video.

  • I’ve changed the fusion code so the vertical and lateral values are fused independently; previously, if the camera was struggling to spot motion in dim light, the LiDAR height wasn’t fused either, and on some flights Hermione would climb during hover.
  • I’ve sorted out the compass calibration code so Hermione knows which way she’s pointing.  The code just logs the output currently, but soon it will be both fusing with the gyrometer yaw rate, and interacting with the below….
  • I’ve added a new process tracking the GPS position and feeding the results over a shared-memory OS FIFO file, in the same way the video macro-blocks are passed across now.  The reason both run in their own processes is that each blocks while reading its sensor – one second for GPS and 0.1 seconds for video – and that degree of blocking must be completely isolated from the core motion processing (see the sketch after this list).  As with the compass, the GPS data is just logged currently, but soon the GPS will be used to set the end-point of a flight; then, when launched from somewhere away from that end-point, the combination of compass and GPS together will provide the sensor inputs to ensure the flight heads in the right direction and recognises when it’s reached its goal.
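Here’s a minimal sketch of that separate-process pattern, under stated assumptions: the FIFO path, the packed record format and the read_gps_fix() stub are mine for illustration, not the real flight code:

#-------------------------------------------------------------------------------
# GPS in its own process, streaming fixes over an OS FIFO; the motion loop uses
# select() so the ~1s blocking GPS read never stalls the IMU processing.
#-------------------------------------------------------------------------------
import os
import select
import struct
import time

GPS_FIFO = "/dev/shm/gps_fifo"             # assumed shared-memory FIFO location

def read_gps_fix():
    # Stand-in for the real blocking GPS read, which takes about a second
    time.sleep(1.0)
    return 51.5, -1.8, 100.0               # latitude, longitude, altitude

def gps_process():
    with open(GPS_FIFO, "wb", 0) as fifo:
        while True:
            lat, lon, alt = read_gps_fix()
            fifo.write(struct.pack("=ddd", lat, lon, alt))

if __name__ == "__main__":
    if not os.path.exists(GPS_FIFO):
        os.mkfifo(GPS_FIFO)

    if os.fork() == 0:                     # child process owns the blocking reads
        gps_process()
    else:
        fifo_fd = os.open(GPS_FIFO, os.O_RDONLY | os.O_NONBLOCK)
        while True:
            readable, _, _ = select.select([fifo_fd], [], [], 0.01)
            if fifo_fd in readable:
                data = os.read(fifo_fd, 24)
                if len(data) == 24:
                    lat, lon, alt = struct.unpack("=ddd", data)
                    print(lat, lon, alt)   # in reality, logged / fused into motion processing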

As a result of all the above, I’ve updated GitHub.

Cloudy conditions

The sun wasn’t shining brightly, so there was no high contrast on the lawn; the lawn had also been mowed, removing the contrasting grass clumps. Yet she still made a great attempt at a 1m square. I think this is about as good as she can get – it’s time for me to move on to adding compass, GPS and object avoidance.  The code has been updated on GitHub.

Buttercups and daisies…

are still lacking this spring, and having mown the lawn yesterday, features are hard to find for the video lateral tracking.  So I think this is a pretty good 37s hover.  In fact, I think it’s as good as it can be until the daisies start sprouting:

This is with a frame size of 640² pixels.  There’s a check in the code which reports whether the processing keeps up with the video frame rate.  At 640² it does; I tried 800² and 720² but the code failed to keep up with the video frame rate of 20fps.
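For what it’s worth, the keep-up check boils down to something like this sketch (the names are hypothetical; the real code times its handling of the camera’s macro-block stream):

import time

FPS = 20
FRAME_PERIOD = 1.0 / FPS                   # 50ms budget per frame at 20fps

def keeps_up(process_frame, frames):
    # True only if every frame's macro-block processing fits within the frame period
    late = 0
    for frame in frames:
        start = time.time()
        process_frame(frame)
        if time.time() - start > FRAME_PERIOD:
            late += 1
    return late == 0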

As a result, I’ve uploaded the changes to GitHub.  There’s work-in-progress code there for calibrating the compass (“calibrateCompass()”), although that’s turning out to be a right PITA.  I’ll explain more another time.

As a side note, my Mavic uses two forward-facing cameras to stereoscopically track horizontal movement, combined with GPS, a corresponding ground-facing pair of cameras and IMU accelerometer integration; yet if you watch the frisbee / baseball bat to the left, even the Mavic drifts.

Latest code on GitHub

Up to now, I’ve deferred adding code to GitHub until a particular development phase is over.  However, so much has changed in the past few months, I ought to share.  Features:

  • X8 optional configuration
  • more precise and efficient scheduling using select() allowing for extra sensors…
  • LEDDAR – fully tested
  • PX4FLOW – tested to the extent the quality of the PX4FLOW allows
  • ViDAR – testing in progress
  • Garmin LIDAR-Lite V3 – arrival imminent
  • Compass – tested but unused except for logging
  • Fusion – tested, but each sensor source requires different fusion parameters

The compass function is unused except for logging.  The ViDAR and Fusion features require at least a height sensor and further calibration.  Therefore, I strongly recommend setting

self.camera_installed = False

unless you want to see how well it isn’t working yet.

You can enable logging of the ViDAR stats, without including them in the fusion, by setting the above to True and also setting these two variables to False:

 #-------------------------------------------------------------------------------
 # Set the flags for horizontal distance and velocity fusion
 #-------------------------------------------------------------------------------
 hdf = True
 hvf = True

This code comes with absolutely no warranty whatsoever – even less than it normally does.  Caveat utilitor.

Left (a)drift

I flew Phoebe earlier with the LEDDAR working well.  As the graph below shows, she repeatedly drifted left and then stopped.  This is the expected behaviour given the top-level PIDs control velocity, not distance: drift is stopped, but not reversed.  On that basis, I’ve updated the code on GitHub.

Left (a)drift

To get her back to where she took off from, I need another set of ‘pseudo-PIDs’ to recognise and correct the distance drifted.  I’m going to keep this as simple as possible:

  • I’ll continue to use my velocity flight plan – integrating this over time will provide the ‘target’ for the distance PIDs in the earth reference frame
  • I’ll integrate the velocity (rotated back to the earth frame) over time to get the distance PID ‘input’ – although this is double integration of the accelerometer, it should be good enough short-term, based upon the close match between the graph and the flight I watched.
  • critically, I’ll be using a fixed-magnitude velocity output from the distance ‘pseudo-PID’, rotated back to the quad frame, as the input to the existing velocity PIDs – the input and target of the distance ‘pseudo-PID’ only provide the direction of the correction, not its speed (see the sketch after this list).
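As a rough sketch of that idea – the function name and the fixed correction speed are mine for illustration; the distance error only chooses the direction, never the magnitude:

import math

CORRECTION_SPEED = 0.3                     # fixed correction speed in m/s (assumed value)

def distance_pseudo_pid(target_x, target_y, actual_x, actual_y):
    # Earth-frame distance error; only its direction is used
    ex = target_x - actual_x
    ey = target_y - actual_y
    distance = math.hypot(ex, ey)
    if distance < 0.1:                     # dead-band: close enough, no correction needed
        return 0.0, 0.0
    # Fixed-magnitude velocity target, to be rotated into the quad frame and fed
    # to the existing velocity PIDs
    return CORRECTION_SPEED * ex / distance, CORRECTION_SPEED * ey / distance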

This should be a relatively simple change which will have a visible effect on killing hover drift, or allowing intentional horizontal movement in the flight plan.

After that, I’ll add the compass for yaw control so Phoebe is always facing the way she’s going, but that’s for another day.

Code update

I’ve updated the code on GitHub.  The only significant change is that the Raspberry Pi camera now works correctly with the restructured code.

As far as flight quality is concerned, it’s better than the earlier versions from before I added the class code and switched to the IMU FIFO, but there’s a long way to go before it’s good enough.

I don’t think there’s much else I can do without adding the laser sensors, and that means it’s going to continue to be quiet here for a while.