That’s all, folks…*

I think I’ve done all that can be done with my Raspberry Pi, Python piDrones.  The code is updated on GitHub as a result.  Here are the Vimeo links to the proof of the pudding, as it were:

The hardest by far, though, was the simplest in concept: a stable autonomous hover beyond a few seconds.  Each of the cool functions listed above probably took a few weeks on average; in contrast, the one-minute-plus hover took years.

There are lots more videos on Vimeo, linked to the blog via pidrone.io/video.

I’ve achieved my personal target and then some: taking a Raspberry Pi and Python to create a piDrone, starting from absolute ignorance, and doing it my way without using others’ code, ideas or suggestions.

What else could be done?  My only idea is long distance / time flights requiring:

  1. GPS fused with the existing distance sensors
  2. Compass fused with existing orientation sensors
  3. Long range wireless connectivity
  4. Nuclear-fusion batteries.

Lack of #4 renders 1-3 pointless.

Also pointless, sadly, is Penelope; Hermione, my big 3B, is the queen of autonomous control, and Zoe, my Pi0W, the queen of human control.  Two’s company, three’s a crowd.  The only unique thing I can do with ‘P’ is to get her RPi 3B+ and Stretch O/S support completed, and my motivation is lacking; that makes Penelope the queen of spare parts 😥

Time for me to find another hobby to hold back my terminal-boredom syndrome.  On my bike, methinks.

So long, and thanks for all the fish!


* …nearly.  I’m doing some refinement for Zoe, primarily so I can take her to the Cotswold Raspberry Jams, plus try anything new and exciting the RPi Foundation releases next.

Obstruction avoidance test 2 – PASSED!!!!!

After a minor tweak to the handling resolution of the Scanse Sweep data, all works brilliantly.

This is a five-metre forwards flight, with the flight paused and later resumed once the obstacle has been avoided.  Note that ‘H’ tracks the obstruction at about one metre away.  Hence she flies a quarter-circle around the circular cardboard tube before continuing the forward flight once the obstruction is behind her.

The code is updated on GitHub as a result.

“Penelope” or “Lucy”…

…as in the Amazon series “Lucifer”?  I’ll stick with Ms. Pitstop despite the colour scheme; Lucifer never shows up on Tuesdays.

Separated at birth?

She’s still pending the new version of the Garmin LIDAR-Lite v3HP – the lower-profile, higher-accuracy version of Hermione’s and Zoe’s height-tracking LiDAR.  She’s also waiting for a new PCB so she can have a buzzer, though that’s not holding her back in the same way.  She’ll intentionally not have a Scanse Sweep as it’s very expensive for a non-critical sensor.

My intent had been to make her lower-profile, sleek and stealthy to enable longer flights per battery, hence the shorter legs, lower hat and the 13 x 4.4 CF props (compared to Hermione’s 12 x 4.4 Beechwoods).  However, her hat and feet prevent this – the feet are true lacrosse balls, so heavier than Hermione’s indoor ones, and her salad-bowl top also seems heavier.  Overall then, ‘H’ weighs in at 4.8kg all installed, and Penelope 4.7kg.  Thus the main benefit is likely that she’ll be nippier due to slightly more power from the lighter, larger CF props combined with the raised centre of gravity.  In fact, this raised CoG and the lighter, larger props may well reduce the power needed – we shall see.

In the background, I am working on the “Pond Problem”: fusing GPS distance / direction with the other sensors.  The code’s nigh-on complete, but I’ve yet to convince myself it will work well enough to test it immediately over the local gravel lakes.

Obstruction Avoidance Overview

Here’s what ‘H’ does on detecting an obstacle within one metre:

Now it’s time to get her to move around the obstacle and continue to her destination.  All sensors are installed and working; this is pure code in the autopilot process*.

Here’s the general idea for how the code should work for full maze exploration (a minimal Python sketch follows the list):

  • Takeoff, then…
    • hover
    • record the current GPS location in the map dictionary, including the previous location (i.e. index “here I am” : content “here I came from”, in units of metres, held in a Python dictionary)
    • do a Sweep 360° scan for obstacles
    • find the next direction based on the current map contents, either…
      • an unexplored, unobstructed direction (beyond 2m), biased towards the target GPS point (e.g. the centre of the maze), or…
      • a previously visited location, marking the current location in the map dictionary as blocked to avoid further return visits
    • head off in the new direction until…
      • an obstacle is found in the new direction, or…
      • an unexplored direction (i.e. not in the map so far) is found
    • repeat
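
To make the bookkeeping concrete, here’s a minimal Python sketch of the breadcrumb dictionary and direction choice; sweep_scan() is a stubbed-out stand-in for the real Sweep feed, and the whole thing illustrates the rules above rather than being the flight code:

import math

# Hypothetical stand-in for the Sweep interface: one 360° scan returning
# (azimuth radians, distance metres) pairs - in reality this arrives from
# the Sweep process, not a function call.
def sweep_scan():
    return [(math.radians(az), 3.0) for az in range(0, 360, 10)]

BLOCKED = None       # marker: been here, dead end, don't return
breadcrumbs = {}     # {(x, y) "here I am" : (x, y) "here I came from"}, metres

def next_waypoint(here, came_from, target):
    # Record how we arrived so we can backtrack from dead ends later.
    if here not in breadcrumbs:
        breadcrumbs[here] = came_from

    # Keep only directions clear beyond 2m.
    clear = [az for az, distance in sweep_scan() if distance > 2.0]

    # Prefer unexplored cells, biased towards the target's bearing.
    bearing = math.atan2(target[1] - here[1], target[0] - here[0])
    def cell(az):
        return (round(here[0] + math.cos(az)), round(here[1] + math.sin(az)))
    def turn(az):
        return abs(math.atan2(math.sin(az - bearing), math.cos(az - bearing)))
    unexplored = [az for az in clear if cell(az) not in breadcrumbs]

    if unexplored:
        return cell(min(unexplored, key=turn))

    # Dead end: mark this cell as blocked and retreat the way we came.
    retreat, breadcrumbs[here] = breadcrumbs[here], BLOCKED
    return retreat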

And in fact, this same set of rules is required for just avoiding obstacles, which is good, as I doubt I’ll ever be able to find / build a suitably sized maze, and if I did, the LiPo would run out long before the centre of the maze was reached.


* The fact it’s pure code means it’s going to be quiet on the blogging front apart from GPS tracking videos when the weather warms up.  I’m also considering building a new version of Hermione from the spares I have in stock, provisionally called “Penelope”.  She’s there for shows only; I can then use Hermione purely for testing new features without worrying about breaking her prior to an event.

5 years later…

It was around Christmas 2012 that I started investigating an RPi drone, and the first post was at the end of January ’13.

5 years later, phase ‘one’ is all but done; everything below, bar the first item, is a minor, mostly optional extra:

  • Track down the GPS tracking instability – my best guess is reduced LiPo power as the flight progresses in near-zero temperature conditions.
  • Get Zoe working again – she’s been unused for a while – and perhaps add GPS support, though that may not be possible because she’s just a single-CPU Pi0W.
  • Fuse the magnetometer / gyro 3D readings for long-term angle stability, particularly yaw, which has no backup long-term sensor beyond the gyro.
  • Add a level of yaw control such that ‘H’ always points the way she’s flying – currently she always points in the direction she was facing at takeoff.  I’ve tried this several times, and it’s always had a problem I couldn’t solve.  Third time lucky.
  • Upgrade the operating system to Raspbian Stretch, with the corresponding requirements for the I2C fix and the network WAP / udhcpd / dnsmasq configuration, which currently keep the OS stuck on Jessie from the end of February 2017.
  • Upgrade the sampling rates: camera + LiDAR from 10Hz to 20Hz, camera resolution from 320² to 480² pixels, and IMU from 500Hz to 1kHz.  However, every previous attempt to upgrade one leaves the scheduling unable to process the others – I suspect I’ll need to wait for a Raspberry Pi 4 or 5 for the increased performance.

Looking into the future…

  • Implement (ironically named) SLAM object mapping and avoidance with Sweep, ultimately aimed at maze navigation – just two problems here: no mazes wide enough for ‘H’s clearance, and the AI required to remember and react so as to explore only unexplored areas in the search for the centre.
  • Fuse GPS latitude, longitude and altitude, down-facing LiDAR + video, and ΣΣ acceleration δt δt for vertical + horizontal distance – this requires further connections between the various processes such that GPS talks to the motion process, which does the fusion.  It enables higher-altitude flights where the LiDAR / video can’t ‘see’ the ground – there are subtleties here in swapping between GPS and video / LiDAR depending on which is working best at a given height above the ground, based on some fuzzy logic (see the sketch after this list).
  • Use the down-facing camera for height and yaw as well as lateral motion – this is more a proof of concept, restrained by the need for much higher resolution video, which currently isn’t possible with the RPi 3B.
  • Find a cold-fusion nuclear battery bank for flight from the Cotswolds, UK to Paris, France landing in Madrid, Spain or Barcelona, Catalonia!
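
On the GPS / LiDAR swap in the second bullet, the fuzzy logic could be as crude as a linear cross-fade between the two height sources.  A sketch, where the 10m and 20m thresholds are illustrative guesses rather than measured LiDAR limits:

def fused_height(lidar_m, gps_alt_m, gps_alt_takeoff_m,
                 lidar_max=10.0, gps_min=20.0):
    # GPS altitude is absolute, so convert it to height above the takeoff point.
    gps_agl = gps_alt_m - gps_alt_takeoff_m

    # Cross-fade: all LiDAR below 10m, all GPS above 20m, linear blend between.
    if lidar_m <= lidar_max:
        weight = 0.0
    elif lidar_m >= gps_min:
        weight = 1.0
    else:
        weight = (lidar_m - lidar_max) / (gps_min - lidar_max)
    return (1.0 - weight) * lidar_m + weight * gps_agl

print(fused_height(15.0, 112.0, 100.0))  # 13.5m: a 50/50 blend at 15m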

These future aspirations are dreams unlikely to become reality due to power supply, CPU performance or WiFi reach.  Although the WiFi range may be solvable now, the others need future technology, at least one of which may not be available within my lifetime :-).

Wishing you all a happy New Year and a great 2018!

Finally worth showing…

First video purely sets some context for the second:

This second is what the post’s about:

So here’s my take on “What was happening”:

The second video shows the GPS tracking is essentially working, except for the ‘minor’ fact she completely overran the target landing point, and once again, I ended the flight by encroaching on her personal space, i.e. the Sweep saw me coming and switched over to an orderly landing.

The problem is, I don’t think the problem’s mine.  Two facts to know before going further: the flight was twenty seconds long and GPS updates happen once per second.  So walking through the various logs from each process involved…

GPS process

GPS process logs

There are 18 GPS readings, plus the prerecorded target added to the graph manually by me afterwards.  18 readings is in line with the 20s flight, and the GPS-defined distance between the take-off and target points is a convincing 2.6m based on what you can see in the video.  What’s wrong is that during the flight, those 18 GPS readings returned only 2 distinct values, shown in blue in the graph; they’re in the correct direction relative to the target, which is great, but the distance between them is only about 0.27m.  This then explains everything that was wrong during the flight: because the GPS readings never got to within 1m of the target, the flight continued, and because the 2nd point was in the right direction, the flight went in a straight line until I got in the way.

Autopilot process

Here’s what the autopilot saw.  All that really matters is that there were only 2 distinct GPS reading points, and the autopilot passed those two on to the motion process as distance / direction targets:

AP: PHASE CHANGE: RTF
AP: PHASE CHANGE: TAKEOFF
AP: PHASE CHANGE: HOVER
AP: # SATS: ...
AP: PHASE CHANGE: GPS: WHERE AM I?
AP: GPS TRACKING
AP: GPS NEW WAYPOINT
AP: GPS TRACKING UPDATE
AP: PHASE CHANGE: GPS TARGET 3m -151o
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: PHASE CHANGE: GPS TARGET 2m -150o
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: PROXIMITY LANDING
AP: PHASE CHANGE: PROXIMITY (0.97m)
AP: LANDING COMPLETE
AP: FINISHED

Motion Process

Motion processing ignores the distance – it just proceeds at a fixed speed in the direction specified.  The flight only ends when the GPS process says she’s at the target GPS location, so the motion process just keeps moving in the direction defined by the autopilot at a fixed speed of 0.3m/s.  The down-facing video shows this well.  Note that the -150° yaw specified by the autopilot matches beautifully with the direction flown based on the gyro (where anti/counter-clockwise is positive).
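
For illustration, resolving the autopilot’s direction into velocity targets is just a pair of trig calls.  A sketch, assuming x forward / y left at takeoff with anticlockwise-positive angles as per the gyro note above (not the actual motion-process code):

import math

def velocity_target(direction_deg, speed=0.3):
    # Fixed speed in the autopilot's direction, resolved into the
    # takeoff-orientation x (forward) and y (left) axes.
    theta = math.radians(direction_deg)
    return speed * math.cos(theta), speed * math.sin(theta)

print(velocity_target(-150))  # roughly (-0.26, -0.15) m/s for the flight above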

Down-facing video

The flight in reality travelled about 3.7m by the time I got in the way; had she received a GPS point saying she’d overshot, she’d have doubled back, but that never happened.

Why did the GPS receiver not see the movement beyond the first 0.27m?  I’m adamant it ain’t my fault (for a change), and the GPS receiver is the best I’ve found so far when tested passively.  Any ideas, anyone?

As a by-the-by, in the second video you’ll see both the LiPo (powering the motors) and the LiIon (powering the RPi and sensors) now have electronic skiers’ hand / pocket warmers – without these, Hermione struggles to get off the ground or read all the sensors now the temperature outside is less than 10°C.

Workable waypoints

With the car off the drive, and selecting waypoints away from significant obstacles, I got this:

Waypoints

Still not perfect – there are 2.5 metres between the identical start and end points, but it’s good enough.  Waypoint 3 here is probably to blame; I think its distance from waypoint 2 is too far – it’s certainly the most ‘cluttered’ point in the test, with a tree, a stone wall and overhead telephone wires all within a few metres of it.  The first two waypoints will suffice: by starting the flight halfway between waypoints two and three, the flight goes south for 4.5m before flying ENE for another 4 metres.  All 3 points (takeoff and the 2 waypoints) have no obstacles nearby, so Sweep can be enabled to land the test flight if a code or GPS waypoint error brings Hermione to within a couple of metres of an obstacle.  That’s what I’ll be trying next.

SLAM dunk

One last brain dump before next week’s torture in Disneyland Paris: no, not crashing into inanimate objects; quite the opposite: Simultaneous Localisation And Mapping, i.e. how to map obstacles’ locations in space, initially attempting to avoid them through a random choice of change in direction, mapping both the object locations and the trial-and-error avoidance, and in doing so feeding back into future less-randomised, more-informed direction changes, i.e. AI.

My plan here, as always, ignores everything described about standard SLAM processes elsewhere and does it my way based upon the tools and restrictions I have:

  • SLAM processing is carried out by the autopilot process.
  • GPS feeds it at 1Hz as per now.
  • Sweep feeds it every time a near obstacle is spotted within a few metres – perhaps 5?
  • The map is a Python dictionary at 0.5m x 0.5m resolution, indexed by integer cell units (i.e. twice the GPS distance measurements), whose values are scores (the resolution is kept low due to GPS accuracy and Hermione’s physical size of 1m tip to tip)
  • GPS takeoff location = 0,0 on the map
  • During the flight, each GPS position is stored in the map dictionary with a score of +100 points, marking out successfully explored locations
  • Sweep object detections are also added to the dictionary, up to a limited distance of say 5m (to limit the feed from the Sweep process and ignore blockages too far away to matter).  These score say -1 point each due to multiple scans per second and the low-res conversion of cm to 0.5m
  • Together these high and low scores define clear areas passed through and identified obstructions respectively, with unexplored areas having a zero score in the dictionary.
  • Height and yaw are fixed throughout the flight to keep the local Sweep and GPS orientation in sync.
  • The direction to travel within the map is towards the highest-scoring area not yet visited, as defined by the map.
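
As a plausibility check of the bullets above, here’s a minimal Python sketch of the scoring map; the cell size, scores and 5m cap come from the list, while the names and the neighbour chooser are made up for illustration:

import math
from collections import defaultdict

CELL = 0.5                            # metres per map cell
score_map = defaultdict(int)          # {(x, y) cell : score}; 0 = unexplored

def cell(x_m, y_m):
    return (int(round(x_m / CELL)), int(round(y_m / CELL)))

def record_gps(x_m, y_m):
    score_map[cell(x_m, y_m)] += 100  # +100: successfully explored location

def record_sweep(x_m, y_m, azimuth, distance_m):
    if distance_m <= 5.0:             # ignore blockages too far away to matter
        score_map[cell(x_m + distance_m * math.cos(azimuth),
                       y_m + distance_m * math.sin(azimuth))] -= 1

def next_cell(x_m, y_m):
    # Head for the best-scoring unvisited neighbour: unexplored (0) beats
    # obstructed (negative), and already-visited (+100) cells are skipped.
    cx, cy = cell(x_m, y_m)
    neighbours = [(cx + dx, cy + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    return max((c for c in neighbours if score_map[c] < 100),
               key=lambda c: score_map[c], default=None)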

The above code and processing is very similar to the existing code processing the down-facing video macro-blocks to guess the most likely direction moved; as such, it shouldn’t be too hard to prototype.  Initially the map is just dumped to file so that the plausibility of this method can be eyeballed in an Excel 3D chart.


P.S. For the record, despite autonomous GPS testing being very limited, because the file-based flight plan works as well as or better than the previous version, I’ve uploaded the latest code to GitHub.

Encroaching Hermione’s personal space

With the Scanse Sweep installed underneath (yes, she has blind spots from her legs and the WiFi antenna), any object detected between 50cm (the distance to the tips of her props) and 1m (her personal-space boundary) now triggers a controlled landing.  The same thing would happen if the obstacle wasn’t me approaching her but instead her approaching a brick wall: a vertical controlled descent to ground.

There’s a lot more that can be built on this; the Sweep is rotating at 1Hz (it can do up to 10Hz), and it’s taking about 115 samples per loop, each reporting both the rotation position (azimuth) and the distance to the nearest object at that rotation.  Currently the code only collects the shortest distance per loop, and if that’s under 1m, the standard file-based flight plan is replaced with a dynamically created descent flight plan based upon the height Hermione should have reached at that point in the file-based flight plan.
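
In outline, the per-loop check boils down to something like the following sketch; the revolution list stands in for one loop’s worth of (azimuth, distance) samples from the Sweep process, and none of the names are the real sweep-sdk API:

PROP_TIPS = 0.5   # metres: returns closer than this are Hermione's own frame
PERSONAL = 1.0    # metres: her personal-space boundary

def shortest_credible(revolution):
    # One ~115-sample loop of (azimuth degrees, distance metres) pairs; keep
    # the shortest return that isn't Hermione herself.
    return min((d for _, d in revolution if d > PROP_TIPS),
               default=float("inf"))

# e.g. a fake loop of clear 3m returns plus an intruder at 0.88m
revolution = [(az, 3.0) for az in range(0, 360, 3)] + [(180, 0.88)]
nearest = shortest_credible(revolution)
if nearest < PERSONAL:
    print("ABORT (%.2fm): swap to the dynamic descent flight plan" % nearest)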

Here’s the layout of communication between the 5 processes involved:

          +—————+     +—————————+
          |Sweep|——>——|Autopilot|——>——+
          +—————+     +—————————+     |
                                      |
                         +———+     +——————+
                         |GPS|——>——|Motion|
                         +———+     +——————+
                                      |
                          +—————+     |
                          |Video|——>——+
                          +—————+

The latest code updates are on GitHub.

Next step is to move GPS to also feed into the autopilot.  The move is easy, just a couple of minutes to change which process starts the GPS process; the difficult bit is how the autopilot should handle that extra information.  Currently the plan is that before a flight, Hermione is taken to the desired end-point of the flight, and she captures the GPS coordinates there.  Then she’s moved somewhere else, pointing in any direction; on take-off, she finds her current GPS position, and the autopilot builds a dynamic flight plan to the end-point; all the constituent parts of the code are already in place.  It’s just the plumbing that needs careful creation.
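
For what it’s worth, turning the captured end-point and the take-off fix into the distance / direction the autopilot needs is only a couple of lines.  A sketch using an equirectangular approximation (plenty accurate at these ranges), with the bearing in degrees clockwise from north rather than the flight code’s own anticlockwise convention:

import math

EARTH_RADIUS = 6371000.0  # metres

def leg(takeoff, target):
    # Fixes are (latitude, longitude) in degrees.
    lat1, lon1 = map(math.radians, takeoff)
    lat2, lon2 = map(math.radians, target)
    east = (lon2 - lon1) * math.cos(0.5 * (lat1 + lat2)) * EARTH_RADIUS
    north = (lat2 - lat1) * EARTH_RADIUS
    return math.hypot(east, north), math.degrees(math.atan2(east, north))

# e.g. a ~3m hop: metres to fly, and the bearing to fly it on
print(leg((51.78000, -2.00000), (51.78002, -2.00003)))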


P.S. That was the first live test flight, hence the slightly nervous look on my face, and my step backwards once she’d detected my intrusion!

P.P.S.: Proof that the abort was triggered, courtesy of the logs:

[CRITICAL] (MainThread) fly 3467, ASCENT
[CRITICAL] (MainThread) fly 3467, HOVER
[CRITICAL] (MainThread) fly 3467, ABORT (0.88m)
[CRITICAL] (MainThread) fly 3467, STOP
[CRITICAL] (MainThread) fly 4087, Flight time 16.627974

An uninterrupted flight would have run for 22s, with descent starting at 18s.