Close, but no cigar; the avoidance direction got ‘H’ too close to the obstacle, triggering the critical landing, but a good first test nonetheless.
…as in the Amazon series “Lucifer”? I’ll stick with Ms. Pitstop despite the colour scheme; Lucifer never shows up on Tuesdays.
She’s still pending the new version of the Garmin LIDAR-Lite v3HP – the lower-profile, higher-accuracy version of Hermione and Zoe’s height-tracking LiDAR. She’s also waiting for a new PCB so she can have a buzzer, though that’s not holding her back in the same way. She’ll intentionally not have a Scanse Sweep as it’s very expensive for a non-critical sensor.
My intent had been to make her lower profile, sleek and stealthy to enable longer flights per battery – hence the shorter legs, lower hat and the 13 x 4.4 CF props (compared to ‘H’s 12 x 4.4 Beechwoods). However her hat and feet prevent this: the feet are true lacrosse balls, so heavier than Hermione’s indoor ones, and her salad-bowl top also seems heavier. Overall then, ‘H’ weighs in at 4.8kg fully installed, and Penelope at 4.7kg. Thus the main benefit is likely to be that she’ll be nippier, due to slightly more power from the lighter, larger CF props combined with the raised centre of gravity. In fact, this raised CoG and the lighter, larger props may well reduce the power needed – we shall see.
In the background, I am working on the “Pond Problem”: fusing GPS distance / direction with the other sensors. Code’s nigh on complete but I’m yet to convince myself it will work well enough to test it immediately over the local gravel lakes.
Here’s what ‘H’ does on detecting an obstacle within one meter:
Now it’s time to get her to move around the obstacle and continue to her destination. All sensors are installed and working; this is pure code in the autopilot process*.
Here’s the general idea for how the code should work for full maze exploration:
- Takeoff, then…
- record the current GPS location in the map dictionary, including the previous location (i.e. index “here I am” : content “here I came from”, in units of meters, held in a Python dictionary)
- do Sweep 360° scan for obstacles
- find the next direction based on the current map contents – either…
- an unexplored, unobstructed direction (beyond 2m) biased towards the target GPS point (e.g. the centre of the maze)
- a previously visited location, marking the current location in the map dictionary as blocked to avoid further return visits
- head off in the new direction until…
- an obstacle is found in the new direction
- an unexplored direction (i.e. not in the map so far) is found
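In Python, the breadcrumb dictionary and direction choice described above might be sketched like this – a minimal sketch, with all function names, the whole-metre rounding and the candidate ordering being my own assumptions rather than the real flight code:

```python
# Minimal sketch of the "here I am" : "here I came from" breadcrumb map.
# Coordinates are (x, y) tuples in metres relative to takeoff; all names
# here are illustrative assumptions, not the real flight code.

def record_position(breadcrumbs, here, came_from):
    # map each visited location to the location it was reached from
    breadcrumbs[here] = came_from

def next_direction(breadcrumbs, here, blocked, candidates):
    # candidates: unobstructed (x, y) cells from the Sweep scan,
    # pre-sorted so those biased towards the target come first
    for cell in candidates:
        if cell not in breadcrumbs and cell not in blocked:
            return cell                     # unexplored and unobstructed
    # dead end: mark this cell blocked and backtrack the way we came
    blocked.add(here)
    return breadcrumbs[here]

breadcrumbs = {}
blocked = set()
record_position(breadcrumbs, (0, 0), (0, 0))   # takeoff point
record_position(breadcrumbs, (0, 2), (0, 0))   # moved 2m north
nxt = next_direction(breadcrumbs, (0, 2), blocked, [(0, 4), (2, 2)])
print(nxt)   # (0, 4) - the first unexplored candidate
```

When every candidate has already been visited, the function falls back to the breadcrumb entry for the current cell, which is exactly the "return the way I came, marking here as blocked" rule above.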
And in fact, this same set of rules is required for plain obstacle avoidance, which is good, as I doubt I’ll ever be able to find or build a suitably sized maze, and if I did, the LiPo would run out long before the centre of the maze was reached.
* The fact it’s pure code means it’s going to be quiet on the blogging front apart from GPS tracking videos when the weather warms up. I’m also considering building a new version of Hermione from the spares I have in stock, provisionally called “Penelope”. She’s there for shows only; I can then use Hermione purely for testing new features without worrying about breaking her prior to an event.
It was around Christmas 2012 that I started investigating an RPi drone, and the first post was at the end of January ’13.
5 years later, phase ‘one’ is all but done, barring the following – all but the first of which are minor, mostly optional extras:
- Track down the GPS tracking instability – my best guess is reduced LiPo power as the flight progresses in near-zero temperature conditions.
- Get Zoe working again – she’s been unused for a while – and perhaps, if possible, add GPS support, although this may not be possible because she’s just a single-CPU Pi0W.
- Fuse the magnetometer / gyro 3D readings for long-term angle stability, particularly yaw, which has no long-term backup sensor beyond the gyro.
- Add a level of yaw control such that ‘H’ always points the way she’s flying – currently she always points in the same direction she took off at. I’ve tried this several times, and it’s always had a problem I couldn’t solve. Third time lucky.
- Upgrade the operating systems to Raspbian Stretch, with the corresponding requirements for the I2C fix and the network WAP / udhcpd / dnsmasq setup, which currently mean the OS is stuck with Jessie from the end of February 2017.
- Upgrade the camera + LiDAR sampling from 10Hz with 320² pixel video and 500Hz IMU sampling to 20Hz, 480² pixels and 1kHz respectively. However, every previous attempt to update one leaves the scheduling unable to process the others – I suspect I’ll need to wait for the Raspberry Pi B 4 or 5 for the increased performance.
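For the yaw fusion item above, the standard approach is a complementary filter: integrate the gyro for short-term accuracy and nudge towards the magnetometer heading to kill long-term drift. A minimal sketch, where the function name and the 0.98 coefficient are my assumptions, not values from the real code:

```python
import math

def fuse_yaw(yaw, gyro_z, mag_yaw, dt, alpha=0.98):
    """Complementary filter: gyro integration dominates short term,
    magnetometer heading corrects long-term drift.
    alpha and all names here are illustrative assumptions."""
    gyro_yaw = yaw + gyro_z * dt               # integrate the gyro rate
    # wrap the magnetometer correction to the nearest equivalent angle
    error = math.atan2(math.sin(mag_yaw - gyro_yaw),
                       math.cos(mag_yaw - gyro_yaw))
    return gyro_yaw + (1.0 - alpha) * error

# drifting gyro (0.01 rad/s bias) against a steady magnetometer heading of 0
yaw = 0.0
for _ in range(1000):
    yaw = fuse_yaw(yaw, gyro_z=0.01, mag_yaw=0.0, dt=0.01)
print(round(yaw, 4))   # 0.0049 - the drift settles at the bias/filter equilibrium
```

Without the magnetometer term, the same gyro bias would have drifted the yaw estimate by a full 0.1 rad over those 10 simulated seconds; with it, the error is bounded.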
Looking into the future…
- Implement (ironically named) SLAM object mapping and avoidance with Sweep, ultimately aimed at maze navigation – just two problems here: no mazes wide enough for ‘H’s clearance, and the AI required to remember and react so as to explore only unexplored areas in the search for the centre.
- Fuse GPS latitude, longitude and altitude with the down-facing LiDAR + video and the double-integrated acceleration (ΣΣ acceleration δt δt) for vertical + horizontal distance – this requires further connections between the various processes, such that GPS talks to the motion process, which does the fusion. It enables higher-altitude flights where the LiDAR / video can’t ‘see’ the ground – there are subtleties here in swapping between GPS and video / LiDAR depending on which is working best at a given height above the ground, based on some fuzzy logic.
- Use the down-facing camera for height and yaw as well as lateral motion – this is more a proof of concept, restrained by the need for much higher resolution videos which currently aren’t possible with the RPi B3.
- Find a cold-fusion nuclear battery bank for flight from the Cotswolds, UK to Paris, France landing in Madrid, Spain or Barcelona, Catalonia!
These future aspirations are dreams unlikely to become reality due to power supply, CPU performance or WiFi reach. Although a solution to the WiFi range may be achievable now, the others need future technology, at least one of which may not be available within my lifetime :-).
Wishing you all a happy New Year and a great 2018!
First video purely sets some context for the second:
This second is what the post’s about:
So here’s my take on “What was happening”:
The second video shows the GPS tracking is essentially working, except for the ‘minor’ fact she completely overran the target landing point, and once again I ended the flight by encroaching on her personal space, i.e. the Sweep saw me coming and switched over to an orderly landing.
The problem is, I don’t think the problem’s mine. Two facts to know before going further: the flight was twenty seconds long and GPS updates happen once per second. So walking through the various logs from each process involved…
There are 18 GPS readings, plus the prerecorded target added to the graph manually by me afterwards. 18 readings is in line with the 20s flight, and the GPS-defined distance between the take-off and target points is a convincing 2.6m based on what you can see in the video. What’s wrong is that during the flight, those 18 GPS readings returned only 2 distinct values, shown in blue in the graph; they’re in the correct direction relative to the target, which is great, but the distance between them is only about 0.27m. This explains everything that went wrong during the flight: because the GPS readings never got to within 1m of the target, the flight continued, and because the 2nd point was in the right direction, the flight went in a straight line until I got in the way.
Here’s what the autopilot saw. All that really matters is there were only 2 distinct GPS reading points, and the autopilot passed those two on to the main motion processing process as distance / direction target:
AP: PHASE CHANGE: RTF
AP: PHASE CHANGE: TAKEOFF
AP: PHASE CHANGE: HOVER
AP: # SATS: ...
AP: PHASE CHANGE: GPS: WHERE AM I?
AP: GPS TRACKING
AP: GPS NEW WAYPOINT
AP: GPS TRACKING UPDATE
AP: PHASE CHANGE: GPS TARGET 3m -151o
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: PHASE CHANGE: GPS TARGET 2m -150o
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: GPS TRACKING UPDATE
AP: PROXIMITY LANDING
AP: PHASE CHANGE: PROXIMITY (0.97m)
AP: LANDING COMPLETE
AP: FINISHED
Motion processing ignores the distance – it just proceeds at a fixed speed in the direction specified. The flight ends when the GPS process says it is at the target GPS location, so the motion process just keeps moving in the direction defined by autopilot at a fixed speed of 0.3m/s. The down facing video shows this well. Note that the -150° yaw specified by the autopilot matches beautifully with the direction flown based on the gyro (where anti/counter clockwise is positive).
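That fixed-speed behaviour amounts to converting the autopilot’s direction into velocity components. A tiny sketch, where the axis convention (x forward, anticlockwise positive, matching the yaw sign noted above) is my assumption:

```python
import math

def velocity_target(bearing_deg, speed=0.3):
    """Convert an autopilot direction into x/y velocity targets.
    Convention assumed here: x forward, y left, anticlockwise positive,
    matching the 'anticlockwise is positive' yaw in the post."""
    bearing = math.radians(bearing_deg)
    return (speed * math.cos(bearing), speed * math.sin(bearing))

vx, vy = velocity_target(-150)   # the -150 degree target from the logs
print(round(vx, 3), round(vy, 3))   # -0.26 -0.15
```

Whatever the direction, the magnitude of the resulting velocity vector stays at the fixed 0.3m/s, which is why the flight distance depends purely on how long the GPS process keeps the flight alive.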
The flight in reality travelled about 3.7m by the time I got in the way; had she received a GPS point saying she’d overshot, she’d have doubled back, but that never happened.
Why didn’t the GPS receiver see the movement beyond the first 0.27m? I’m adamant it ain’t my fault (for a change), and the GPS receiver is the best I’ve found so far when tested passively. Any ideas, anyone?
As a by-the-by, in the second video you’ll see both the LiPo (powering the motors) and the LiIon (powering the RPi and sensors) now have electronic skiers’ hand / pocket warmers – without these, Hermione struggles to get off the ground or to read all the sensors now the temperature outside is less than 10°C.
With the car off the drive, and selecting waypoints away from significant obstacles, I got this:
Still not perfect – there are 2.5 meters between the identical start and end points, but it’s good enough. Waypoint 3 here is probably to blame; I think its distance from waypoint 2 is too far – it’s certainly the most cluttered point in the test, with a tree, a stone wall and overhead telephone wires all within a few meters of it. The first two waypoints will suffice by starting the flight halfway between waypoints two and three, so the flight goes south for 4.5m before flying ENE by another 4 meters. All 3 points (takeoff and the 2 waypoints) have no obstacles, so Sweep can be enabled to land the test flight if a code or GPS waypoint error brings Hermione to within a couple of meters of an obstacle. That’s what I’ll be trying next.
One last brain dump before next week’s torture in Disneyland Paris: no, not crashing into inanimate objects; quite the opposite: Simultaneous Localisation And Mapping, i.e. how to map obstacles’ locations in space, attempting to avoid them initially through a random choice of change in direction, mapping both the object locations and the trial-and-error avoidance, and in doing so feeding back into future less-randomized, more-informed direction changes, i.e. AI.
My plan here, as always, ignores everything described about standard SLAM processes elsewhere and does it my way based upon the tools and restrictions I have:
- SLAM processing is carried out by the autopilot process.
- GPS feeds it at 1Hz as per now.
- Sweep feeds it every time a near obstacle is spotted within a few meters – perhaps 5?
- The map is a 0.5m x 0.5m resolution Python dictionary, indexed by integer cell coordinates (i.e. twice the resolution of the ~1m GPS measurements), whose values are scores (the resolution is low due to GPS accuracy and Hermione’s physical size of 1m tip to tip)
- GPS takeoff location = 0,0 on the map
- During the flight, each GPS position is stored in the map location dictionary with a score of +100 points marking out successfully explored locations
- Sweep object detections are also added to the dictionary, up to a limited distance of say 5m (to limit the feed from the Sweep process and to ignore blockages too far away to matter). These have a score of say -1 point each, due to the multiple scans per second and the low-res conversion of cm to 0.5m
- Together these high and low points define clear areas passed through and identified obstructions respectively, with unexplored areas having zero value points in the dictionary.
- Height and yaw are fixed throughout the flight to keep the local Sweep and GPS orientations in sync.
- The direction to travel within the map is the highest scoring next area not yet visited as defined by the map.
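The scored map above might be prototyped along these lines – a sketch only, with the helper names, neighbour search and exact scoring mechanics being my assumptions on top of the scores stated in the plan:

```python
# Sketch of the 0.5m-resolution scored map: +100 for visited GPS cells,
# -1 per Sweep obstacle hit, 0 (i.e. absent) for unexplored cells.
# Scores follow the plan above; all other details are assumptions.

RESOLUTION = 0.5   # metres per map cell

def to_cell(x, y):
    # convert metres (GPS / Sweep coordinates) to integer cell indices
    return (int(round(x / RESOLUTION)), int(round(y / RESOLUTION)))

slam_map = {}

def mark_visited(x, y):
    cell = to_cell(x, y)
    slam_map[cell] = slam_map.get(cell, 0) + 100

def mark_obstacle(x, y):
    cell = to_cell(x, y)
    slam_map[cell] = slam_map.get(cell, 0) - 1

def best_neighbour(x, y):
    # highest-scoring adjacent cell not yet visited (score below +100)
    cx, cy = to_cell(x, y)
    neighbours = [(cx + dx, cy + dy) for dx in (-1, 0, 1)
                  for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    def score(cell):
        s = slam_map.get(cell, 0)
        return s if s < 100 else -float("inf")   # skip visited cells
    return max(neighbours, key=score)

mark_visited(0.0, 0.0)    # takeoff = cell (0, 0), scored +100
mark_obstacle(0.6, 0.1)   # a Sweep hit lands in cell (1, 0), scored -1
print(best_neighbour(0.0, 0.0))
```

Unexplored cells score 0, so they naturally beat obstacle cells (negative) while visited cells are excluded, matching the "clear / obstructed / unexplored" split described above.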
The above code and processing is very similar to the existing code processing the down-facing video macro-blocks to guess the most likely direction moved; as such, it shouldn’t be too hard to prototype. Initially the map is just dumped to file so the plausibility of this method can be checked as a 3D chart in Excel.
P.S. For the record, despite autonomous GPS testing being very limited, because the file-based flight plan works as well as or better than the previous version, I’ve uploaded the latest code to GitHub.
With Scanse Sweep installed underneath (yes, she has blind spots from her legs and the WiFi antenna), any object detected between 50cm (the distance to tip of her props) and 1m (her personal space boundary) now triggers a controlled landing. The same thing would happen if the obstacle wasn’t me approaching her, but instead, her approaching a brick wall: a vertical controlled descent to ground.
There’s a lot more that can be built on this; the Sweep is rotating at 1Hz (it can do up to 10Hz), and it’s taking about 115 samples per loop, each reporting both the rotation position (azimuth) and the distance to the nearest object at that rotation. Currently the code only collects the shortest distance per loop, and if that’s under 1m, the standard file-based flight plan is replaced with a dynamically created descent flight plan based upon the height that Hermione should have reached at that point in the file-based flight plan.
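The per-rotation processing described here (keep only the shortest sample, trigger a descent inside the personal-space band) can be sketched as below; the sample format and function names are assumptions, not the real code:

```python
# Sketch of the per-rotation check: keep only the shortest of the
# ~115 (azimuth, distance) samples, and trigger a landing when it falls
# inside the 0.5m - 1m personal-space band. Names are assumptions.

PROP_TIP = 0.5        # metres from centre to prop tip
PERSONAL_SPACE = 1.0  # metres: the landing trigger boundary

def check_rotation(samples):
    """samples: list of (azimuth_degrees, distance_metres) from one 1Hz loop.
    Returns the shortest valid distance, or None if nothing was seen."""
    distances = [d for _, d in samples if d > PROP_TIP]  # ignore self-echoes
    return min(distances) if distances else None

def must_land(samples):
    nearest = check_rotation(samples)
    return nearest is not None and nearest < PERSONAL_SPACE

scan = [(0.0, 3.2), (90.0, 0.88), (180.0, 2.4), (270.0, 4.1)]
print(check_rotation(scan), must_land(scan))   # 0.88 True
```

Filtering out anything closer than the prop tips crudely discards returns from the legs and antenna blind spots; the real code presumably handles those shadows more carefully.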
Here’s the layout of communication between the 5 processes involved:
+-------+      +-----------+
| Sweep |-->---| Autopilot |---+
+-------+      +-----------+   |
                               v
+-------+                 +--------+
|  GPS  |------->---------| Motion |
+-------+                 +--------+
                               ^
+-------+                      |
| Video |------->--------------+
+-------+
The latest code updates are on GitHub.
Next step is to move GPS to also feed into Autopilot. The move is easy – just a couple of minutes to change which process starts the GPS process; the difficult bit is how the autopilot should handle that extra information. Currently the plan is that before a flight, Hermione is taken to the desired end-point of the flight, and she captures the GPS coordinates there. Then she’s moved somewhere else, pointing in any direction; on take-off, she finds her current GPS position, and the autopilot builds a dynamic flight plan to the end-point; all the constituent parts of the code are already in place. It’s just the plumbing that needs careful creation.
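The distance / direction from the take-off position to the captured end-point can be computed with a simple equirectangular approximation, which is plenty accurate over the tens of metres these flights cover. A sketch with hypothetical names and coordinates:

```python
import math

EARTH_RADIUS = 6371000.0   # metres, mean Earth radius

def gps_vector(lat1, lon1, lat2, lon2):
    """Distance (m) and bearing (degrees, 0 = north, clockwise positive)
    from point 1 to point 2. Equirectangular approximation - fine over
    the few tens of metres these flights cover. Names are my own."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    north = dlat * EARTH_RADIUS
    east = dlon * EARTH_RADIUS
    distance = math.hypot(north, east)
    bearing = math.degrees(math.atan2(east, north))
    return distance, bearing

# end-point ~11m due north of the start point (hypothetical coordinates)
d, b = gps_vector(51.0000, -2.0000, 51.0001, -2.0000)
print(round(d, 1), round(b, 1))   # 11.1 0.0
```

Recomputing this vector against each new GPS fix is all the "dynamic flight plan" needs: the bearing feeds the direction target and the flight ends once the distance drops inside the landing threshold.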
P.S. That was the first live test flight, hence the slightly nervous look on my face, and my step backwards once she’d detected my intrusions!
P.P.S: Proof that the abort was triggered courtesy of the logs:
[CRITICAL] (MainThread) fly 3467, ASCENT
[CRITICAL] (MainThread) fly 3467, HOVER
[CRITICAL] (MainThread) fly 3467, ABORT (0.88m)
[CRITICAL] (MainThread) fly 3467, STOP
[CRITICAL] (MainThread) fly 4087, Flight time 16.627974
An uninterrupted flight would have run for 22s where descent would have started at 18s.
By human measurements, the distance was about 7m at about 45° (i.e. NE). GPS says 8.6m; video camera tracking says 5m, which is the length to travel defined by the flight plan.
It was never going to be perfect due to the difference between magnetic and true north, the resolution of GPS of around 1m, and how video distance tracking will always be a best guess, but it’s more than good enough for my cunning plan to work.
However, the plan’s taking a premature diversion; during this test, I was less careful and she ended up (in vertical descent mode) clipping 5 props against the drive’s stone wall. The next step (after replacing the props!) is now to deploy my Scanse Sweep code, which will trigger an orderly landing if any object is detected less than 1.5m away – Hermione’s radius is 50cm to the prop tips diagonally, so that’s 1m clearance.
One interesting point: the compass readings are mostly in a very dense cluster, with just a few (relatively) pointing in very different directions – that’s as Hermione passed the family car!
When I first added the autopilot process, it would update the main process at 100Hz with the current distance vector target; the main process couldn’t quite keep up with what it was being fed, but it was close. The downside was that the video processing rate dropped through the floor, building up a big backlog, meaning there was a very late reaction to lateral drift.
So I changed the autopilot process to only send velocity vector targets; that meant autopilot sent an update to the main process every few seconds (i.e. ascent, hover, descent and stop updates) rather than 100 times a second for the distance increments; as a result, video processing was running at full speed again.
But when I turned on diagnostics, the main process couldn’t keep up with the autopilot, despite the messages only being sent once every few seconds. Printing the messages to screen showed they were being sent correctly, but the main process’ select() didn’t pick them up: in a passive flight, she stayed at a fixed ascent velocity for ages – way beyond the point at which the autopilot prints indicated the hover, descent and stop messages had been sent. Without diagnostics, the sending and receipt of the messages were absolutely in sync. Throughout all this, the GPS and video processes’ data rates to the main process were low and worked perfectly.
The common factor between autopilot, GPS, video and diagnostics is that they use shared memory files to store / send their data to the main process; having more than one with high demand (autopilot at 100Hz distance targets, or diagnostics at 100Hz) seemed to cause one of the lower-frequency shared memory sources simply not to be spotted by the main process’ select(). I have no idea why this happens, and that troubles me.
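As a first step towards diagnosing this, the select() symptom can be probed in isolation with a toy reproduction: one chatty source and one quiet one, checking the quiet one still shows up as readable. Plain os.pipe() stands in here for the real shared memory files – an assumption purely for the demo:

```python
# Toy reproduction scaffold for the select() symptom: two sources, one
# chatty and one quiet, checking select() still reports the quiet one.
# os.pipe() stands in for the real shared-memory-backed files.

import os
import select

quiet_r, quiet_w = os.pipe()
chatty_r, chatty_w = os.pipe()

os.write(chatty_w, b"x" * 100)   # high-rate source floods its pipe
os.write(quiet_w, b"hover")      # low-rate source sends one message

seen_quiet = b""
for _ in range(10):
    # zero timeout: poll whatever is currently readable
    readable, _, _ = select.select([quiet_r, chatty_r], [], [], 0)
    if not readable:
        break
    for fd in readable:
        data = os.read(fd, 64)
        if fd == quiet_r:
            seen_quiet += data

print(seen_quiet)   # b'hover' - select() does report the quiet source here
```

In this trivial case the quiet message is never lost, which suggests the real problem lies somewhere in the shared memory file plumbing or read offsets rather than in select() itself – but that’s speculation until the flight code is instrumented the same way.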
This useful link shows the tools to query shared memory usage stats.
df -k /dev/shm shows only 1% shared memory is used during a flight
Filesystem     1K-blocks  Used  Available  Use%  Mounted on
tmpfs             441384     4     441380    1%  /dev/shm
ipcs -pm shows the processes owning the shared memory:
------ Shared Memory Creator/Last-op PIDs --------
shmid      owner      cpid       lpid
0          root       625        625
32769      root       625        625
65538      root       625        625
98307      root       625        625
131076     root       625        625
ps -eaf | grep python shows the processes in use by Hermione. Note that none of these process IDs are in the list of shared memory owners above:
root       609   599  0 15:43 pts/0    00:00:00 sudo python ./qc.py
root       613   609 12 15:43 pts/0    00:02:21 python ./qc.py
root       624   613  0 15:43 pts/0    00:00:03 python /home/pi/QCAPPlus.pyc GPS
root       717   613  1 16:00 pts/0    00:00:01 python /home/pi/QCAPPlus.pyc MOTION 800 800 10
root       730   613 14 16:01 pts/0    00:00:00 python /home/pi/QCAPPlus.pyc AUTOPILOT fp.csv 100
Oddly, it’s the gps daemon with the shared memory creator process ID:
gpsd 625 1 4 15:43 ? 00:01:00 /usr/sbin/gpsd -N /dev/ttyGPS
I’m not quite sure yet whether there’s anything wrong here.
I could just go ahead with object avoidance; the main process would only have diagnostics as its main high-speed shared memory usage. Autopilot can retain the revised behaviour of only sending low-frequency velocity vector target changes. Autopilot would get high-frequency input from the Sweep, but convert that to changes of low-frequency velocity targets sent to the main process. This way, main has only diagnostics as a fast input, and autopilot only has Sweep. This is a speculative solution, but I don’t like the idea of moving forward with an undiagnosed weird problem.