While waiting for half-decent weather, I’ve been tinkering and tweaking to make sure everything is as good as possible before moving on to the GPS tracking code. This is the result: a 2m square.
With the Scanse Sweep installed underneath (yes, she has blind spots from her legs and the WiFi antenna), any object detected between 50cm (the distance to the tip of her props) and 1m (her personal-space boundary) now triggers a controlled landing. The same would happen if the obstacle wasn’t me approaching her, but instead her approaching a brick wall: a vertical controlled descent to ground.
There’s a lot more that can be built on this: the Sweep is rotating at 1Hz (it can do up to 10Hz), and it’s taking about 115 samples per loop, each reporting both the rotation position (azimuth) and the distance to the nearest object at that azimuth. Currently the code only collects the shortest distance per loop; if that’s under 1m, the standard file-based flight plan is replaced with a dynamically created descent flight plan, based upon the height Hermione should have reached at that point in the file-based flight plan.
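In code terms, the per-rotation check is tiny. Here’s a minimal sketch of the idea; the function names, constants and the (azimuth, distance) sample format are my assumptions for illustration, not the real code:

```python
# Hypothetical sketch of the per-rotation obstacle check; names are mine.

PROP_TIP_DISTANCE = 0.5   # metres - readings closer than this are the airframe
PERSONAL_SPACE = 1.0      # metres - the abort threshold

def closest_obstacle(samples):
    """samples: list of (azimuth_degrees, distance_metres) from one rotation."""
    # Ignore anything inside the props' radius - that's her own hardware.
    valid = [d for (a, d) in samples if d > PROP_TIP_DISTANCE]
    return min(valid) if valid else None

def check_rotation(samples):
    """True if this rotation's nearest object should trigger the descent plan."""
    nearest = closest_obstacle(samples)
    return nearest is not None and nearest < PERSONAL_SPACE
```

With ~115 samples per 1Hz loop, this runs comfortably once per rotation.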
Here’s the layout of communication between the 5 processes involved:
+-------+     +-----------+
| Sweep |-->--| Autopilot |-->--+
+-------+     +-----------+     |
                                |
  +-----+     +--------+        |
  | GPS |-->--| Motion |--<-----+
  +-----+     +--------+        |
                                |
  +-------+                     |
  | Video |-->------------------+
  +-------+
The latest code updates are on GitHub.
Next step is to move GPS to also feed into the autopilot. The move itself is easy – just a couple of minutes to change which process starts the GPS process; the difficult bit is how the autopilot should handle that extra information. Currently the plan is that before a flight, Hermione is taken to the desired end-point of the flight, where she captures the GPS coordinates. Then she’s moved somewhere else, pointing in any direction; on take-off, she finds her current GPS position, and the autopilot builds a dynamic flight plan to the end-point. All the constituent parts of the code are already in place; it’s just the plumbing that needs careful creation.
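Building that dynamic flight plan boils down to converting two GPS fixes into a distance and an initial bearing. Here’s a standard haversine sketch – my illustration of the maths, not the project’s code:

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Great-circle distance (metres) and initial bearing (degrees, clockwise
    from north) from fix 1 to fix 2 - the raw inputs for a dynamic flight plan."""
    R = 6371000.0  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)

    # Haversine distance
    a = (math.sin(dphi / 2) ** 2 +
         math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    dist = 2 * R * math.asin(math.sqrt(a))

    # Initial bearing
    y = math.sin(dlam) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2) -
         math.sin(phi1) * math.cos(phi2) * math.cos(dlam))
    bearing = math.degrees(math.atan2(y, x)) % 360
    return dist, bearing
```

The bearing then has to be combined with the compass heading so she knows how to translate “head that way” into body-frame velocities.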
P.S. That was the first live test flight, hence the slightly nervous look on my face, and my step backwards once she’d detected my intrusions!
P.P.S: Proof that the abort was triggered courtesy of the logs:
[CRITICAL] (MainThread) fly 3467, ASCENT
[CRITICAL] (MainThread) fly 3467, HOVER
[CRITICAL] (MainThread) fly 3467, ABORT (0.88m)
[CRITICAL] (MainThread) fly 3467, STOP
[CRITICAL] (MainThread) fly 4087, Flight time 16.627974
An uninterrupted flight would have run for 22s, with the descent starting at 18s.
Given her previous hover flight was so good, I couldn’t resist a 10m flight directly away from me. She overshot the 10m target significantly, probably because she couldn’t track the ground motion in the dust-storm section. I killed the flight before she hit the wall. Nevertheless, this confirms she’s good to go with GPS: firstly just tracking where she’s going, and next, actually defining where she should be going, based on a preset waypoint collected prior to the flight.
As a result, I’ve updated the code on GitHub.
The unexpected hits you between the eyes* (no, not literally!).
She’s sans chapeau as this was just a quick test. However, the flight was quite surprising, so I thought I’d share it.

I’ve been tinkering with the scheduling of video processing, GPS, autopilot and the main sensor processing; I’d spotted these were getting out of sync. The fix was to read the autopilot and GPS OS FIFO shared-memory data streams often enough for them to stay in sync, yet not so often that the read() blocked. The trivial drift over this 23s hover proved this is working – double-integrated acceleration can only hold back drift for a second or two.

What surprised me, though, is that this level of stability took place over gravel, and on checking the config it became even more of a surprise: the video was running 320 x 320 pixels at 10Hz at 50% contrast, and it worked brilliantly. I’d assumed higher resolution was needed, and that she’d been flying at 640 x 640 (i.e. 4 times the resolution); at that level she was both drifting and struggling to process the video frames fast enough.

I’m finally at a confident place where I can move GPS to feed the autopilot, such that the autopilot can direct the core motion processing where to go.
*courtesy of Cilla Black
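The scheduling fix above is essentially non-blocking polling of the FIFOs. Here’s a minimal sketch of the pattern, assuming the streams are plain OS FIFOs opened non-blocking; `drain_ready` and the handler mapping are my invention, not the flight code:

```python
# Poll each OS FIFO with select() and only read() streams that already have
# data, so neither the GPS nor the autopilot stream can ever stall the
# motion-processing loop.
import os
import select

def drain_ready(fifo_fds, handlers, max_bytes=4096):
    """fifo_fds: file descriptors opened O_RDONLY | O_NONBLOCK.
    handlers: dict mapping fd -> callback for the bytes read."""
    readable, _, _ = select.select(fifo_fds, [], [], 0)  # zero timeout: never block
    for fd in readable:
        data = os.read(fd, max_bytes)
        if data:
            handlers[fd](data)
```

Called once per motion-processing loop, this drains whatever has arrived since the last pass and returns immediately if nothing has.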
Flight plan processing has now been moved to a new autopilot process, ultimately with the aim of feeding Sweep object detection into the autopilot, which in turn swaps from the current flight plan to an ’emergency landing’ flight plan, updating the main process to hover and then descend to ground. This flight proves the autopilot feed works, though it does raise other problems, primarily the drift over the course of a 3-second takeoff, 5-second hover and 4-second landing.
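A hypothetical sketch of the plan swap itself: given the height the original flight plan says she should have reached by now, the autopilot builds a replacement descent plan. `build_descent_plan`, the descent rate and the phase-tuple format are invented for illustration:

```python
# My assumed shape for the 'emergency landing' plan generation; the real
# flight-plan format in the project will differ.

DESCENT_RATE = 0.3  # m/s, a gentle controlled descent

def build_descent_plan(current_height):
    """Return a list of (vertical_velocity_mps, duration_s, name) phases
    replacing whatever remains of the file-based flight plan."""
    descent_time = current_height / DESCENT_RATE
    return [
        (0.0, 1.0, "HOVER"),                      # settle before descending
        (-DESCENT_RATE, descent_time, "DESCENT"),
        (0.0, 0.5, "STOP"),
    ]
```

The key point is that the plan is computed from the planned height at abort time, not from a fixed table, so an abort at 0.5m and at 1.5m both land gently.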
I suspect the drift is caused by video processing: it was processing the 10Hz output at 7Hz, leading to lag. This reduced processing speed may be caused by overheating of the RPi CPU, causing it to throttle back.
I’ve set the appropriate option in /boot/config.txt to make sure the CPU always runs at full speed, rather than only when it ‘thinks’ it’s busy. This, combined with the fact the temperature today is 24° in the shade, may well be enough for the CPU to throttle back due to overheating instead. Certainly just a few days ago, when the outdoor temperature was in the mid-teens and a cool breeze was blowing, there was no drift. The CPU has a heatsink, but it is in a case. I may have to either remove the case (exposing the CPU to the elements) or add a cooling fan to the case. If possible, the latter would be better, as the case is there to provide a constant temperature to the IMU, minimising its temperature drift. I’ve found this one which might just fit in.
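For reference, my assumption is that the setting involved is `force_turbo`, which is the documented config.txt option that locks the ARM cores at their maximum clock instead of scaling on demand (the post doesn’t name the setting, so treat this as a guess):

```
# /boot/config.txt - lock the ARM cores at full speed at all times
force_turbo=1
```

Note that with `force_turbo` set, the firmware will still throttle if the SoC actually overheats, which is exactly the suspicion here.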
Note the kids’ pétanque kit is deliberately on the ground to provide weighty contrasting areas that the grass no longer supplies to the video. Off to mow the lawn now.
Better autonomous flying from Hermione than my piloting of the DJI Mavic videoing her!
Code updated on GitHub.
First, the result: an autonomous 10m linear flight forwards:
You can see her stability degrade as she leaves the contrasting shadow area cast by the tree branches in the sunshine. At the point chaos broke loose, she believed she had reached her 10m target and was thus descending; she’s not far wrong – the start and end points are the two round stones, placed a measured 10m apart to within a few centimetres.
So here’s what’s changed in the last week:
- I’ve added a heatsink to my B3 and changed the base layer of my Pimoroni Tangerine case, as the newer one has extra vents underneath to allow better airflow. The aim here is to keep a stable temperature, partly to stop the CPU overheating and slowing down, but mostly to avoid IMU drift over temperature. Note that normally the Pi3 has a shelf above, carrying the large LiPo and acting as a sun-shield – you can just about see the poles that support it at the corners of the Pi3 Tangerine case.
- I’ve moved the down-facing Raspberry Pi V2.1 camera and Garmin LiDAR-Lite sensor to allow space for the Scanse Sweep sensor which should arrive in the next week or two courtesy of Kickstarter funding.
The black disk in the middle will seat the Scanse Sweep perfectly; it lifts the Sweep above the Garmin LiDAR-Lite V3 so their lasers don’t interfere with each other; finally, the camera has been moved far away from both to make sure neither features in its video.
- I’ve changed the fusion code so vertical and lateral values are fused independently; previously, if the camera was struggling to spot motion in dim light, the LiDAR height wasn’t fused either, and on some flights Hermione would climb during hover.
- I’ve sorted out the compass calibration code, so Hermione now knows which way she’s pointing. The code just logs the output currently, but soon it will be both fused with the gyrometer yaw rate and interacting with the below….
- I’ve added a new process tracking GPS position and feeding the results over a shared-memory OS FIFO, in the same way the video macro-blocks are passed across now. The reason both run in their own processes is that each blocks reading its sensor – one second for GPS and 0.1 seconds for video – and that degree of blocking must be completely isolated from the core motion processing. As with the compass, the GPS data is just logged currently, but soon the GPS will be used to set the end-point of a flight; then, when launched from somewhere away from that end-point, the combination of compass and GPS will provide the sensor inputs to ensure the flight heads in the right direction and recognises when it’s reached its goal.
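For the independent fusion change above, the gist is that each sensor only nudges its own axes, and only when its reading is trustworthy. A toy sketch – function names, gains and the estimate layout are mine, not the project’s:

```python
# Vertical (LiDAR) and lateral (video) fusion are gated separately, so poor
# video contrast no longer blocks height fusion and causes a climb in hover.

def fuse(estimate, lidar_height, video_shift, video_ok, lidar_ok,
         k_vert=0.2, k_lat=0.2):
    """estimate: dict of 'height', 'x', 'y' from double-integrated accel.
    Each sensor corrects only its own axes, and only when usable."""
    if lidar_ok:
        estimate['height'] += k_vert * (lidar_height - estimate['height'])
    if video_ok:
        dx, dy = video_shift
        estimate['x'] += k_lat * dx
        estimate['y'] += k_lat * dy
    return estimate
```
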
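As for fusing the compass with the gyrometer yaw rate, a complementary filter is the obvious candidate – this is my assumption of the approach, not the blog’s actual code:

```python
import math

def fuse_yaw(yaw, gyro_rate, compass_yaw, dt, alpha=0.98):
    """Complementary-filter sketch: integrate the gyro yaw rate for
    short-term accuracy, and pull slowly towards the compass heading to
    cancel gyro drift. All angles in radians."""
    predicted = yaw + gyro_rate * dt
    # Take the shortest angular path to the compass heading
    error = math.atan2(math.sin(compass_yaw - predicted),
                       math.cos(compass_yaw - predicted))
    return predicted + (1 - alpha) * error
```

The high alpha trusts the gyro within a loop, while the compass wins over many loops – which is the split you want given compass noise and gyro drift.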
As a result of all the above, I’ve updated GitHub.
The sun wasn’t shining brightly, so there was no high contrast on the lawn; the lawn had been mowed, removing the contrasting grass clumps too. Yet she still made a great attempt at a 1m square. I think this is about as good as she can get – it’s time for me to move on to adding compass, GPS and object avoidance. The code has been updated on GitHub.
I chose to name my piDrones Phoebe, Chloe and Zoe as they can all be spelt with an umlaut – don’t ask me why, I have no idea. I ran out of umlaut names (despite research) so I opted for Hermione as the latest, greatest model as it sounds similar although she can’t ever bear an umlaut as she lacks the critical ‘oe’.
Anyway, Phoebe, Zoe and the always short-lived* Chloe have all been merged into the best of each; the result is ‘Ö’, pronounced like the French for ‘yes’. She has Phoebe’s ESCs, motors and props, Chloe’s amazing frame, and Zoe’s Pi0W and PCB.
Ö’s build is virtually indestructible as she weighs just 1kg fully loaded. Because she’s so light, crash torques are tiny compared to the strength of the frame; the only perceivable damage is broken props, and these are cheap from eBay – I already have a vast stock of them. In comparison, Hermione weighs 4kg; this, and the fact she’s so large, means crash torque is huge in comparison, damage always occurs for anything but a perfect landing, and replacement frame parts and props are expensive. Ultimately I still want Hermione as queen piDrone because of her X8 format and her B3’s 4 cores allowing further sensors**, but while I’m still diagnosing the current problems, I think little miss indestructible is better suited financially to the task in hand.
Sadly, there’s one problem; Ö’s Pi0W isn’t fast enough to cope with video resolution higher than about 400² pixels, ruling out lawn / gravel etc. This is what she can do successfully:
On the plus side, I think that’s just enough to sort out my understanding of Hermione’s yaw flaws.
*Chloe got retired (again) as the 1.5A regulator on the PCB was insufficient to drive the A+, IMU, camera and LiDAR-Lite. The same I2C errors I’d seen before returned. Swapping Chloe’s A+ for Ö’s Pi0W resolved this.
** i.e. GPS, compass and Scanse Sweep
She would have drawn a better square had I got the flight plan right; in the event, the plan said to…
- climb to 90cm over 3s
- hover for a second
- move forward by 1m over 4s
- hover for a second
- move left by 1m over 4s
- hover for a second
- move back by 2m over 8s
- hover for a second
- move right by 2m over 8s
- hover for a second
- land over 4s…
making a total of 36 seconds in all.
These last two legs meant she should land about a metre back and a metre to the right of where she took off. How well did she follow the flawed flight plan?
To me, this is working amazingly well, especially as the camera lateral tracking doesn’t have any significant markers, just grass blades. I was lucky there was bright sunshine.
What I’d really like to have shown was her actually ‘turning’ each corner, always facing the direction she’s flying; this is completely unnecessary but would look good – the only point of doing it is if there’s a camera taking pics and streaming video live back to the RC as per my Mavic. But currently my yaw code is still lacking something and I don’t know what yet.