Sorry it’s been quiet; the weather’s been awful, so no videos to show. Instead, I’ve been tinkering to ensure the code is as good as it can be prior to moving on to object avoidance / maze tracking.
Zoe is back to life once more to help test the “face the direction you’re flying” tweaks, which don’t quite work yet; she’s back because my first few attempts with ‘H’ kept breaking props. First job for ‘Z’ was to have her run the same code as Hermione, but with the autopilot moved back inline to cut the number of processes as far as possible for her single-CPU Pi0W in comparison with Hermione’s 4-CPU 3B.
- I’ve started running the code with ‘sudo python -O ./qc.py’ to enable optimisation. This disables assertion checking (assert statements are stripped and __debug__ becomes False) and hopefully other stuff for better performance.
- I’ve tweaked the Butterworth parameters to track gravity changes faster, as Zoe’s IMU is exposed to the cold winds and her accelerometer values rise rapidly.
- I’ve refined the Garmin LiDAR-Lite V3 code to cope with the occasional ‘no reading’ caused by no laser reflection being detected; this does happen occasionally (and correctly) if she’s tilted and the surface bouncing the laser points the wrong way – see the sketch after this list.
- I’d also hoped to add a “data ready interrupt” to the LiDAR to reduce the number of I2C requests made; however, the interrupts still don’t happen despite trying all 12 config options. I think the problem is Garmin’s, so I’m awaiting a response from them on whether this flaw is fixed in a new model to be launched in the next few weeks. In the meantime, I only make the GLL I2C call when there are video results which need the GLL vertical height to convert the video pixel movements into horizontal movement in metres.
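Here’s a minimal sketch of that ‘no reading’ handling, assuming the register map from Garmin’s operating manual (I2C address 0x62, acquisition via register 0x00, distance in registers 0x0f/0x10); the fall-back-to-last-reading policy is just illustrative:

```python
import smbus

GLL_ADDRESS = 0x62       # LiDAR-Lite V3 I2C address
GLL_ACQ_COMMAND = 0x00   # write 0x04 here to trigger an acquisition
GLL_STATUS = 0x01        # bit 0 = busy
GLL_DISTANCE_HI = 0x0f   # distance in cm, high byte (low byte in 0x10)

class GLL:
    def __init__(self, bus_number=1):
        self.bus = smbus.SMBus(bus_number)
        self.last_distance = 0.0  # metres

    def distance(self):
        self.bus.write_byte_data(GLL_ADDRESS, GLL_ACQ_COMMAND, 0x04)
        while self.bus.read_byte_data(GLL_ADDRESS, GLL_STATUS) & 0x01:
            pass  # poll until the acquisition completes
        hi = self.bus.read_byte_data(GLL_ADDRESS, GLL_DISTANCE_HI)
        lo = self.bus.read_byte_data(GLL_ADDRESS, 0x10)
        distance = ((hi << 8) | lo) / 100.0  # cm -> m
        if distance == 0.0:
            # no laser reflection detected, e.g. tilted over a glancing
            # surface; reuse the previous good reading rather than fail
            return self.last_distance
        self.last_distance = distance
        return distance
```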
Having added and tested all the above sequentially, the net result was failure: less bad a failure than previously, but failure nonetheless; the video tracking lagged in order to avoid the IMU FIFO overflowing. So in the end, I changed Zoe’s video resolution to 240 pixels² @ 10fps (‘H’ is at 320 pixels² @ 10fps), and she can now hover on the grass, which means I can get on with the “face where you’re going” code.
I do think all the other changes listed are valid and useful, and as a result, I’ve updated the code on GitHub.
In passing, I had also been investigating whether the magnetometer could be used to back up the pitch, roll and yaw angles long term, but that’s an abject failure; with the props on idle prior to takeoff, it works fine, giving the orientation to feed to the GPS tracking process, but once airborne, the magnetometer values shift by ±40° and vary depending on which way she’s going, even while she’s facing in the same direction.
It was around Christmas 2012 that I started investigating an RPi drone, and the first post was at the end of January ’13.
Five years later, phase one is all but done; the remaining items below are minor, mostly optional extras, barring the first:
- Track down the GPS tracking instability – best guess is reduced LiPo power as the flight progresses in near-zero temperature conditions.
- Get Zoe working again – she’s been unused for a while – and perhaps add GPS support, although that may not be possible because she’s just a single-CPU Pi0W.
- Fuse the magnetometer / gyro 3D readings for long-term angle stability, particularly yaw, which has no long-term backup sensor beyond the gyro – see the sketch after this list.
- Add a level of yaw control such that ‘H’ always points the way she’s flying – currently she always points in the same direction she took off at. I’ve tried this several times, and it’s always had a problem I couldn’t solve. Third time lucky.
- Upgrade the operating system to Raspbian Stretch, with the corresponding rework needed for the I2C fix and the WAP / udhcpd / dnsmasq networking, which is why the OS is currently stuck on Jessie from the end of February 2017.
- Upgrade the camera + LiDAR sampling from 10Hz to 20Hz, the camera resolution from 320² to 480² pixels, and the IMU sampling from 500Hz to 1kHz. However, every previous attempt to raise one leaves the scheduling unable to process the others – I suspect I’ll need to wait for a Raspberry Pi B4 or B5 for the increased performance.
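On the yaw fusion point above: a first-order complementary filter is the obvious starting point – a minimal sketch, where the gyro is trusted short term and dragged slowly towards the compass long term; the crossover time TAU is purely illustrative:

```python
import math

TAU = 2.0  # seconds; blend crossover - illustrative, needs tuning

def wrap(angle):
    # wrap an angle into [-pi, pi)
    return (angle + math.pi) % (2 * math.pi) - math.pi

def fuse_yaw(fused_yaw, gyro_z_rate, compass_yaw, dt):
    # integrate the gyro for short-term accuracy...
    predicted = fused_yaw + gyro_z_rate * dt
    # ...then nudge towards the compass to kill long-term drift
    alpha = TAU / (TAU + dt)
    return wrap(predicted + (1 - alpha) * wrap(compass_yaw - predicted))
```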
Looking into the future…
- Implement (ironically named) SLAM object mapping and avoidance with Sweep, ultimately aimed at maze navigation – just two problems here: there are no mazes wide enough for ‘H’ to clear, and the AI required to remember explored areas and react so as to explore only unexplored ones in the search for the centre.
- Fuse the GPS latitude, longitude and altitude with the down-facing LiDAR + video and the double-integrated acceleration (ΣΣ a δt δt) for vertical + horizontal distance – this requires further connections between the various processes, such that GPS talks to the motion process, which does the fusion. It enables higher-altitude flights where the LiDAR / video can’t ‘see’ the ground – there are subtleties here in swapping between GPS and video / LiDAR depending on which is working best at a given height above the ground, based on some fuzzy logic.
- Use the down-facing camera for height and yaw as well as lateral motion – this is more a proof of concept, constrained by the need for much higher resolution videos, which currently aren’t possible with the RPi B3.
- Find a cold-fusion nuclear battery bank for flight from the Cotswolds, UK to Paris, France landing in Madrid, Spain or Barcelona, Catalonia!
These future aspirations are dreams unlikely to become reality due to power supply, CPU performance or WiFi reach. Although a solution to the WiFi range may be feasible now, the others need future technology, at least one of which may not be available within my lifetime :-).
Wishing you all a happy New Year and a great 2018!
I think I’ve finished writing the code to support GPS tracking: a target GPS location is stored prior to the flight; the flight takes off pointing in any direction, and once it has learned its own GPS takeoff position, it heads towards the target, updating the direction to the target each time it gets a new GPS fix for where it is, and finally landing when its current position matches the target GPS position.
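The core sum each new fix triggers is just distance and bearing to the target – a sketch with a hypothetical helper, using an equirectangular approximation that’s plenty good enough over the tens of metres these flights cover:

```python
import math

EARTH_RADIUS = 6371000.0  # metres

def gps_vector(fix, target):
    # fix / target are (latitude, longitude) in degrees; returns distance
    # in metres and bearing in radians clockwise from true north
    lat1, lon1 = math.radians(fix[0]), math.radians(fix[1])
    lat2, lon2 = math.radians(target[0]), math.radians(target[1])
    de = (lon2 - lon1) * math.cos((lat1 + lat2) / 2) * EARTH_RADIUS  # east
    dn = (lat2 - lat1) * EARTH_RADIUS                                # north
    return math.hypot(de, dn), math.atan2(de, dn)
```

Landing then just triggers once the distance drops below the GPS resolution.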
There’s a lot of testing to be done before I can try it live. The new autopilot code is pretty much a rewrite from scratch. It now has multiple partial flight plans:
- file defined direction + speed + time
- GPS initial location finding
- GPS tracking to destination
- Scanse abort.
It switches between these depending on either external Scanse or GPS inputs, or the current flight plan section completing, e.g. the GPS flight plan reaching its destination, or the file-based flight plan finishing its various time-based lateral movements and hovers (both of which then swap to the landing flight plan). Luckily, most of this testing can be carried out passively. A sketch of the switching is below.
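Purely to illustrate the shape of it, here’s the switching reduced to an event-driven function; the plan and event names are made up, not the real code:

```python
def next_plan(current, event):
    # Hypothetical event-driven switching between partial flight plans.
    if event == "scanse_proximity":
        return "landing"  # Scanse abort trumps everything else
    if current == "gps_locating" and event == "gps_fix_acquired":
        return "gps_tracking"  # takeoff position learned, head for target
    if event == "plan_complete" and current in ("file", "gps_tracking"):
        return "landing"  # both hand over to the landing plan at the end
    return current
```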
The first test preempts all of these; as per yesterday’s post, I should be able to use my magnetometer, combined with its magnetic declination, to find Hermione’s initial orientation compared with true (i.e. GPS) north. At the same time, I could use the magnetometer to provide long-term yaw values to be fused with the integrated yaw rate from the gyrometer. Here’s what I got from two sequential passive tests:
Compass & integrated Gyro yaw
I’m not hugely impressed with the results of either.
- The compass and integrated gyro yaw don’t match as tightly as I was expecting – in its current state, I won’t be using the compass direction as a long-term fuse with the integrated gyro yaw unless I can improve this.
- The blobs in the lower pair are compass orientation values as she’s sitting on the ground – half way through, I rotate her clockwise roughly 90°. The rotation angle is pretty good, as are the directions (NW -> NE -> SE), but I don’t like how spread out the blobs are while she sat still on the ground at each location – I think I’ll have to use an average value for the starting orientation passed to the autopilot for GPS tracking processing; a sketch follows.
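For reference, the orientation sum itself is simple – a sketch assuming the magnetometer axes are aligned with the airframe and tilt is negligible on the ground; the declination value is illustrative and site-specific, and the averaging is the fix for the blob spread:

```python
import math

MAG_DECLINATION = math.radians(-1.3)  # illustrative; look up your own site's value

def compass_yaw(mx, my):
    # yaw clockwise from true north, from the horizontal magnetometer axes
    return math.atan2(-my, mx) + MAG_DECLINATION

def average_yaw(yaw_samples):
    # average angles via their vector sum, so 359 deg and 1 deg average to
    # 0 deg rather than 180 deg - this is the starting orientation the
    # autopilot gets for GPS tracking
    sx = sum(math.cos(a) for a in yaw_samples)
    sy = sum(math.sin(a) for a in yaw_samples)
    return math.atan2(sy, sx)
```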
Distance + Direction = GPS
By human measurement, the distance was about 7m at about 45° (i.e. NE). GPS says 8.6m; video camera tracking says 5m, which is the flight-plan-defined length to travel.
It was never going to be perfect due to the difference between magnetic and true north, the roughly 1m resolution of GPS, and the fact that video distance tracking will always be a best guess, but it’s more than good enough for my cunning plan to work.
However, the plan’s taking a premature diversion; during this test, I was less careful, and she ended up (in vertical descent mode) clipping 5 props against the drive’s stone wall. The next step (after replacing the props!) is now to deploy my Scanse Sweep code, which will trigger an orderly landing if any object is detected less than 1.5m away – Hermione’s prop-tip-to-prop-tip diagonal radius is 50cm, so that’s 1m clearance.
One interesting point: the compass readings are mostly in a very dense cluster, with just a few (relatively) pointing in very different directions – that’s as Hermione passed the family car!
So there I was this morning on the bus heading to Cheltenham to co-host the Cotswold Raspberry Jam, gazing at the beautiful Cotswold countryside, and my mind began to wander – well actually it was unleashed from the constraints of dealing with details it couldn’t fix while on a bus. And out came a better idea about where to go for the next step to autonomy.
What’s been bugging me is the difference between magnetic and GPS North. The ‘bus’ solution is to ignore magnetic north completely. The compass still has a role to play maintaining zero yaw throughout a long flight, but it plays no part in the direction of flight – in fact, this is mandatory for the rest of the plan to work.
Instead, the autopilot translates between the GPS north / south / east / west coordinate system and Hermione’s forward / backward / right / left frame. The autopilot only speaks to Hermione in her own coordinates when telling her where to go. The autopilot learns Hermione’s coordinate system at the start of each flight by asking her to fly forwards a metre or so, and comparing that to the GPS vector change it sees. This rough translation is refined continuously throughout the flight. Hermione can start a flight pointing in any direction compared to the target GPS point, and the autopilot will get her to fly towards the GPS target regardless of the way she’s pointing.
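A sketch of that learning step, with hypothetical names – compare the body-frame movement against the matching GPS change to estimate the yaw offset, then keep nudging the estimate as fixes accumulate:

```python
import math

class FrameTranslator:
    # Hypothetical: learns the rotation between Hermione's body frame and
    # the GPS east/north frame, and refines it with each new fix.
    def __init__(self):
        self.yaw_offset = None  # body-forward heading, clockwise from north

    def learn(self, body_dx, body_dy, gps_de, gps_dn):
        # body_dx/dy: forward/right movement per the video tracking;
        # gps_de/dn: the matching east/north change between GPS fixes
        measured = math.atan2(gps_de, gps_dn) - math.atan2(body_dy, body_dx)
        if self.yaw_offset is None:
            self.yaw_offset = measured  # the initial metre-forward estimate
        else:
            # continuous refinement: nudge towards each new estimate
            error = (measured - self.yaw_offset + math.pi) % (2 * math.pi) - math.pi
            self.yaw_offset += 0.1 * error

    def to_body(self, de, dn):
        # convert a GPS-frame (east, north) request into (forward, right)
        c, s = math.cos(self.yaw_offset), math.sin(self.yaw_offset)
        return c * dn + s * de, c * de - s * dn
```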
Sadly now, I’m back from today’s Jam, and the details confront me once more, but the bus ride did yield a great view of where to go next.
P.S. As always, the jam was great; over 100 parents and kids, lots of cool things for them to see and do, including a visit this time by 2 members of local BBC micro:bit clubs. Next one is 30th September.
First, the result: autonomous 10m linear flight forwards:
You can see her stability degrade as she leaves the contrasting shadow area cast by the tree branches in the sunshine. At the point chaos broke loose, she believed she had reached her 10m target and thus was descending; she’s not far wrong – the start and end points are the two round stones placed a measured 10m apart to within a few centimetres.
So here’s what’s changed in the last week:
As a result of all the above, I’ve updated GitHub.
Here’s Hermione with her new PCB. It’s passed the passive tests; the next step is to make sure each of the 8 motor ESCs is connected the right way round to its respective PWM output on the PCB, and finally, I’ll do a quick flight with only the MPU-9250 as the sensor to tune the X8 PID gains. Then she’s getting shelved.
Zoe’s getting a new PCB so I can run the camera and Garmin LiDAR-Lite V3 on her too. Hermione is huge compared to Zoe, and with the winter weather setting in, I’m going to need a system that’s small enough to test indoors.
Hermione will still be built – I need her extra size to incorporate the Scanse Sweep and GPS, but I suspect only once an A3 arrives on the market – Hermione’s processing with a new 512MB A+ overclocked to 1GHz is nearly maxed out with the camera and diagnostics. She’s probably just about got CPU space for the compass and Garmin LiDAR-Lite over I2C, but I think that’s it until the A3 comes to market. My hope for the A3 is that it uses the same 4-core CPU as the B2, with built-in Bluetooth and WiFi as per the B3, but no USB / ethernet hub to save power. Fingers crossed.
I know I said I’d use this post to investigate that value of 1.2, but I’m just going to sit on that for now in preference for yaw. There are a few aspects to this:
- During flight, the quadcopter currently stays facing whichever way it was facing on the ground; there’s a yaw angle PID and its target is 0. It should be trivial to change this yaw angle target so that the quadcopter faces the direction it’s travelling; the target for the yaw angle is derived from the velocity – either the input or the target of the velocity PID, i.e. the way the quadcopter is or should be flying. It’s a little trickier than it sounds for two reasons (see the sketch after this list):
- The arctangent of the velocity vector gives two possible angles, and only consideration of the signs of both components defines the absolute direction, e.g. (1,1) and (-1,-1) need to be 45° and -135° respectively.
- Care needs to be taken that the transition from one angle to the next goes the shortest way round, and that when the flight transitions from movement to hover, the quad doesn’t rotate back to the takeoff orientation just because the velocity is (0,0) – the angle needs to be derived from what she was doing before she stopped.
It’s also worth noting this is only for looks and there are no flight benefits from doing this.
- The camera X and Y values are operating partially in the quad frame and partially in the earth frame. I need to rotate the X and Y values fully into the earth frame by accounting for yaw.
- Yaw tracking by the integrated gyro Z axis seems to be very stable, but I do need to include the compass readings for even longer-term stability. I think I can get away with just using the compass X and Y values to determine the yaw angle, but I’ll need to test this. I have 2 further concerns:
- the first is that the compass needs calibration each time it boots up, just as is necessary with your phone. You can see from my previous post the offsets of the X and Y values as I span Zoe on my vinyl record player – see how the circle is not centred on 0, 0.
- I’m not sure how much iron in the flight zone will affect the calibrations based on the distance of the compass from the iron; iron in my test area may be the structural beams inside the walls of a building indoors, or garden railings outside, for example.
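Here’s the yaw-target sketch promised above: atan2 resolves the quadrant ambiguity that plain tan leaves, the modulo trick takes the shortest way round, and hovering holds the last heading. The hold threshold is illustrative:

```python
import math

HOVER_SPEED = 0.3  # m/s; below this, treat her as hovering - illustrative

def yaw_target(vx, vy, previous_target):
    # Face the direction of travel: atan2(vy, vx) uses the signs of both
    # components, so (1, 1) -> 45 deg and (-1, -1) -> -135 deg.
    if math.hypot(vx, vy) < HOVER_SPEED:
        # hovering: keep the heading from before the stop rather than
        # unwinding back to the takeoff orientation
        return previous_target
    raw = math.atan2(vy, vx)
    # step from the previous target the shortest way round
    error = (raw - previous_target + math.pi) % (2 * math.pi) - math.pi
    return previous_target + error
```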
First step is to include yaw into the camera processing as a matter of urgency. The magnetometer stuff can once more wait until it becomes absolutely necessary.
FYI the rotation matrix between the earth and quadcopter frames is as follows:

|xe|   |  cos ψ, sin ψ, 0 | |xq|
|ye| = | -sin ψ, cos ψ, 0 | |yq|
|ze|   |      0,     0, 1 | |zq|
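Applied to the camera increments, that matrix is one small function – a sketch mirroring the convention above, with psi being the fused yaw angle:

```python
import math

def quad_to_earth(xq, yq, psi):
    # rotate camera-derived (x, y) motion from the quad frame into the
    # earth frame, per the yaw rotation matrix above
    xe = math.cos(psi) * xq + math.sin(psi) * yq
    ye = -math.sin(psi) * xq + math.cos(psi) * yq
    return xe, ye
```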
Magnetometer X & Y
This is roughly a full-circle rotation of Zoe’s magnetometer sitting on my record player. The next step seems to be magnetometer calibration – it’s a common topic for me to investigate next; a sketch of the simplest scheme is below.
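The simplest scheme would be hard-iron correction from a full rotation like the record-player one – a sketch, and an assumption about the approach rather than tested code:

```python
def hard_iron_offsets(mx_samples, my_samples):
    # the circle traced during a full rotation should be centred on (0, 0);
    # the hard-iron offset is the midpoint of each axis's extremes
    mx_offset = (max(mx_samples) + min(mx_samples)) / 2.0
    my_offset = (max(my_samples) + min(my_samples)) / 2.0
    return mx_offset, my_offset

def calibrated(mx, my, offsets):
    # subtract the offsets from every subsequent reading
    return mx - offsets[0], my - offsets[1]
```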
Well, tangerine actually!
With the IMU FIFO code taking the pressure off the critical timings for reading the IMU, a huge range of options has opened up for what to do next with the spare time the FIFO has created:
- A simple keyboard remote control is tempting where the QC code polls stdin periodically during a hover phase and amends the flight plan dynamically; ultimately, I’d like to do this via a touch screen app on my Raspberry Pi providing joystick buttons. However for this to work well, I really need to add further sensor input providing feedback on longer-term horizontal and vertical motion…
- A downward-facing Ultrasonic Range Finder (URF) would provide the vertical motion tracking when combined with angles from the IMU. I’d looked at this a long time ago, but it stalled as I’d read the sensors could only run at up to a 100kbps I2C baudrate, which would prevent use of the higher 400kbps required for reading the FIFO. However, a quick test just now shows the URF working perfectly at 400kbps – see the sketch after this list.
- A downward-facing RPi camera, when combined with the URF, would provide horizontal motion tracking. Again, I’d written this off because of the URF, but now it’s worth progressing with. This is the Kitty++ code I started looking at during the summer and abandoned almost immediately, due both to the lack of spare time in the code and the need for extra CPU cores to do the camera motion processing; Chloe, with her Raspberry Pi B2 and her tangerine PiBow case, more than satisfies that requirement now.
- The magnetometer / compass on the IMU can provide longer term yaw stability which currently relies on just the integrated Z-axis gyro.
I do also have barometer and GPS sensors ‘in-stock’ but their application is primarily for long distance flights over variable terrain at heights above the range of the URF. This is well out of scope for my current aims, so for the moment then, I’ll leave the parts shelved.
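The URF test mentioned above amounted to little more than this – a sketch assuming an SRF02-style ranger (address 0x70, ranging command 0x51, result in registers 2/3); the exact registers depend on the sensor, so treat them as assumptions:

```python
import time
import smbus

URF_ADDRESS = 0x70    # SRF02-style default 7-bit I2C address
URF_COMMAND = 0x00    # command register: 0x51 = range in centimetres
URF_RESULT_HI = 0x02  # result high byte (low byte in 0x03)

bus = smbus.SMBus(1)  # with the I2C bus clocked at 400kbps

def urf_range():
    bus.write_byte_data(URF_ADDRESS, URF_COMMAND, 0x51)
    time.sleep(0.07)  # ranging takes roughly 65ms
    hi = bus.read_byte_data(URF_ADDRESS, URF_RESULT_HI)
    lo = bus.read_byte_data(URF_ADDRESS, 0x03)
    return ((hi << 8) | lo) / 100.0  # metres

print("URF height: %.2fm" % urf_range())
```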
I have a lot of work to do, so I’d better get on with it.