First, the result: an autonomous 10m linear flight forwards:
You can see her stability degrade as she leaves the contrasting shadow area cast by the tree branches in the sunshine. At the point chaos broke loose, she believed she had reached her 10m target and was thus descending; she’s not far wrong – the start and end points are the two round stones placed a measured 10m apart, accurate to within a few centimetres.
So here’s what’s changed in the last week:
- I’ve added a heatsink to my B3 and changed the base layer of my Pimoroni Tangerine case, as the newer one has extra vents underneath to allow better airflow. The aim is to keep the temperature stable: partly to stop the CPU overheating and slowing down, but mostly to avoid IMU drift over temperature. Note that normally the Pi3 has a shelf above it carrying the large LiPo and acting as a sun-shield – you can just about see the poles that support it at the corners of the Pi3 Tangerine case.
- I’ve moved the down-facing Raspberry Pi V2.1 camera and Garmin LiDAR-Lite sensor to make space for the Scanse Sweep sensor, which should arrive in the next week or two courtesy of Kickstarter funding.
The black disk in the middle will seat the Scanse Sweep perfectly; it lifts it above the Garmin LiDAR-Lite V3 so their lasers don’t interfere with each other, and the camera has been moved well away from both to make sure they don’t feature in its video.
- I’ve changed the fusion code so that vertical and lateral values are fused independently; previously, if the camera was struggling to spot motion in dim light, the LiDAR height wasn’t fused either, and on some flights Hermione would climb during hover. There’s a rough sketch of the per-axis split just after this list.
- I’ve sorted out the compass calibration code so Hermione knows which way she’s pointing. The code just logs the output currently, but soon it will be fused with the gyrometer yaw rate and will interact with the GPS tracking below; the second sketch after the list shows the calibration idea.
- I’ve added a new process tracking the GPS position and feeding the results over a shared-memory OS FIFO file, in the same way the video macro-blocks are passed across now. Each runs in its own process because each blocks while reading its sensor – about a second per GPS fix, 0.1 seconds per video frame – and that degree of blocking must be completely isolated from the core motion processing; the third sketch below shows the scheme. As with the compass, the GPS data is just logged currently, but soon the GPS will be used to set the end-point of a flight; then, when launched from somewhere away from that end-point, the combination of compass and GPS will provide the sensor inputs to ensure the flight heads in the right direction and recognises when it’s reached its goal.
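For the curious, here’s a minimal Python sketch of what the per-axis fusion split looks like. The names (`State`, `fuse`, `update`) and the complementary-filter blend are mine for illustration, not the actual code; the point is only that the vertical and lateral paths no longer gate each other:

```python
from dataclasses import dataclass

@dataclass
class State:
    z: float = 0.0    # fused height (m)
    vx: float = 0.0   # fused lateral velocities (m/s)
    vy: float = 0.0

def fuse(predicted, measured, dt, tau=0.5):
    # Complementary filter: trust the IMU prediction short-term,
    # the direct sensor measurement long-term. tau is illustrative.
    alpha = tau / (tau + dt)
    return alpha * predicted + (1.0 - alpha) * measured

def update(state, imu_vz, imu_ax, imu_ay, camera_v, lidar_z, dt):
    # Vertical fusion no longer depends on the camera: the LiDAR height
    # is blended in whenever a reading is available...
    if lidar_z is not None:
        state.z = fuse(state.z + imu_vz * dt, lidar_z, dt)
    # ...while lateral fusion is skipped independently when the camera
    # couldn't spot motion (e.g. in dim light), leaving height untouched.
    if camera_v is not None:
        state.vx = fuse(state.vx + imu_ax * dt, camera_v[0], dt)
        state.vy = fuse(state.vy + imu_ay * dt, camera_v[1], dt)
    return state
```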
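The compass calibration is the standard hard-iron correction: rotate the frame through all orientations, take the midpoint of each axis’s min/max as the offset, and subtract it before computing yaw. A sketch, with illustrative names and assuming a level frame:

```python
import math

def calibrate_hard_iron(samples):
    # samples: list of (mx, my, mz) magnetometer readings collected while
    # rotating the frame through all orientations. The hard-iron offset
    # is the midpoint of the min/max range seen on each axis.
    mins = [min(s[i] for s in samples) for i in range(3)]
    maxs = [max(s[i] for s in samples) for i in range(3)]
    return [(lo + hi) / 2.0 for lo, hi in zip(mins, maxs)]

def heading(mx, my, offsets):
    # Yaw in radians from the calibrated X/Y field components;
    # assumes the frame is level and this particular axis convention.
    return math.atan2(my - offsets[1], mx - offsets[0])
```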
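And here’s roughly how the GPS process and FIFO hang together. The GPS read is stubbed out with a sleep, and I’m assuming the FIFO lives on /dev/shm (tmpfs); the real code obviously differs, but the shape – a forked child blocking on the sensor and writing lines to the FIFO, with the main loop doing a zero-timeout poll – is the point:

```python
import json
import os
import select
import time

FIFO = "/dev/shm/gps_fifo"   # on tmpfs, so the FIFO stays in shared memory

def read_gps_fix():
    # Stand-in for the real (blocking, ~1s) GPS read.
    time.sleep(1)
    return {"lat": 51.5, "lon": -1.8, "alt": 100.0}

def gps_process():
    # Child: blocks happily on the GPS, isolated from motion processing.
    with open(FIFO, "w") as fifo:
        while True:
            fifo.write(json.dumps(read_gps_fix()) + "\n")
            fifo.flush()

if __name__ == "__main__":
    if not os.path.exists(FIFO):
        os.mkfifo(FIFO)
    if os.fork() == 0:
        gps_process()

    with open(FIFO, "r") as gps:
        poller = select.poll()
        poller.register(gps.fileno(), select.POLLIN)
        while True:
            # Zero-timeout poll: the motion loop never waits on the GPS.
            if poller.poll(0):
                print("GPS:", json.loads(gps.readline()))  # just logged for now
            time.sleep(0.01)    # stand-in for the core motion processing
```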
As a result of all the above, I’ve updated GitHub.