“To mow, or not to mow, that is the question:

Whether ’tis easier for the macro-blocks to track
The clovers and daisies of contrasting colour..”

The answer is no, I shouldn’t have mown the lawn.  With the kids’ toys moved out of the way, and any ungreen contrasting features slain, there was nothing to distinguish one blade of shorn grass from another and Hermione drifted wildly.  Reinstating the kids’ chaos restored high quality tracking over 5m.

The point of the flight was to compare GPS versus macro-block lateral tracking.  Certainly over this 5m flight, down-facing video beat GPS hands down:

Lateral tracking

My best guess interpretation of the GPS graph is that the flight actually ran from the 2m to the 7m mark diagonally, heading north-west.  The camera POV doesn’t include compass data, so it’s correctly showing her flying forwards by 5m.  The compass code isn’t working accurately yet and needs more investigation: it was showing ~90° (i.e. east) rather than the true 45° (i.e. north-east) shown by the GPS and a handheld compass.

I’ve made some more refinements to the scheduling of the sensor reads, and to the accuracy of the GPS data streamed from the GPS process.  It’s worth viewing this graph full screen – each spike shows the time in seconds between motion processing loops, i.e. the time spent processing the other sensors; 10ms indicates just IMU data was processed.  The fact that no loop takes more than 0.042s* even with full diagnostics running means I could put the sampling rate back up to 1kHz – it’s at 500Hz at the moment.  More importantly, it shows processing is nicely spread out: each sensor is getting its fair share of the processing and nobody is hogging the limelight.

Motion processing

As a result, I’ve updated the code on GitHub.


*42ms is the point at which the IMU FIFO overflows at 1kHz sampling: 512 byte FIFO size / 12 byte sample size / 1kHz sampling rate ≈ 42.7ms
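
For what it’s worth, the arithmetic behind that footnote is simply this (values as quoted above):

fifo_size = 512.0          # bytes
sample_size = 12.0         # bytes per IMU sample
sampling_rate = 1000.0     # samples per second

overflow_time = fifo_size / sample_size / sampling_rate
print("FIFO overflows after %.1fms" % (overflow_time * 1000))   # ~42.7ms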

A busy week…

First, the result: an autonomous 10m linear flight forwards:

You can see her stability degrade as she leaves the contrasting shadow area cast by the tree branches in the sunshine.  At the point chaos broke loose, she believed she had reached her 10m target and thus was descending; she’s not far wrong – the start and end points are the two round stones placed a measured 10m apart to within a few centimetres.

So here’s what’s changed in the last week:

  • I’ve added a heatsink to my B3 and changed the base layer of my Pimoroni Tangerine case, as the newer one has extra vents underneath to allow better airflow.  The aim is to keep the temperature stable, partly to stop the CPU overheating and slowing down, but mostly to avoid IMU drift with temperature.  Note that normally the Pi3 has a shelf above it carrying the large LiPo and acting as a sun-shield – you can just about see the poles that support it at the corners of the Pi3 Tangerine case.

    Hermione’s brain exposed

  • I’ve moved the down-facing Raspberry Pi V2.1 camera and Garmin LiDAR-Lite sensor to allow space for the Scanse Sweep sensor which should arrive in the next week or two courtesy of Kickstarter funding.
    Hermione’s delicate underbelly

    The black disk in the middle will seat the Scanse Sweep perfectly; it lifts it above the Garmin LiDAR Lite V3 so their lasers don’t interfere with each other; finally the camera has been moved far away from both to make sure they don’t feature in its video.

  • I’ve changed the fusion code so vertical and lateral values are fused independently; previously, if the camera was struggling to spot motion in dim light, the LiDAR height wasn’t fused either, and on some flights Hermione would climb during hover.
  • I’ve sorted out the compass calibration code so Hermione knows which way she’s pointing.  The code just logs the output currently, but soon it will be fused with the gyro yaw rate and will interact with the below…
  • I’ve added a new process tracking GPS position and feeding the results over a shared-memory OS FIFO file, in the same way the video macro-blocks are passed across now.  The reason both live in their own processes is that each blocks while reading its sensor – about one second for GPS and 0.1 seconds for video – and that blocking must be completely isolated from the core motion processing; a rough sketch of the pattern follows this list.  As with the compass, the GPS data is just logged currently, but soon the GPS will be used to set the end-point of a flight; then, when launched from somewhere away from that end-point, the combination of compass and GPS will provide the sensor inputs to ensure the flight heads in the right direction and recognises when it’s reached its goal.
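
As a rough sketch of that pattern (the FIFO path, record format and read_gps_fix() helper below are all hypothetical, not the real code), the GPS reader lives in its own process and streams fixes through a named FIFO that the motion processing loop polls without blocking:

import os
import select
from multiprocessing import Process

FIFO_NAME = "/dev/shm/gps_fifo"                  # hypothetical shared-memory FIFO

def gps_streamer():
    # Child process: blocks for ~1s per GPS fix, completely isolated
    # from the motion processing loop running in the parent.
    with open(FIFO_NAME, "wb", 0) as fifo:
        while True:
            lat, lon, alt = read_gps_fix()       # placeholder for the real GPS read
            fifo.write(("%.7f,%.7f,%.2f\n" % (lat, lon, alt)).encode())

os.mkfifo(FIFO_NAME)
gps = Process(target=gps_streamer)
gps.daemon = True
gps.start()

# Parent (motion processing) side: only read when a fix is waiting, so the
# slow GPS read never stalls the IMU loop.
gps_fd = os.open(FIFO_NAME, os.O_RDONLY)         # pairs up with the child's open()

def poll_gps():
    readable, _, _ = select.select([gps_fd], [], [], 0)
    return os.read(gps_fd, 4096) if readable else None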

As a result of all the above, I’ve updated GitHub.

Still stuck

Hermione is still causing trouble with yaw control flights despite lots of refinements.  Here’s the latest.

Hermione’s troubles

@3s she’s climbed to about a metre and then hovered for a second.  All the X, Y, and Z flight plan targets and sensor inputs are nicely aligned.

The ‘fun’ starts at 4 seconds.  The flight plan, written from my point of view, says move left by 1m over 4 seconds.  From Hermione’s point of view, with the yaw code in use, this translates to rotate anti-clockwise by 90° while moving forwards by 1m over 4 seconds.  The yaw graph from the sensors shows the ACW rotation is happening correctly.  The amber line in the Y graph shows the left / right distance target from H’s POV is correctly zero.  Similarly, the amber line in the X graph correctly shows she should move forwards by 1m over 4s.  All’s good as far as the targets are concerned, from both her POV and mine.
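
For reference, the translation between my earth-frame flight plan and Hermione’s yaw-rotated frame is just a 2D rotation by the current yaw angle; a minimal sketch, with the axis convention (X forwards, Y left, ACW yaw positive) assumed rather than lifted from the real code:

import math

def earth_to_body(x_earth, y_earth, yaw):
    # Rotate an earth-frame target (X forwards-at-launch, Y left) into the
    # body frame after the quad has yawed by 'yaw' radians anti-clockwise.
    x_body =  x_earth * math.cos(yaw) + y_earth * math.sin(yaw)
    y_body = -x_earth * math.sin(yaw) + y_earth * math.cos(yaw)
    return x_body, y_body

# "Move left by 1m" from my POV, after a 90 degree ACW yaw, becomes
# "move forwards by 1m" from Hermione's POV:
print(earth_to_body(0.0, 1.0, math.radians(90)))   # ~(1.0, 0.0)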

But there’s a severe discrepancy from the sensor inputs’ POV.  From my POV, she rotated ACW by 90° as expected, but then she moved forwards away from me instead of left.  The blue line on the Y graph (the LiDAR and ground-facing video inputs) confirms this: it shows she moves right by about 0.8m from her POV.  But the rusty terracotta line in the Y graph (the accelerometer-minus-gravity readings, double integrated) shows exactly the opposite.  The grey fused line cancels the two out and so follows the target perfectly, but for completely the wrong reasons.

There are similar discrepancies in the X graph, where the LiDAR + video blue line is the best match to what I saw: virtually no forward movement from H’s POV, except for a little forwards drift after 8s when she should be hovering.

So the net of this?  The LiDAR / video processing is working perfectly.  The double-integrated IMU accelerometer results are wrong, and I need to work out why.  The results shown are taken directly from the accelerometer and double integrated in Excel (much as the code does), and I’m pretty convinced I’ve got that right.  Yet more digging to be done.
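
For reference, the double integration itself is nothing cleverer than this sketch (assuming evenly spaced samples and acceleration with gravity already subtracted):

def double_integrate(accelerations, dt):
    # Integrate acceleration (m/s^2) -> velocity (m/s) -> distance (m),
    # with samples spaced dt seconds apart.
    velocity = 0.0
    distance = 0.0
    distances = []
    for a in accelerations:
        velocity += a * dt           # first integration
        distance += velocity * dt    # second integration
        distances.append(distance)
    return distances

# Sanity check: a constant 0.1 m/s^2 for 4s at 500Hz gives ~0.8m (s = 0.5 * a * t^2)
print(double_integrate([0.1] * 2000, 1.0 / 500.0)[-1])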

In other news…

  • Ö has ground-facing lights much like Zoe had.  Currently they are always on, but ultimately I intend to use them in various ways, such as flashing during calibration etc; this requires a new PCB, however, to connect a MOSFET gate to a GPIO pin.
  • piNet has changed direction somewhat: I’m testing, within the bounds of my garden, whether I can define a target destination with GPS and get enough accuracy for a subsequent flight from elsewhere to reach that target.  This is step one in taking the GPS coordinates of the centre of a maze, and then starting a flight from the edge to get back there.

That’s all for now, folks.  Thanks for sticking with me during these quiet times.


P.S. I’ve got better things to do than worry about why everything goes astray @ 7s, 3s after the yaw to move left started; it’s officially on hold as I’ve other stuff lurking in the background that’s about to flower.

And in other news…

the hedgehogs are back in our garden:

Welcome back!

This is a combination of the PiNoIR camera (v.1) along with this IR motion detector (the sensor is expensive because it’s digital), and some near-IR LEDs.  Below is a picture of the older model; the newer one runs on an A+ (for size and power reasons), has been updated to Jessie, and now the LEDs are switched by a GPIO pin via a MOSFET to provide lighting for the camera only when needed.  The upgrade to Jessie means she boots without WiFi at night; the next morning I can plug in the WiFi dongle to connect and check the results.

HogCam

Code

#!/usr/bin/env python
# HogCam - a python daemon process started at system boot, and stopped on shutdown
#        - it blocks waiting for the PIR motion detector, turns on the near-IR LEDs,
#          takes a timestamped photo with raspistill, then waits 30s before re-arming
#
# Please see our GitHub repository for more information: https://github.com/pistuffing/nitelite/piglow
#

import signal
import time
import RPi.GPIO as GPIO
import os
from datetime import datetime
import subprocess

#------------------------------------------------------------
# Set up the PIR movement detection
#------------------------------------------------------------
GPIO_PIR = 18
GPIO_IR_LED = 12
GPIO.setmode(GPIO.BOARD)
GPIO.setup(GPIO_PIR, GPIO.IN, pull_up_down=GPIO.PUD_DOWN)
GPIO.setup(GPIO_IR_LED, GPIO.OUT)
GPIO.output(GPIO_IR_LED, GPIO.LOW)

#------------------------------------------------------------
# Final steps of setup
#------------------------------------------------------------
keep_looping = True

def Daemonize():
    # Run by the raspistill child (via preexec_fn) so it detaches into its
    # own process group, isolated from this daemon's signals.
    os.setpgrp()

#------------------------------------------------------------
# Once booted, give the user a couple of minutes to place the camera
#------------------------------------------------------------
time.sleep(2 * 60.0)

try:
    while keep_looping:
        #----------------------------------------------------
        # Block waiting for motion detection
        #----------------------------------------------------
        GPIO.wait_for_edge(GPIO_PIR, GPIO.RISING)

        #----------------------------------------------------
        # Turn on the IR LED
        #----------------------------------------------------
        GPIO.output(GPIO_IR_LED, GPIO.HIGH)

        #----------------------------------------------------
        # Take a snap
        #----------------------------------------------------
        now = datetime.now()
        now_string = now.strftime("%y%m%d-%H%M%S")
        camera = subprocess.Popen(["raspistill", "-rot", "180", "-o", "/home/pi/Pictures/img_" + now_string + ".jpg", "-n", "-ISO", "800", "-ex", "night", "-ifx", "none"], preexec_fn = Daemonize)

        #----------------------------------------------------
        # Turn off the IR LED after 5s
        #----------------------------------------------------
        time.sleep(5)
        GPIO.output(GPIO_IR_LED, GPIO.LOW)

        #----------------------------------------------------
        # Wait 30s before checking for motion again
        #----------------------------------------------------
        time.sleep(30.0)
except KeyboardInterrupt:
    pass

GPIO.cleanup()

Save it as /home/pi/hogcam.py and make it executable:

chmod 775 /home/pi/hogcam.py

Load on boot

Raspbian uses systemd for running programs on boot.  In /etc/systemd/system, create a new file called hogcam.service:

[Unit]
Description=HogCam

[Service]
ExecStart=/home/pi/hogcam.py

[Install]
WantedBy=multi-user.target

Save it off.  You can then enable, start and stop it thus.  It will also start on boot.

sudo systemctl enable hogcam.service
sudo systemctl start hogcam.service
sudo systemctl stop hogcam.service

9 flights

For the record, here are 9 sequential flights, each lasting 10s: 3s ascent at 0.3m/s, 4s hover and 3s descent at 0.3m/s.  The drift is different in each.  There are actually only 8 flights in the video; in the 9th she never took off and then lost WiFi, so I had to unplug her and take her indoors.  I may post that one tomorrow, though it mostly consists of me grumbling quietly while I pick her up with my arse facing the camera!

Hermione had Garmin and RaspiCam disabled – videoing the ground for lateral tracking is pointless if the height is not accurately known.

On the plus side, the new foam balls did an amazingly successful job of softening some quite high falls.

The GPS + compass plan

My intent with GPS and compass is that Hermione flies from an arbitrary take-off location to a predetermined target GPS location, oriented in the direction she’s flying.

Breaking that down into a little more detail:

  • Turn Hermione on and calibrate the compass, and wait for enough GPS satellites to be acquired.
  • Carry her to the destination landing point and capture the GPS coordinates, saving them to file.
  • Move to a random place in the open flying area and kick off the flight.
  • Before take-off, acquire the GPS coordinates of the starting point and, from those and the target coordinates, derive the 3D flight direction vector (see the sketch after this list).
  • On take-off, climb to 1m and, while hovering, yaw to point in the direction of the destination target vector, using the compass as the only tool to give a N(X), W(Y), Up(Z) orientation vector – some account needs to be taken of magnetic north (compass) vs. true north (GPS).
  • Once done, fly towards the target, always pointing the way she’s flying (i.e. the yaw target is linked to the velocity sensor input), with the current GPS position updating during the flight and always realigning the direction target vector towards the destination.
  • On arrival at the target GPS location, she hovers for a second (i.e. braking) and descends.
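
A sketch of how the direction vector might fall out of the two sets of coordinates (the helper below is hypothetical; over garden-sized distances an equirectangular approximation is plenty):

import math

EARTH_RADIUS = 6371000.0                      # metres

def gps_direction(start, target):
    # start and target are (latitude, longitude) in degrees.
    # Returns a unit (north, east) direction plus the distance in metres,
    # using a flat-earth approximation that's fine over tens of metres.
    lat1, lon1 = math.radians(start[0]), math.radians(start[1])
    lat2, lon2 = math.radians(target[0]), math.radians(target[1])
    north = (lat2 - lat1) * EARTH_RADIUS
    east = (lon2 - lon1) * EARTH_RADIUS * math.cos((lat1 + lat2) / 2)
    distance = math.hypot(north, east)
    return north / distance, east / distance, distance

# Bearing relative to true north; the local magnetic declination still needs
# subtracting before this is compared with the compass yaw.
north, east, distance = gps_direction((51.00000, -1.00000), (51.00009, -0.99987))
bearing = math.degrees(math.atan2(east, north))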

There’s a lot of detail hidden in the summary above, not least the fact that GPS provides yet another feed of 3D distance and velocity vectors to be fused with the accelerometer / PiCamera / LiDAR, so I’m going to have to go through it step by step.

The first step is to fly a square again, but with her orienting towards the next direction at each hover and, once moving to the next corner, having yaw follow the direction of movement.  Next comes compass calibration, and a flight plan based upon magnetic north, west and up.

However, someone’s invoked Murphy’s / Sod’s law on me again: Hermione is seeing the I2C errors again despite no hardware or software changes in this area.  Zoe is behaving better, and I’m trying to double the motion tracking rate by doubling the video frame rate and the sampling rate for the Garmin LiDAR-Lite; the rate change is working for both, but the LiDAR readings seem to be duff, reading 60cm when the flight height is less than 10cm.  Grr 🙁

One last sanity check

Before moving on to compass and GPS usage, there’s one last step I want to ensure works: lateral movement.

The flight plan is defined thus (a sketch of how such a plan might be encoded follows the list):

  • take-off in the center of a square flight plan to about 1m height
  • move left by 50cm
  • move forward by 50cm – this places her in the top left corner of the square
  • move right by 1m
  • move back by 1m
  • move left by 1m
  • move forwards by 50cm
  • move right by 50cm
  • land back at the take-off point.
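
For what it’s worth, a plan like that boils down to a list of timed velocity segments; a rough sketch of the idea follows, with purely illustrative durations and speeds, and a representation that may well differ from the real flight plan format:

# Each segment: (duration s, vx m/s, vy m/s, vz m/s), with X forwards, Y left, Z up.
SQUARE_PLAN = [
    (3.0,  0.0,   0.0,   0.33),   # take-off: climb to ~1m
    (1.0,  0.0,   0.0,   0.0 ),   # hover
    (2.0,  0.0,   0.25,  0.0 ),   # left 50cm
    (2.0,  0.25,  0.0,   0.0 ),   # forwards 50cm - top left corner
    (4.0,  0.0,  -0.25,  0.0 ),   # right 1m
    (4.0, -0.25,  0.0,   0.0 ),   # back 1m
    (4.0,  0.0,   0.25,  0.0 ),   # left 1m
    (2.0,  0.25,  0.0,   0.0 ),   # forwards 50cm
    (2.0,  0.0,  -0.25,  0.0 ),   # right 50cm - back over the take-off point
    (3.0,  0.0,   0.0,  -0.33),   # land
]

def velocity_target(t):
    # Velocity target t seconds into the flight.
    for duration, vx, vy, vz in SQUARE_PLAN:
        if t < duration:
            return vx, vy, vz
        t -= duration
    return 0.0, 0.0, 0.0          # plan complete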

The result’s not perfect despite running the ground-facing camera at 640 x 640 pixels; to be honest, with lawn underneath her, I still think she did pretty well.  She’s still a little lurchy, but I think some pitch / roll rotation PID tuning over the IKEA mat should resolve this quickly.  Once again, you be the judge of whether she flew this 34 second flight well enough.

Zoe++

Zoe is now running my split cluster gather + process code for the RaspiCam video macro-blocks.  She has super-bright LEDs from Broadcom with ceramic heatsinks so the frame doesn’t melt and she’s running the video at 400 x 400 px at 10fps.

The results?:

And this, peops, is nearly as good as it can be without more CPU cores or (heaven forbid) moving away from interpreted CPython to pre-compiled C*.  Don’t get me wrong, I can (will?) probably add minor tweaks to process compass data – the code is already collecting that – and adding intentional lateral motion to the flight plan costs absolutely nothing: hovering stably in a steady headwind is identical processing to intentional forwards movement in no wind.  But beyond that, I need more CPU cores without significant additional power requirements to support GPS and Scanse Sweep.  I hope that’s what the A3 eventually brings.

I’ve updated everything I can on GitHub to represent the current (and perhaps final) state of play.


* That’s not quite true; PyPy is Python with a just-in-time (JIT) compiler.  Apparently it’s the dogs’ bollocks, the mutts’ nuts, the puppies’ plums.  Yet when I last tried it, it was slower, probably due to the RPi.GPIO and RPIO libraries needed.  Integrating those with PyPy requires a lot of work which, up until now, simply hasn’t been necessary.

Better macro-block interpretation

Currently, getting lateral motion from a frame full of macro-blocks is very simplistic: find the average SAD value for the frame, and then only include those vectors whose SAD is lower.
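
In code terms, the current filter is roughly this (a sketch, not the actual implementation):

def average_lateral_vector(blocks):
    # blocks: list of (dx, dy, sad) - one motion vector plus SAD per macro-block.
    # Keep only the better-matched blocks (SAD below the frame average) and
    # average their vectors to estimate the frame's lateral motion.
    mean_sad = sum(sad for _, _, sad in blocks) / float(len(blocks))
    good = [(dx, dy) for dx, dy, sad in blocks if sad < mean_sad]
    if not good:
        return 0.0, 0.0
    return (sum(dx for dx, _ in good) / float(len(good)),
            sum(dy for _, dy in good) / float(len(good)))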

I’m quite surprised this works as well as it does, but I’m fairly sure it can be improved.  There are four contributors to the content of a frame of macro-blocks:

  • yaw change: all macro-block vectors circle around the centre of the frame.
  • height change: all macro-block vectors point towards or away from the centre of the frame.
  • lateral motion: all macro-block vectors point in the same direction.
  • noise: the whole purpose of macro-blocks is simply to find the best matching blocks between two frames; doing this over a chessboard (for example) could well have any block from the first frame matching any one of the 50% of like-coloured blocks in the second.

Given a frame of macro-blocks, the yaw increment between frames can be found from the gyro, and its contribution can thus be removed easily.

The same goes for the height change, derived from the LiDAR.

That leaves either noise or a lateral vector.  By averaging what remains, we can then pick just those vectors that are similar in distance and direction to the average vector.  SAD doesn’t come into the matter.
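
Put together, the improved scheme might look something like this sketch (the frame geometry, yaw increment and height-derived zoom factor are assumed inputs, and the tolerance is arbitrary):

import math

def lateral_vector(blocks, yaw_increment, zoom_ratio, frame_centre):
    # blocks: list of (x, y, dx, dy) - macro-block position and motion vector.
    # yaw_increment: frame-to-frame yaw from the gyro (radians, ACW positive).
    # zoom_ratio: frame-to-frame scale change derived from the LiDAR height.
    cx, cy = frame_centre
    residuals = []
    for x, y, dx, dy in blocks:
        rx, ry = x - cx, y - cy
        yaw_dx, yaw_dy = -ry * yaw_increment, rx * yaw_increment              # rotation about the centre
        zoom_dx, zoom_dy = rx * (zoom_ratio - 1.0), ry * (zoom_ratio - 1.0)   # towards / away from the centre
        residuals.append((dx - yaw_dx - zoom_dx, dy - yaw_dy - zoom_dy))

    mean_dx = sum(dx for dx, _ in residuals) / float(len(residuals))
    mean_dy = sum(dy for _, dy in residuals) / float(len(residuals))

    # Keep only the vectors close to the average; everything else is treated as noise.
    tolerance = 2.0                                                           # pixels - tune to taste
    good = [(dx, dy) for dx, dy in residuals
            if math.hypot(dx - mean_dx, dy - mean_dy) < tolerance]
    if not good:
        return mean_dx, mean_dy
    return (sum(dx for dx, _ in good) / float(len(good)),
            sum(dy for _, dy in good) / float(len(good)))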

This won’t be my first step however: that’s to work out why the height of the flight wasn’t anything like as stable as I’d been expecting.

Strutting her stuff

Finally, fusion worth showing.

Yes, height’s a bit variable as she doesn’t get accurate height readings below about 20cm.

Yes, it’s a bit jiggery because the scales of the IMU and the other sensors aren’t quite in sync.

But fundamentally, it works – nigh on zero drift for 20s.  With just the IMU, I couldn’t get this minimal level of drift for more than a few seconds.

Next steps: take her out for a longer, higher flight to really prove how well this is working.