Still stuck

Hermione is still causing trouble with yaw control flights despite lots of refinements.  Here’s the latest.

Hermione’s troubles

@3s she’s climbed to about a meter high and then hovered for a second.  All the X, Y, and Z flight plan targets and sensor inputs are nicely aligned.

The ‘fun’ starts at 4 seconds.  The flight plan, written from my point of view, says move left by 1m over 4 seconds.  From Hermione’s point of view, with the yaw code in use, this translates to rotate anti-clockwise by 90° while moving forwards by 1m over 4 seconds.  The yaw graph from the sensors shows the ACW rotation is happening correctly.  The amber line in the Y graph shows the left / right distance target from H’s POV is correctly zero.  Similarly, the amber line in the X graph correctly shows she should move forwards by 1m over 4s.  All’s good as far as the targets are concerned, from both her POV and mine.
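
To make the target shuffling concrete, here's a minimal sketch of rotating an earth-frame target (my POV) into the quad's frame (her POV) using the current yaw angle; the function name and the X-forwards / Y-left axis convention are just for illustration, not lifted from the flight code:

import math

def earth_to_quad_frame(x_earth, y_earth, yaw):
    # Rotate an earth-frame X (forwards), Y (left) target into the quad's frame
    # given her current yaw angle in radians (anti-clockwise positive).
    # Illustrative only - axis conventions are assumed, not taken from the code.
    c = math.cos(yaw)
    s = math.sin(yaw)
    x_quad = x_earth * c + y_earth * s
    y_quad = -x_earth * s + y_earth * c
    return x_quad, y_quad

# A 90 degree ACW yaw turns my 'left 1m' (0, 1) into her 'forwards 1m' (1, 0):
print(earth_to_quad_frame(0.0, 1.0, math.radians(90)))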

But there’s a severe discrepancy from the sensor inputs’ POV.  From my POV, she rotated ACW 90° as expected, but then she moved forwards away from me, instead of left.  The blue line on the Y graph (the LiDAR and ground-facing video inputs) confirms this; it shows she moves right by about 0.8m from her POV.  But the rusty terracotta line in the Y graph (the double integrated accelerometer – gravity readings) shows exactly the opposite.  The grey fusion of the blue and terracotta lines cancels out, thus following the target perfectly but for completely the wrong reasons.

There are similar discrepancies in the X graph, where the LiDAR + Video blue line is the best match to what I saw: virtually no forward movement from H’s POV except for some slight forward movement after 8s when she should be hovering.

So the net of this?  The LiDAR / video processing is working perfectly.  The double integrated IMU accelerometer results are wrong, and I need to work out why.  The results shown are taken directly from the accelerometer and double integrated in Excel (much like the code does), and I’m pretty convinced I’ve got this right.  Yet more digging to be done.
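
For reference, the double integration itself is simple enough to check by hand; this is a minimal sketch of what the Excel cross-check (and, roughly, the code) does, assuming gravity has already been subtracted:

def double_integrate(accelerations, dt):
    # Integrate earth-frame acceleration (gravity already removed) twice to get
    # per-sample velocity and distance.  A sketch of the Excel cross-check, not
    # the flight code itself.
    velocity = 0.0
    distance = 0.0
    velocities = []
    distances = []
    for a in accelerations:
        velocity += a * dt           # m/s
        distance += velocity * dt    # m
        velocities.append(velocity)
        distances.append(distance)
    return velocities, distances

# e.g. 1s of a constant 0.1 m/s^2 acceleration sampled at 100Hz
velocities, distances = double_integrate([0.1] * 100, 0.01)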

In other news…

  • Ö has ground-facing lights much like Zoe had.  Currently they are always on, but ultimately I intend to use them in various ways, such as flashing during calibration; this requires a new PCB, however, to plug a MOSFET gate into a GPIO pin.
  • piNet has changed direction somewhat: I’m testing within the bounds of my garden whether I can define a target destination with GPS, and have enough accuracy for the subsequent flight from elsewhere to get to that target accurately.  This is step one in taking the GPS coordinates of the centre of a maze, and then starting a flight from the edge to get back there.

That’s all for now, folks.  Thanks for sticking with me during these quiet times.


P.S. I’ve got better things to do than worry about why everything goes astray @ 7s, 3s after the yaw-to-move-left started; it’s officially on hold as I’ve other stuff lurking in the background that’s about to flower.

And in other news…

the hedgehogs are back in our garden:

Welcome back!

This is a combination of the PiNoIR camera (v.1) along with this IR motion detector (the sensor is expensive because it’s digital), and some near-IR LEDs.  Below is a picture of the older model – the newer one runs on an A+ (for size and power reasons) and has been updated to Jessie, and now the LEDs are switchable by GPIO via a MOSFET to provide lighting for the camera only when needed.  The upgrade to Jessie means she boots without WiFi at night, and the next morning I can add the WiFi dongle to connect and check the results.

HogCam

Code

#!/usr/bin/env python
# HogCam - a python daemon process started at system boot, and stopped on shutdown
#        - it blocks waiting for the PIR to detect motion, lights the IR LEDs and
#          takes a snap with the PiNoIR camera, then waits before watching again
#
# Please see our GitHub repository for more information: https://github.com/pistuffing/nitelite/piglow
#

import signal
import time
import RPi.GPIO as GPIO
import os
from datetime import datetime
import subprocess

#------------------------------------------------------------
# Set up the PIR movement detection
#------------------------------------------------------------
GPIO_PIR = 18
GPIO_IR_LED = 12
GPIO.setmode(GPIO.BOARD)
GPIO.setup(GPIO_PIR, GPIO.IN, pull_up_down = GPIO.PUD_DOWN)
GPIO.setup(GPIO_IR_LED, GPIO.OUT)
GPIO.output(GPIO_IR_LED, GPIO.LOW)

#------------------------------------------------------------
# Final steps of setup
#------------------------------------------------------------
keep_looping = True

def Daemonize():
    # Run the raspistill child in its own process group so signals sent to this
    # daemon don't kill it mid-snap.
    os.setpgrp()

#------------------------------------------------------------
# Once booted, give the user a couple of minutes to place the camera
#------------------------------------------------------------
time.sleep(2 * 60.0)

try:
    while keep_looping:
        #----------------------------------------------------
        # Block waiting for motion detection
        #----------------------------------------------------
        GPIO.wait_for_edge(GPIO_PIR, GPIO.RISING)

        #----------------------------------------------------
        # Turn on the IR LED
        #----------------------------------------------------
        GPIO.output(GPIO_IR_LED, GPIO.HIGH)

        #----------------------------------------------------
        # Take a snap
        #----------------------------------------------------
        now = datetime.now()
        now_string = now.strftime("%y%m%d-%H%M%S")
        camera = subprocess.Popen(["raspistill", "-rot", "180", "-o", "/home/pi/Pictures/img_" + now_string + ".jpg", "-n", "-ISO", "800", "-ex", "night", "-ifx", "none"], preexec_fn = Daemonize)

        #----------------------------------------------------
        # Turn off the IR LED after 5s
        #----------------------------------------------------
        time.sleep(5)
        GPIO.output(GPIO_IR_LED, GPIO.LOW)

        #----------------------------------------------------
        # Wait 30s before checking for motion again
        #----------------------------------------------------
        time.sleep(30.0)
except KeyboardInterrupt:
    pass

GPIO.cleanup()

Save it as /home/pi/hogcam.py and make it executable:

chmod 775 /home/pi/hogcam.py

Load on boot

Raspbian uses systemd for running on boot.  In /etc/systemd/system, create a new file called hogcam.service:

[Unit]
Description=HogCam

[Service]
ExecStart=/home/pi/hogcam.py

[Install]
WantedBy=multi-user.target

Save it.  You can then enable, start and stop it as shown below; once enabled, it will also start automatically on boot.

sudo systemctl enable hogcam.service
sudo systemctl start hogcam.service
sudo systemctl stop hogcam.service

9 flights

For the record, here are 9 sequential flights, each lasting 10s: 3s ascent at 0.3m/s, 4s hover and 3s descent at 0.3m/s.  The drift is different in each.  There are actually only 8 flights in the video; during the 9th she never took off and then lost WiFi, so I had to unplug her and take her indoors.  I may post it tomorrow, though it mostly consists of me grumbling quietly while I pick her up with my arse facing the camera!

Hermione had Garmin and RaspiCam disabled – videoing the ground for lateral tracking is pointless if the height is not accurately known.

On the plus side, the new foam balls did an amazingly successful job of softening some quite high falls.

The GPS + compass plan

My intent with GPS and compass is that Hermione flies from an arbitrary take-off location to a predetermined target GPS location, oriented in the direction she’s flying.

Breaking that down into a little more detail:

  • Turn Hermione on, calibrate the compass, and wait for enough GPS satellites to be acquired.
  • Carry her to the destination landing point and capture the GPS coordinates, saving them to file.
  • Move to a random place in the open flying area and kick off the flight.
  • Before take-off, acquire the GPS coordinates of the starting point, and from those and the target coordinates, get the 3D flight direction vector (there’s a sketch of this after the list).
  • On take-off, climb to 1m, and while hovering, yaw to point in the direction of the destination target vector, using the compass as the only tool to give a N(X), W(Y), Up(Z) orientation vector – some account needs to be taken of magnetic north (compass) vs. true north (GPS).
  • Once done, fly towards the target, always pointing the way she’s flying (i.e. the yaw target is linked to the velocity sensor input), with the current GPS position changing during the flight, always realigning the direction target vector to the destination position.
  • On arrival at the target GPS location, she hovers for a second (i.e. braking) and descends.
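
As a taster of the direction-vector step, here’s a rough sketch of turning the two GPS fixes into an initial bearing, and of correcting a compass heading from magnetic to true north; the function names and the declination handling are mine for illustration rather than lifted from the flight code:

import math

def initial_bearing(lat1, lon1, lat2, lon2):
    # Initial great-circle bearing in degrees, clockwise from true north, from
    # the take-off fix (lat1, lon1) to the target fix (lat2, lon2).
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    d_lon = math.radians(lon2 - lon1)
    y = math.sin(d_lon) * math.cos(phi2)
    x = (math.cos(phi1) * math.sin(phi2) -
         math.sin(phi1) * math.cos(phi2) * math.cos(d_lon))
    return math.degrees(math.atan2(y, x)) % 360

def true_heading(compass_heading, declination):
    # Convert a magnetic compass heading to true north; declination is the local
    # magnetic declination in degrees, which needs looking up for the flying site.
    return (compass_heading + declination) % 360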

There’s a lot of detail hidden in the summary above, not least the fact that GPS provides yet another feed of 3D distance and velocity vectors to be fused with the accelerometer / PiCamera / LiDAR, so I’m going to have to go through it step by step.

The first step is to fly a square again, but with her orienting towards the next direction at each hover and, once moving to the next corner, having yaw follow the direction of movement.  Next comes compass calibration, and a flight plan based upon magnetic north, west and up.

However, someone’s invoked Murphy’s / Sod’s law on me again: Hermione is seeing I2C errors again despite no hardware or software changes in this area.  Zoe is behaving better, and I’m trying to double the motion tracking rate by doubling the video frame rate and the sampling rate for the Garmin LiDAR-Lite; the rate change is working for both, but the LiDAR readings seem to be duff, reading 60cm when the flight height is less than 10cm.  Grr 🙁

One last sanity check

Before moving on to compass and GPS usage, there’s one last step I want to ensure works: lateral movement.

The flight plan is defined thus (there’s a rough sketch of these legs in code after the list):

  • take-off in the centre of the square flight plan to about 1m height
  • move left by 50cm
  • move forward by 50cm – this places her in the top left corner of the square
  • move right by 1m
  • move back by 1m
  • move left by 1m
  • move forwards by 50cm
  • move right by 50cm
  • land back at the take-off point.
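
For illustration, here’s one way those legs could be written down as constant-velocity segments; the tuple layout and speeds are a sketch of the idea rather than the real flight plan format:

# Each leg: (duration in s, x velocity, y velocity, z velocity) in m/s, with
# X forwards, Y left and Z up - an illustrative encoding, not the real format.
SQUARE_FLIGHT_PLAN = [
    (3.0,  0.0,   0.0,   0.33),   # take off to roughly 1m
    (2.0,  0.0,   0.25,  0.0),    # left 50cm
    (2.0,  0.25,  0.0,   0.0),    # forwards 50cm - top left corner
    (4.0,  0.0,  -0.25,  0.0),    # right 1m
    (4.0, -0.25,  0.0,   0.0),    # back 1m
    (4.0,  0.0,   0.25,  0.0),    # left 1m
    (2.0,  0.25,  0.0,   0.0),    # forwards 50cm
    (2.0,  0.0,  -0.25,  0.0),    # right 50cm - back over the take-off point
    (3.0,  0.0,   0.0,  -0.33),   # land
]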

The result’s not perfect despite running the ground-facing camera at 640 x 640 pixels; to be honest, with lawn underneath her, I still think she did pretty well.  She’s still a little lurchy, but I think some pitch / roll rotation PID tuning over the IKEA mat should resolve this quickly.  Once again, you be the judge of whether she achieved this 34 second flight well enough.

Zoe++

Zoe is now running my split cluster gather + process code for the RaspiCam video macro-blocks.  She has super-bright LEDs from Broadcom with ceramic heatsinks so the frame doesn’t melt and she’s running the video at 400 x 400 px at 10fps.

The results:

And this, peeps, is nearly as good as it can be without more CPU cores or (heaven forbid) moving away from interpreted CPython to pre-compiled C*.  Don’t get me wrong, I can (will?) probably add minor tweaks to process compass data – the code is already collecting that; adding intentional lateral motion to the flight plan costs absolutely nothing – hovering stably in a steady headwind is identical processing to intentional forward movement in no wind.  But beyond that, I need more CPU cores without significant additional power requirements to support GPS and Scanse Sweep.  I hope that’s what the A3 eventually brings.

I’ve updated everything I can on GitHub to represent the current (and perhaps final) state of play.


* That’s not quite true; PyPy is Python with a just-in-time (JIT) compiler.  Apparently, it’s the dogs’ bollocks, the mutts’ nuts, the puppies’ plums.  Yet when I last tried it, it was slower, probably due to the RPi.GPIO and RPIO libraries needed.  Integrating those with PyPy requires a lot of work which, up until now, simply hasn’t been necessary.

Better macro-block interpretation

Currently, getting lateral motion from a frame full of macro-blocks is very simplistic: find the average SAD value for the frame, and then only include those vectors whose SAD is lower.
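
In code terms, the current scheme is roughly this; the block structure is made up for the sketch, and it isn’t the real tracking code:

def lateral_motion(macro_blocks):
    # macro_blocks: list of (dx, dy, sad) per block.  Keep only the blocks whose
    # SAD is below the frame average, then average their motion vectors.
    # A sketch of the current scheme, not the real tracking code.
    if not macro_blocks:
        return 0.0, 0.0
    average_sad = sum(sad for _, _, sad in macro_blocks) / len(macro_blocks)
    good = [(dx, dy) for dx, dy, sad in macro_blocks if sad < average_sad]
    if not good:
        return 0.0, 0.0
    return (sum(dx for dx, _ in good) / len(good),
            sum(dy for _, dy in good) / len(good))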

I’m quite surprised this works as well as it does but I’m fairly sure it can be improved.  There are four factors to the content of a frame of macro-blocks.

  • yaw change: all macro-block vectors will circle around the centre of the frame.
  • height change: all macro-block vectors will point towards or away from the centre of the frame.
  • lateral motion: all macro-block vectors point in the same direction across the frame.
  • noise: the whole purpose of macro-blocks is simply to find the best matching blocks between two frames; doing this over a chess board (for example) could well have any block from the first frame matching any of the 50% of like-coloured squares in the second.

Given a frame of macro-blocks, the yaw increment between frames can be found from the gyro, and its contribution thus removed easily.

The same goes for the height change, derived from the LiDAR.

That leaves either noise or lateral motion.  By averaging the remaining vectors, we can pick just those that are similar in distance / direction to the average and discard the rest as noise.  SAD doesn’t come into it.
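
Pulling those steps together, the improved scheme might look something like this; the per-block yaw and height corrections use assumed small-angle maths and made-up names, so treat it as a sketch rather than the finished algorithm:

import math

def filtered_lateral_motion(blocks, yaw_increment, height_increment, height, tolerance=1.0):
    # blocks: list of (x, y, dx, dy) - block centre relative to the frame centre
    # plus its motion vector, in pixels.  Remove the per-block vectors predicted
    # by the gyro yaw increment (radians) and the LiDAR height change, then keep
    # only vectors close to the average of what remains.
    if not blocks:
        return 0.0, 0.0

    residuals = []
    for x, y, dx, dy in blocks:
        # yaw: blocks circle the frame centre by the yaw increment
        yaw_dx, yaw_dy = -y * yaw_increment, x * yaw_increment
        # height: blocks move towards / away from the centre as height changes
        scale = -height_increment / height if height else 0.0
        residuals.append((dx - yaw_dx - x * scale, dy - yaw_dy - y * scale))

    avg_dx = sum(dx for dx, _ in residuals) / len(residuals)
    avg_dy = sum(dy for _, dy in residuals) / len(residuals)

    # discard the noise: only vectors near the average count as lateral motion
    good = [(dx, dy) for dx, dy in residuals
            if math.hypot(dx - avg_dx, dy - avg_dy) < tolerance]
    if not good:
        return avg_dx, avg_dy
    return (sum(dx for dx, _ in good) / len(good),
            sum(dy for _, dy in good) / len(good))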

This won’t be my first step however: that’s to work out why the height of the flight wasn’t anything like as stable as I’d been expecting.

Strutting her stuff

Finally, fusion worth showing.

Yes, height’s a bit variable as she doesn’t get accurate height readings below about 20cm.

Yes, it’s a bit jiggery because the scales of the IMU and other sensors aren’t quite in sync.

But fundamentally, it works – nigh on zero drift for 20s.  With just the IMU, I couldn’t get this minimal level of drift for more than a few seconds.

Next steps: take her out for a longer, higher flight to really prove how well this is working.

Irritations and Innovations

No breakthroughs to report but:

  • Zoe is now running indoors safely with or without motion fusion installed
  • Without the fusion, she drifts horizontally and continues to rise during the hover phase: this suggests the value for gravity captured at take-off has drifted during the flight, perhaps temperature related?  It’s only about 15°C in our house currently, which is outside the range she works well in.  The first test is to add a blob of Blu-Tack on the IMU so it isn’t cooled by the breeze from the props.
  • With fusion, her height is much better, but she swings laterally around her take-off point – the Garmin LiDAR-Lite is doing its job well but there’s some tuning required for the lateral motion from the Raspberry Pi Camera.  Also it’s dark in the play room, even with the lighting on, so I’m going to add LED lighting under her motors to give the camera better sight.  She’s flying over an IKEA LEKPLATS play mat, but ours seems very faded, so I’ll be getting her a new one.
  • I’ve added a whole bunch of safety trip wires so that, for example, if she’s 50cm above where the flight plan says she should be, the flight dies.  Together these make her much safer for flights indoors.
  • I’ve added enhanced scheduling to prioritise IMU over camera input when the IMU FIFO is reading half-full; this is to prevent FIFO overflows as camera processing sometimes takes a while, and the overflows have been happening a lot recently.
  • I’ve also added another couple of pairs of PIDs – I’m not sure how I got away without them before.  The equivalent pair controls yaw perfectly, but the pitch and roll angle PIDs were missing, skipping straight to the rotation rates instead (there’s a sketch of the cascade after this list):
    • distance (target – input) =PID=> corrective velocity target
    • velocity (target – input) =PID=> corrective acceleration target
    • acceleration target => angular target (maths to choose an angle for a desired acceleration)
    • angle (target – input) =PID=> corrective rotation target
    • rotation (target – input) =PID=> PWM output
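
For the curious, one axis of that cascade reads something like this; the PID class and the acceleration-to-angle maths are bare-bones placeholders, not the flight controller itself:

GRAVITY = 9.80665

class PID(object):
    # Minimal PID for illustration only.
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0

    def compute(self, target, sensor_input, dt):
        error = target - sensor_input
        self.integral += error * dt
        derivative = (error - self.previous_error) / dt
        self.previous_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def pitch_cascade(distance_target, distance_input, velocity_input, angle_input,
                  rotation_input, dt, distance_pid, velocity_pid, angle_pid,
                  rotation_pid):
    # distance -> velocity -> acceleration -> angle -> rotation rate -> PWM,
    # for a single axis.  Gains and the small-angle maths are placeholders.
    velocity_target = distance_pid.compute(distance_target, distance_input, dt)
    acceleration_target = velocity_pid.compute(velocity_target, velocity_input, dt)
    angle_target = acceleration_target / GRAVITY      # maths: acceleration -> tilt angle
    rotation_target = angle_pid.compute(angle_target, angle_input, dt)
    return rotation_pid.compute(rotation_target, rotation_input, dt)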

Together all these changes require a lot of tuning, tinkering and testing; I hope to report back with a video when there’s something worth sharing.

Salad bowl

Nothing exciting to report I’m afraid – still waiting for the rain to stop to take the girls out.  So I thought I’d show you what they look like.  Both are now kitted out with LiDAR and RaspiCam for vertical and lateral distance sampling.  They run the same software – the code enables 8 ESC objects if the hostname is hermione.local or 4 otherwise:

Salad bowl
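
The hostname check itself is trivial; something along these lines, with illustrative names rather than the real initialisation code:

import socket

# 8 ESCs for Hermione, 4 for anything else (i.e. Zoe) - an illustrative sketch
# of the hostname check, not the real initialisation code.
num_escs = 8 if socket.gethostname().split(".")[0] == "hermione" else 4
print("Setting up %d ESCs" % num_escs)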

As you may have guessed, Hermione’s lid is a salad bowl from Italy – I spotted one we had and it fitted perfectly, so I got myself an orange one – £5 + £15 shipping to avoid the Italian postal service stealing it in transit.  The bowl is concave on the underside (in its role as a salad bowl).  Luckily, it’s 10cm in diameter, matching some black acrylic disks I have (one of which is on Hermione’s underside, to which attach the Garmin LiDAR-Lite and the RaspiCam).  Ultimately, the Scanse Sweep will be attached to this top black disc.

On that train of thought, I need to get a move on as the Scanse Sweep could well arrive before Christmas according to the latest update note.