Hermione’s I/O errors

Rather than spending ages trying to work out how to get rid of Hermione’s electronic errors, I’ve opted for a much simpler solution: powering her HoG from a 5V 2.1A battery bank.  I used to do this in the olden days, and only swapped to LiPo + regulator when I needed the space for the LiDAR and camera.  But Hermione has plenty of spare space for both.  Here’s what happened:

She was doing pretty well tracking against the lawn in windy conditions, right up to the point she went nuts and zoomed off into the sky.  Shortly after, the LiDAR detected she was higher than the flight plan said she should be, and that killed the flight.  Once she vanishes from the shot, watch for her reflection in the window as she falls back to earth.  She landed in a soft flower bed, but that still resulted in two cut motor wires.  No I/O error happened (good), but this type of radical error normally accompanies a FIFO overflow, and one wasn’t reported either.  That leaves me stuck risking these dodgy flights until I can track down the cause.

30s with, <10s without

A few test runs.  In summary, with the LiDAR and camera fused with the IMU, Zoe stays over her play mat at a controlled height for the length of the 30s flight.  Without the fusion, she lasted just a few seconds before she drifted off the mat, lost her height, or headed towards me with menace (a kill ensued).  I think that’s pretty conclusive proof the fusion code works!

With Fusion:

Without Fusion:

Strutting her stuff

Finally, fusion worth showing.

Yes, height’s a bit variable as she doesn’t get accurate height readings below about 20cm.

Yes, it’s a bit jittery because the scales of the IMU and the other sensors aren’t quite in sync.

But fundamentally, it works – nigh on zero drift for 20s.  With just the IMU, I couldn’t get this minimal level of drift for more than a few seconds.

Next steps: take her out for a longer, higher flight to really prove how well this is working.

Goal!

The paint had hardly dried on the raspivid solution for unbuffered macro-block streaming before a conversation on the Raspberry Pi forums yielded the way to get picamera to stream the video macro-blocks without buffering: explicitly open an unbuffered output file rather than leaving picamera to second-guess what’s needed.

Cutting to the chase, here’s the result.

Down-facing RaspiCam stabilized Zoe flight from Andy Baker on Vimeo.

Yes, she drifted forward into the football goal over the course of the 20 seconds, but that’s 4 times longer to drift that far, so frankly I’m stunned by how good the flight was!  I have ideas as to why, and they’ll be the subject of my next post.

And the best bit? No damage was done to my son’s goal net!


Camera motion tracking code

I’ve reworked the code so that the video is collected in a daemonized process and fed into a shared memory FIFO. The main code for processing the camera output is now organised to be easily integrated into the quadcopter code. This’ll happen for Hermione once I’ve built her with a new PCB and 512MB memory A+. I suspect the processing overhead of this is very light, given that most of the video processing happens in the GPU, and the motion processing is some very simple averaging.

#!/usr/bin/python
from __future__ import division
import os
import sys
import picamera
import select
import struct
import subprocess
import signal
import time

####################################################################################################
#
# Motion.py - motion tracker based upon video-frame macro-blocks.  Note this happens per flight - 
#             nothing is carried between flights when this is merged with the quadcopter code.
#
####################################################################################################

#--------------------------------------------------------------------------------------------------
# Video at 10fps. Each frame is 320 x 320 pixels.  Each macro-block is 16 x 16 pixels.  Due to an 
# extra column of macro-blocks (dunno why), that means each frame breaks down into 21 columns by 
# 20 rows = 420 macro-blocks, each of which is 4 bytes - 1 signed byte X, 1 signed byte Y and 2 unsigned
# bytes SAD (sum of absolute differences). 
#--------------------------------------------------------------------------------------------------
def RecordVideo():
    print("Video process: started")
    with picamera.PiCamera() as camera:
        camera.resolution = (320, 320)
        camera.framerate = 10

        camera.start_recording('/dev/null', format='h264', motion_output="/dev/shm/motion_stream", quality=23)

        try:
            while True:
                camera.wait_recording(1.0)
        except KeyboardInterrupt:
            pass
        finally:            
            try:
                camera.stop_recording()
            except IOError:
                pass
    print("Video process: stopped")

#---------------------------------------------------------------------------------------------------
# Check if I am the video process            
#---------------------------------------------------------------------------------------------------
if len(sys.argv) > 1 and sys.argv[1] == "video":
    RecordVideo()
    sys.exit()

#---------------------------------------------------------------------------------------------------
# Setup a shared memory based data stream for the PiCamera video motion output
#---------------------------------------------------------------------------------------------------
os.mkfifo("/dev/shm/motion_stream")

#---------------------------------------------------------------------------------------------------
# Start up the video camera as a new process. Run it in its own process group so that Ctrl-C doesn't
# get through.
#---------------------------------------------------------------------------------------------------
def Daemonize():
    os.setpgrp()

video = subprocess.Popen(["python", "motion.py", "video"], preexec_fn=Daemonize)

#---------------------------------------------------------------------------------------------------
# Off we go; set up the format for parsing a frame of macro blocks
#---------------------------------------------------------------------------------------------------
format = '=' + 'bbH' * int(1680 / 4)

#---------------------------------------------------------------------------------------------------
# Wait until we can open the FIFO from the video process
#---------------------------------------------------------------------------------------------------
camera_installed = True
read_list = []
write_list = []
exception_list = []

if camera_installed:
    while True:
        try:
            py_fifo = open("/dev/shm/motion_stream", "rb")
        except IOError:
            continue
        else:
            break
    print("Main process: fifo opened")
    read_list = [py_fifo]


motion_data = open("motion_data.csv", "w")
motion_data.write("time, idx, idy, adx, ady, sad\n")

total_bytes = 0
frame_rate = 10
scale = 10000

#---------------------------------------------------------------------------------------------------
# Per frame distance and velocity increments
#---------------------------------------------------------------------------------------------------
ivx = 0.0
ivy = 0.0
idx = 0.0
idy = 0.0

#---------------------------------------------------------------------------------------------------
# Per flight absolute distance and velocity integrals
#---------------------------------------------------------------------------------------------------
avx = 0.0
avy = 0.0
adx = 0.0
ady = 0.0

start_time = time.time()

try:
    while True:
        #-------------------------------------------------------------------------------------------
        # The sleep time for select is defined by how many batches of data are sitting in the FIFO
        # compared to how many we want to process per motion processing (samples per motion)
        #
        # timeout = (samples_per_motion - mpu6050.numFIFOSamles()) / sampling_rate
        # check for negative timeout.
        #-------------------------------------------------------------------------------------------

        #-------------------------------------------------------------------------------------------
        # Wait for the next whole frame
        #-------------------------------------------------------------------------------------------
        read_out, write_out, exception_out = select.select(read_list, write_list, exception_list)

        if camera_installed and len(read_out) != 0:
            #---------------------------------------------------------------------------------------
            # We have new data on the video FIFO; get it, and make sure it really is a whole frame of 
            # 21 columns x 20 rows x 4 bytes for macro block.
            #---------------------------------------------------------------------------------------
            frame = py_fifo.read(1680)
            if len(frame) != 1680:
                print("ERROR: incomplete frame received")
                break

            #---------------------------------------------------------------------------------------
            # Convert to byte, byte, ushort of x, y, sad
            #---------------------------------------------------------------------------------------
            iframe = struct.unpack(format, frame)

            #---------------------------------------------------------------------------------------
            # Iterate through the 21 x 20 macro-blocks, summing the X and Y vectors across the frame
            # along with the SAD (sum of absolute differences - lower means a better match).
            #---------------------------------------------------------------------------------------
            ivx = 0.0
            ivy = 0.0
            sad = 0

            for ii in range(0, 420 * 3, 3):
                ivy += iframe[ii]
                ivx += iframe[ii + 1]
                sad += iframe[ii + 2]

            #---------------------------------------------------------------------------------------
            # Scale the macro block values to the speed increment in meters per second
            #---------------------------------------------------------------------------------------
            ivx /= scale
            ivy /= scale     

            #---------------------------------------------------------------------------------------
            # Use the frame rate to convert velocity increment to distance increment
            #---------------------------------------------------------------------------------------
            idt = 1 / frame_rate
            idx = ivx * idt
            idy = ivy * idt

            #---------------------------------------------------------------------------------------
            # Integrate (sum due to fixed frame rate) the increments to produce total distance and velocity.
            # Note that when producing the diagnostic earth frame distance, it's the increment in distance
            # that's rotated and added to the total ed*
            #---------------------------------------------------------------------------------------
            avx += ivx
            avy += ivy
            adx += idx
            ady += idy

            time_stamp = time.time() - start_time

            motion_data.write("%f, %f, %f, %f, %f, %d\n" % (time_stamp, idx, idy, adx, ady, sad))

except:
    #-----------------------------------------------------------------------------------------------
    # Ctrl-C is the expected way to end the flight; catch everything so the cleanup below always runs.
    #-----------------------------------------------------------------------------------------------
    pass


#---------------------------------------------------------------------------------------------------
# Stop the video process
#---------------------------------------------------------------------------------------------------
video.send_signal(signal.SIGINT)

motion_data.flush()
motion_data.close()

py_fifo.close()
os.unlink("/dev/shm/motion_stream")


Crude camera calibration

Based upon the walk graphed in the previous post, the measured distances in the garden, and a crude guesstimate that I was carrying the test rig about 1m off the ground as I walked the walk, the macro-block output is about 87,500 pixels per meter height per meter distance, or

horizontal distance = height * macro-block output / 87500

For the moment, that’s more than accurate enough.
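The calibration above can be sketched as a one-liner; the function name and constant name are mine, with the 87,500 figure taken straight from the crude garden walk:

```python
# Crude calibration constant from the garden-walk estimate: macro-block
# output per meter of height per meter of horizontal distance.
PIXELS_PER_METER_PER_METER = 87500.0

def macro_block_to_distance(block_output, height_m):
    """Convert summed macro-block output to horizontal distance in meters,
    given the height above the ground the camera is looking down from."""
    return height_m * block_output / PIXELS_PER_METER_PER_METER

# e.g. at 1m height, a macro-block output of 8750 is roughly 0.1m of travel
```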

A few things left to do before this code can be used in Hermione

  1. Currently, the video and macro-block processing run in separate threads, connected by a shared memory FIFO; ultimately this needs splitting into two processes which could then run on separate CPUs should a multi-core A3 appear.
  2. There’s some work to integrate the select.select() waiting on the FIFO into the main quadcopter scheduling loop – it replaces the current time.sleep() with listening on the FIFO with a timeout of the time.sleep() value.
  3. At the gory details level, I need to make sure the video and IMU X and Y axes are aligned when the new PCBs arrive.

Shouldn’t take too long except for the minor detail I’m off to DisneyLand Paris next week 🙁

Better macro-block tracking

With a more careful test this morning, here’s what I got.

Video macro-block tracking

This is at least as good as the PX4FLOW, so that’s now shelved, and testing will continue with the Raspberry Pi Camera.  I upgraded the camera to the new version for this test, as that’s what I’ll be installing on Hermione.

The cross of the diagonal and the vertical should have happened lower down, so that the diagonals to the right of the vertical overlapped – these are the exit from and re-entry to the house from the garden.  There are multiple possible reasons for this, and because this is now my code, I can experiment to resolve the offset; something I simply couldn’t do with the PX4FLOW and its offsets.

Next step is to sort out the units, including which direction the camera X and Y are facing in comparison with how Hermione’s X and Y from the accelerometer and gyro are facing.

What’s better than a PX4FLOW?

The Raspberry Pi camera of course:

RaspiCam Video motion blocks

A very similar walk around the garden as before, but running the Raspberry Pi camera, ground facing, videoing at 10 frames per second at 320 x 320 resolution, producing 16 x 16 pixel macro-blocks which are averaged per frame and logged.

The macro blocks give the pixel shift between one frame and the next to help with the frame compression; I’m not sure whether the units are in pixels or macro blocks, but that’s simple to resolve.  Combined with the height from the LEDDAR, and the focal length of the lens, it’ll be trivial to convert these readings to a distance in meters.
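Assuming a simple pinhole-camera model, the conversion looks something like this; the function name, and the focal length expressed in pixel units, are my assumptions rather than measured values:

```python
def pixel_shift_to_meters(shift_pixels, height_m, focal_length_pixels):
    # A shift of p pixels between frames, seen from height h, corresponds
    # to a ground distance of roughly p * h / f under the pinhole model,
    # where f is the lens focal length expressed in pixel units.
    return shift_pixels * height_m / focal_length_pixels

# e.g. a 10-pixel shift at 1m height with a 250-pixel focal length
# (an illustrative value, not the RaspiCam's actual one) is 0.04m
```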

The results here are at least as good as the PX4FLOW, if not better, and the processing of the macro-blocks to distance is very lightweight.

This is definitely worth pursuing as it’s much more in keeping with how I want this to work.  The PX4FLOW has served its purpose well: understanding how it worked opened up how it could be replaced with the RPi Camera.

There are further bonuses too: because the video runs at a fixed frame rate, the macro-blocks produce distance increments, whereas the PX4FLOW only produced velocities; that means I can add horizontal distance PIDs to kill drift and ensure the quad always hovers over the same spot.  Even better, I’m no longer gated on the arrival of the new PCBs: those were required for X8 and I2C for the PX4FLOW; I’ll need them eventually for X8, but for now the current PCB provides everything I need.
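A horizontal distance PID of the kind described above could be sketched like this; this is a minimal textbook PID for illustration, not the quadcopter’s actual controller:

```python
class PID:
    """Minimal PID - a sketch of the horizontal distance PIDs described
    above, with the gains left as placeholders to be tuned in flight."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = None

    def update(self, target, actual, dt):
        error = target - actual
        self.integral += error * dt
        derivative = 0.0 if self.last_error is None else (error - self.last_error) / dt
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# With the camera's integrated distance (adx) as input and a target of zero,
# drift produces a correction pushing back towards the takeoff spot:
#   correction = x_pid.update(0.0, adx, 0.1)  # 0.1s = one frame at 10fps
```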

We are most definitely on a roll again!

Progress report

Here’s Chloe’s HoG in Hermione’s frame.  You can see I’ve put a large WiFi antenna on one of the side platforms; the other is for GPS in the future.  The frame itself is not quite complete – I still need to install a platform on the underside to hang the sensors off.  In addition, the LID (LiDAR Installation Desktop) needs assembling – it’s just here for show currently.

Chloe’s HoG in Hermione’s frame

Here’s a flight with just accelerometer and gyro in control for basic stability testing.

With these 1340 high-pitch Chinese CF props, there’s no shortage of lift power despite the fact she weighs 2.8kg, so I’m going to defer the X8 format for a while on financial grounds – 4 new T-motor U3 motors and 40A ESCs cost nearly £400.

The PCBs are on order, and first setup will be for LEDDAR and PX4FLOW.

Oddly, only one of my PX4FLOWs works properly – for some reason, the newer one can’t see the URF, so it can only provide angular shift rates, not velocities.  However, the LEDDAR will give me the height, allowing me to convert the angular rates to horizontal velocities.  If that works, it also opens up the possibility of replacing the PX4FLOW with a RaspiCam, using the H.264 video macro-block increments to do the equivalent of the PX4FLOW’s motion processing myself – which, if possible, would please me greatly.
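The angular-rate-to-velocity conversion is small-angle trigonometry; a sketch, with the function name being mine:

```python
def angular_rate_to_velocity(rate_rad_per_s, height_m):
    # For a ground-facing sensor, a small angular shift rate (radians per
    # second) seen from height h corresponds to a horizontal velocity of
    # roughly h * rate - the small-angle approximation of h * tan(rate).
    return height_m * rate_rad_per_s

# e.g. 0.2 rad/s seen from 1.5m is about 0.3 m/s of horizontal motion
```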

Still lurking in the background is getting the compass working to overcome the huge yaw you can see in the video.