Live Cold Fusion Test

I took Zoe outside (temperature 0°C – freezing point) to fly this morning with fusion running to see what its effect was – the fused output wasn't used to control the flights; it was logged purely to compare and contrast the two independent sensor sources.

Fusion was a complementary filter with the -3dB crossover set to 1s.
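
For reference, here's a minimal sketch of that kind of complementary filter – not the flight code itself, and treating the 1s crossover figure as the filter time constant is my own simplification:

class ComplementaryFilter:
    """Fuse an absolute but noisy/laggy source (LiDAR / camera) with a
    responsive but drifting one (integrated accelerometer)."""

    def __init__(self, tau = 1.0):
        self.tau = tau          # crossover time constant in seconds (assumed 1s here)
        self.fused = 0.0

    def update(self, absolute_value, integrated_increment, dt):
        # Short term, follow the integrated accelerometer increment; long term,
        # converge on the absolute sensor reading.
        alpha = self.tau / (self.tau + dt)
        self.fused = alpha * (self.fused + integrated_increment) + (1.0 - alpha) * absolute_value
        return self.fused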

In all graphs,

  • blue comes from the Garmin LiDAR (Z axis) or Camera (X and Y axes)
  • orange comes from the accelerometer, with the necessary subtraction of gravity and integration.
  • grey is the fused value – orange dominates in the short term, fused with blue over the longer term.

Fusion

In general, the shapes match closely, but there are some oddities I need to understand better:

  • horizontal velocity from the Camera is very spiky – the average is right, and the fusion is hiding the spikes well – I’m assuming the problem is my code coping with the change of tilt of the camera compared to the ground.
  • the vertical height is wrong – at 3 seconds, Zoe should be at 90cm and leveling out.

I need to continue dissecting these stats – more anon.

Testing distances

I took Hermione out for a quick flight again in X8 format with her monstrous new LiPo; the I2C I/O error kicked in just before 3.5 seconds, but not before she’d just acquired hover state.  The test was driven by just the IMU, but with the Camera and Garmin LiDAR-Lite V3 tracking lateral and vertical distances, and I’m pretty impressed.  Compare and contrast these two graphs:

Distance tracking

Distance tracking

The Garmin height is accurate – the scale difference is due to the accelerometer reading just over 1g on the ground, so based on that and the double integration to get distance, Hermione thinks she’s higher than she is.
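
To put a number on that: a small residual offset double-integrates into a surprisingly large height error over a few seconds. The 1% offset below is purely illustrative; only the "just over 1g" observation comes from the flight data.

G = 9.80665            # m/s^2
bias = 0.01 * G        # illustrative: accelerometer reads 1.01g when stationary
t = 3.5                # roughly the flight length in seconds

# Double integration of a constant bias gives an error of 0.5 * bias * t^2
height_error = 0.5 * bias * t * t
print("height overestimate after %.1fs: %.2fm" % (t, height_error))   # ~0.60m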

The Camera horizontal is pretty good too – I probably just need to turn the gain down a touch.

This bodes well for tomorrow when the capacitors arrive (thanks for the nudge in the right direction, Jean) which should decouple the Garmin power-lines from the I2C lines, and thereby (hopefully) kill the I2C errors.

3 meter square

I added some filtering to the macroblock vector output, including only those vectors with individual SAD values less than half the overall SAD average for that frame.  I then took the rig for a walk around a 3m square (measured fairly accurately), held approximately 1m above the ground.  The graph goes anti-clockwise horizontally from 0,0.  The height probably descended a little towards the end, hence the overrun from >1000 to <0 on the vertical axis.
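
A sketch of that SAD filtering – the per-frame data layout (a list of per-block (x, y, sad) tuples) is illustrative rather than the exact structures in my code:

def filter_and_average(frame):
    # Average only those macroblock vectors whose SAD is less than half
    # the frame-wide average SAD.
    average_sad = sum(sad for _, _, sad in frame) / float(len(frame))
    good = [(x, y) for x, y, sad in frame if sad < 0.5 * average_sad]
    if not good:
        return 0.0, 0.0
    return (sum(x for x, _ in good) / float(len(good)),
            sum(y for _, y in good) / float(len(good)))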

3m square walk

This is pretty much perfect, and more than good enough to work out the scale from macroblock vectors to meters:

1000 (vectors) = 3 (meters) @ 1m height or

meters = (0.003 * height) vectors

Previously, I'd tried to calculate the macroblock-vector-to-meter scale as follows, where macroblocks per frame is the frame width in pixels divided by the macroblock size:

meters = (2 * height * tan(lens angle / 2) / macroblocks per frame) vectors 
meters = (2 * height * tan(48.4 / 2) / (400 / 16)) vectors
meters = (0.036 * height) vectors

Clearly my mathematical guess was wrong by a factor of roughly 12.  The measurement errors for the test run were < 3% horizontally, and perhaps 5% vertically so they don’t explain the discrepancy.

I needed to investigate further how the vector values scale per frame size.  So I upped the video frame from 400 x 400 pixels to 600 x 600 pixels:

(3m @ 600 pixels)²

The 1000 x 1000 vector square has now turned nicely into a 1500 x 1500 vector square corresponding to the 400 x 400 to 600 x 600 frame size change.  So again,

1500 (vectors) = 3 (meters) @ 1m height or

meters = (0.002 * height) vectors

So unsurprisingly, the macroblock vectors scale in proportion to the frame size, which makes the meters-per-vector conversion inversely proportional to the frame width:

meters = (scale * height / frame width) * vectors
meters = (1.2 * height / frame width) * vectors
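
Wrapped up as a function, the empirical scale gives the same 3m answer for both frame sizes – a quick sanity check rather than flight code:

def vectors_to_meters(vectors, height, frame_width = 400):
    # Empirically: 1.2 * height / frame_width meters per unit of macroblock vector
    return 1.2 * height / frame_width * vectors

print(vectors_to_meters(1000, 1.0))        # 400 pixel frame, 1m height -> 3.0m
print(vectors_to_meters(1500, 1.0, 600))   # 600 pixel frame, 1m height -> 3.0m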

But what does this 1.2 represent?  I've done some thinking, to no avail so far, but I think I've found a way around needing to know.  That'll be my next post.

Goal!

The paint had hardly dried on the raspivid solution for unbuffered macro-block streaming before a conversation on the Raspberry Pi forums yielded a way to get picamera to stream the video macro-blocks without buffering: explicitly open an unbuffered output file yourself rather than leaving it to picamera to second-guess what's needed.
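
The gist of the change, as I read the forum suggestion – picamera accepts any file-like object for motion_output, and the FIFO path is just the one my code uses elsewhere, so treat this as a sketch rather than the final flight code:

import picamera

with picamera.PiCamera() as camera:
    camera.resolution = (320, 320)
    camera.framerate = 10

    # Open the output ourselves, unbuffered, so each frame's macro-blocks are
    # written through immediately instead of sitting in a stdio buffer.
    motion_fifo = open("/dev/shm/motion_stream", "wb", 0)
    camera.start_recording("/dev/null", format = "h264",
                           motion_output = motion_fifo, quality = 23)
    try:
        while True:
            camera.wait_recording(1.0)
    finally:
        camera.stop_recording()
        motion_fifo.close()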

Cutting to the chase, here’s the result.

Down-facing RaspiCam stabilized Zoe flight from Andy Baker on Vimeo.

Yes, she drifted forward into the football goal over the course of the 20 seconds, but that's 4 times longer to drift that far, so frankly I'm stunned how good the flight was!  I have ideas as to why, and those will be my next post.

And the best bit? No damage was done to my son's goal net!

Zoe the videographer

Zoe the videographer

You can just see the brown video cable looping to the underside of the frame where the camera is attached.

She's standing on a perspex box as part of some experimentation into why this happens:

Lazy video feed

It's taking at least 4.5 seconds before the video feed kicks in (if at all).  Here the video data is only logged.  What's plotted is the cumulative distance; what's shown is accurate in time and space, but I need to investigate further why there's a delay.  It's definitely not to do with starting up the camera video process – I already have prints showing when it starts and stops, and those happen at the right time; it's either related to the camera processing itself, or to how the FIFO works.  More anon as I test my ideas.

Hell freezes over

Back from DisneyLand where it was 35°C in the shade.  It actually turned out to be fun, even cool at times, and gave me plenty of thinking time, the net result of which is that I've changed the main scheduling loop, which now:

  • polls the IMU FIFO to check how many batches of sensor data are queued up there; the motion processing runs every 10 batches
  • if the IMU FIFO has fewer than 10 batches, select.select() is called, listening on the OS FIFO of the camera macro-block collection process; the timeout for the select.select() is based upon the IMU sampling rate and the number of IMU FIFO batches still needed to reach 10.
  • The select.select() wakes either because
    • the timeout has expired, by which point there are >=10 batches of IMU FIFO data present, triggering motion processing, or
    • there are macro-block data on the OS FIFO, which update the lateral PID distance and velocity inputs.

Even without the camera in use, this improves the scheduling because now the motion processing happens every 10 batches of IMU data, and it doesn’t use time.sleep() whose timing resulted in significant variation in the number of IMU FIFO batches triggering motion processing.
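
In outline, the new loop looks something like this – numFIFOBatches(), readFIFO() and the two callbacks are stand-ins for the real flight-code hooks, so it's a sketch of the scheduling rather than the actual code:

import select

SAMPLES_PER_MOTION = 10

def scheduling_loop(mpu6050, camera_fifo, sampling_rate, process_motion, update_lateral_pids):
    while True:
        batches = mpu6050.numFIFOBatches()
        if batches >= SAMPLES_PER_MOTION:
            # Enough IMU data queued: run the motion processing now.
            process_motion(mpu6050.readFIFO(batches))
            continue

        # Otherwise sleep on the camera macro-block FIFO, but only for as long
        # as the IMU FIFO needs to fill up to 10 batches.
        timeout = (SAMPLES_PER_MOTION - batches) / float(sampling_rate)
        readable, _, _ = select.select([camera_fifo], [], [], timeout)
        if readable:
            # A macro-block frame arrived: update the lateral PID distance / velocity inputs.
            update_lateral_pids(camera_fifo.read(1680))
        # On a timeout, loop round and find >= 10 IMU batches waiting.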

I’m taking this integration carefully step by step because an error could lead to disastrous, hard to diagnose behaviour.  Currently the camera FIFO results are not integrated with the motion processing, but instead are just logged.  I hope during the next few days I can get this all integrated.

Note that due to some delivery problems, this is all being carried out on Zoe with her version 2 PiZero.


Update: Initial testing suggests a priority problem: motion processing is now taking nearly 10ms, which means the code doesn't reach the select.select() call, but instead simply loops on motion processing. This means that when the OS FIFO of macro-blocks finally gets read, there are possibly several sets backed up and out of date. I'll change the scheduling to prioritize reading the OS FIFO and allow the IMU FIFO to accumulate more samples.

DisneyLand Paris :-(

We're off on holiday tomorrow, so I'm leaving myself this note to record the state of play: the new A+ (512MB RAM, overclocked to 1GHz) is set up with Chloe's SD card running March Jessie, renamed Hermione.  New PCBs have arrived and one is made up.  The new PCB is installed and has passed basic testing of I2C and PWM.

To do:

  • install LEDDAR and Pi Camera onto the underside
  • update python picamera to the new version
  • test motion.py on hermione
  • merge motion.py with X8.py
  • work out why udhcpd still doesn’t work on May Jessie-lite (different SD card)

Camera motion tracking code

I’ve reworked the code so that the video is collected in a daemonized process and fed into a shared memory FIFO. The main code for processing the camera output is now organised to be easily integrated into the quadcopter code. This’ll happen for Hermione once I’ve built her with a new PCB and 512MB memory A+. I suspect the processing overhead of this is very light, given that most of the video processing happens in the GPU, and the motion processing is some very simple averaging.

#!/usr/bin/python
from __future__ import division
import os
import sys
import picamera
import select
import struct
import subprocess
import signal
import time

####################################################################################################
#
# Motion.py - motion tracker based upon video-frame macro-blocks.  Note this happens per flight - 
#             nothing is carried between flights when this is merged with the quadcopter code.
#
####################################################################################################

#--------------------------------------------------------------------------------------------------
# Video at 10fps. Each frame is 320 x 320 pixels.  Each macro-block is 16 x 16 pixels.  Due to an 
# extra column of macro-blocks (dunno why), that means each frame breaks down into 21 columns by 
# 20 rows = 420 macro-blocks, each of which is 4 bytes - 1 signed byte X, 1 signed byte Y and 2 unsigned
# bytes SAD (sum of absolute differences). 
#--------------------------------------------------------------------------------------------------
def RecordVideo():
    print "Video process: started"
    with picamera.PiCamera() as camera:
        camera.resolution = (320, 320)
        camera.framerate = 10

        camera.start_recording('/dev/null', format='h264', motion_output="/dev/shm/motion_stream", quality=23)

        try:
            while True:
                camera.wait_recording(1.0)
        except KeyboardInterrupt:
            pass
        finally:            
            try:
                camera.stop_recording()
            except IOError:
                pass
    print "Video process: stopped"

#---------------------------------------------------------------------------------------------------
# Check if I am the video process            
#---------------------------------------------------------------------------------------------------
if len(sys.argv) > 1 and sys.argv[1] == "video":
    RecordVideo()
    sys.exit()

#---------------------------------------------------------------------------------------------------
# Setup a shared memory based data stream for the PiCamera video motion output
#---------------------------------------------------------------------------------------------------
os.mkfifo("/dev/shm/motion_stream")

#---------------------------------------------------------------------------------------------------
# Start up the video camera as a new process. Run it in its own process group so that Ctrl-C doesn't
# get through.
#---------------------------------------------------------------------------------------------------
def Daemonize():
    os.setpgrp()
video = subprocess.Popen(["python", "motion.py", "video"], preexec_fn =  Daemonize)

#---------------------------------------------------------------------------------------------------
# Off we go; set up the format for parsing a frame of macro blocks
#---------------------------------------------------------------------------------------------------
format = '=' + 'bbH' * int(1680 / 4)

#---------------------------------------------------------------------------------------------------
# Wait until we can open the FIFO from the video process
#---------------------------------------------------------------------------------------------------
camera_installed = True
read_list = []
write_list = []
exception_list = []

if camera_installed:
    while True:
        try:
            py_fifo = open("/dev/shm/motion_stream", "rb")
        except:
            continue
        else:
            break
    print "Main process: fifo opened"
    read_list = [py_fifo]


motion_data = open("motion_data.csv", "w")
motion_data.write("time, idx, idy, adx, ady, sad\n")

total_bytes = 0
frame_rate = 10
scale = 10000

#---------------------------------------------------------------------------------------------------
# Per frame distance and velocity increments
#---------------------------------------------------------------------------------------------------
ivx = 0.0
ivy = 0.0
idx = 0.0
idy = 0.0

#---------------------------------------------------------------------------------------------------
# Per flight absolute distance and velocity integrals
#---------------------------------------------------------------------------------------------------
avx = 0.0
avy = 0.0
adx = 0.0
ady = 0.0

start_time = time.time()

try:
    while True:
        #-------------------------------------------------------------------------------------------
        # The sleep time for select is defined by how many batches of data are sitting in the FIFO
        # compared to how many we want to process per motion processing (samples per motion)
        #
        # timeout = (samples_per_motion - mpu6050.numFIFOSamples()) / sampling_rate
        # check for negative timeout.
        #-------------------------------------------------------------------------------------------

        #-------------------------------------------------------------------------------------------
        # Wait for the next whole frame
        #-------------------------------------------------------------------------------------------
        read_out, write_out, exception_out = select.select(read_list, write_list, exception_list)

        if camera_installed and len(read_out) != 0:
            #---------------------------------------------------------------------------------------
            # We have new data on the video FIFO; get it, and make sure it really is a whole frame of 
            # 21 columns x 20 rows x 4 bytes for macro block.
            #---------------------------------------------------------------------------------------
            frame = py_fifo.read(1680)
            if len(frame) != 1680:
                print "ERROR: incomplete frame received"
                break

            #---------------------------------------------------------------------------------------
            # Convert to byte, byte, ushort of x, y, sad
            #---------------------------------------------------------------------------------------
            iframe = struct.unpack(format, frame)

            #---------------------------------------------------------------------------------------
            # Iterate through the 21 x 20 macro blocks averaging the X and Y vectors of the frame based
            # upon the SAD (sum of absolute differences, lower is better).  
            #---------------------------------------------------------------------------------------
            ivx = 0.0
            ivy = 0.0
            sad = 0

            for ii in range(0, 420 * 3, 3):
                ivy += iframe[ii]
                ivx += iframe[ii + 1]
                sad += iframe[ii + 2]

            #---------------------------------------------------------------------------------------
            # Scale the macro block values to the speed increment in meters per second
            #---------------------------------------------------------------------------------------
            ivx /= scale
            ivy /= scale     

            #---------------------------------------------------------------------------------------
            # Use the frame rate to convert velocity increment to distance increment
            #---------------------------------------------------------------------------------------
            idt = 1 / frame_rate
            idx = ivx * idt
            idy = ivy * idt

            #---------------------------------------------------------------------------------------
            # Integrate (sum due to fixed frame rate) the increments to produce total distance and velocity.
            # Note that when producing the diagnostic earth frame distance, it's the increment in distance
            # that's rotated and added to the total ed*
            #---------------------------------------------------------------------------------------
            avx += ivx
            avy += ivy
            adx += idx
            ady += idy

            time_stamp = time.time() - start_time

            motion_data.write("%f, %f, %f, %f, %f, %d\n" % (time_stamp, idx, idy, adx, ady, sad))

except:
    pass


#---------------------------------------------------------------------------------------------------
# Stop the video process
#---------------------------------------------------------------------------------------------------
video.send_signal(signal.SIGINT)

motion_data.flush()
motion_data.close()

py_fifo.close()
os.unlink("/dev/shm/motion_stream")

Kitty++

I mentioned a year ago using the Raspberry Pi Camera to provide motion data; this motion data is a by-product of the compression of h.264 video frames.

Each frame in a video is broken up into 16 x 16 pixel blocks (macro-blocks) which are compared to the blocks from the previous frame to find the nearest matches.  An X, Y vector is then produced for each macro-block pointing to the best matching block in the previous frame, along with a sum of absolute differences (SAD) – essentially a score of the trustworthiness of the vector.  This happens at the frame rate of the video as part of the h.264 video compression algorithm, and is produced by the GPU alone.

Because of the fixed frame rate, these are effectively velocity vectors, and because the camera would be strapped under the quadcopter body, the velocities are in the quadcopter reference frame.  A crude processing of this data would simply be to average all the X, Y vectors based upon their SAD values to come up with a best-guess estimate of an overall linear vector per frame.  Better processing would need to accommodate yaw too.
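
A crude version of that averaging might look like the following – the inverse-SAD weighting is just one illustrative way of "basing it on the SAD values", not a committed design:

def average_frame(macroblocks):
    # macroblocks: iterable of (x, y, sad) tuples, one per 16x16 block in the frame.
    # Returns a single best-guess (x, y) vector, trusting low-SAD blocks more.
    sum_x = sum_y = sum_w = 0.0
    for x, y, sad in macroblocks:
        weight = 1.0 / (sad + 1)     # +1 avoids division by zero on a perfect match
        sum_x += weight * x
        sum_y += weight * y
        sum_w += weight
    if sum_w == 0.0:
        return 0.0, 0.0
    return sum_x / sum_w, sum_y / sum_w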

All this functionality is now available from the Python picamera interface, making it very easy to create a motion tracking process which feeds the macro-block vectors + SAD data to the HoG code, which can do the averaging and produce the quad-frame velocity PID inputs for the X and Y axes.  The velocity PID targets are set, as now, to be earth-frame speeds reorientated to the quad-frame.

Advantages

  • frame speed and processing on the GPU
  • no integration of the accelerometer (after subtraction of gravity rotated to the quad-frame) to get velocity, hence no cumulative error from offsets

Disadvantages

  • need some height information – possibly just best estimate as now

This opens up the possibility of indoor and outdoor motion processing over varying contrast surfaces, or laser tracking over surfaces without significant contrast difference.

First step is to adapt the existing Kitty code to churn out these motion vectors and stream them to a dummy module to produce the resultant velocity PID inputs.  More thoughts anon.

Kitty’s measuring up

Did some simple testing to determine the camera angle, resolution and suitable dual laser dot separation.  Kitty was sat on a stool, looking up at the ceiling, and I was waving the laser around the ceiling watching the curses display.  The picture below is upside down!

           U______________
           ∧      ^      ^            
          /|\   21cm     |
         / | \____V      |
        /  |  \        205cm 
       / α |   \         |
      /    |    \        |
     /     |     \       |
    /______|______\______V
   |<----170cm---->|

U = camera position
Span of kitty laser dot detection = 170 x 170cm
Test height (camera to ceiling) = 205cm
Alpha = atan(170 / (2 x 205)) = 22.5°

In addition…

Camera height at takeoff = 21cm
∴Camera span at takeoff = 17.4 x 17.4cm
Image size = 32 x 32 pixel
∴Image resolution at takeoff = 0.5 x 0.5cm
Landing leg separation = 23cm

Based on the above, 2 dots spaced by 15cm would make a good laser leash

Image span at 1m hover = 82 x 82cm
Double dot separation viewed from 1m = 15cm / 82cm x 32 pixels ≅ 6 pixels

So minimum capture resolution of 32 x 32 pixels should be fine for 1m hover, but by 2m, the risk of merged dots is too high as they’ll only be 3 dots apart.