Goal!

The paint had hardly dried on the raspivid solution for unbuffered macro-block streaming before a conversation on the Raspberry Pi forums yielded a way to get picamera to stream the video macro-blocks without buffering: explicitly open an unbuffered output file rather than leaving picamera to second-guess what’s needed.

Cutting to the chase, here’s the result.

Down-facing RaspiCam stabilized Zoe flight from Andy Baker on Vimeo.

Yes, she drifted forward into the football goal over the course of the 20 seconds, but that’s four times longer to drift that far than before, so frankly I’m stunned how good the flight was!  I have ideas as to why, and those are the subject of my next post.

And the best bit? No damage was done to my son’s goal net!


Motion tracking is now working

Courtesy of a discussion with 6by9 on the Raspberry Pi forum, the motion tracking for Zoe is now working using raspivid, which has an option not to buffer the macro-block output.  Here’s a plot of a passive flight where she moves forward and back (X) a couple of times, and then left and right (Y).

Macro-block vs accelerometer

The plot clearly shows the macro-blocks and accelerometer X and Y readings are in sync (if not quite at the same scale), so tomorrow’s the day to set Zoe loose in the garden with the motors powered up – fingers crossed no cartwheels across the lawn this time!

Video buffering

I flew Zoe over the weekend with the camera motion code running, and it was pretty exciting watching her cartwheel across the lawn – clearly there’s more testing to do before I try again!

So I did a passive flight in the ‘lab’ just now and got these stats; I had hoped to show a comparison of the accelerometer measured distances vs the camera video distances, but that’s not what I got:

Motion stats

The lower 2 graphs are the interesting ones: the left one shows how bad the integration error from the accelerometer is – the details in the accelerometer data are swamped by integrated offset errors.  It also shows that we are only getting data from the video every four seconds.

So I did a test with my raw motion code (i.e. no overhead from the other sensors in the quadcopter code), and it showed those 4-second batches contain 39 samples, so clearly there’s some buffering of the video frames, which are configured at a 10Hz frame rate.

So next step is to work out how to identify whether it’s the FIFO or the video that’s doing the buffering, and how to stop it!
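One back-of-the-envelope check, which is pure speculation on my part: 64 KiB is the default Linux pipe capacity (and a common buffer size in general), and the arithmetic below shows that 39 frames of 1680-byte macro-block data is just under 64 KiB, while a 40th frame would overflow it – suspiciously consistent with the 39-sample, ~4-second batches:

```python
# Speculative arithmetic: could a 64 KiB buffer (the default Linux pipe
# capacity - an assumed culprit, not a confirmed one) explain the batching?
FRAME_BYTES = 1680          # 21 x 20 macro-blocks x 4 bytes each
BUFFER_BYTES = 64 * 1024    # assumed buffer size

frames_per_buffer = BUFFER_BYTES // FRAME_BYTES
print(frames_per_buffer)           # 39 whole frames fit - matching the 39 samples per batch
print(frames_per_buffer / 10.0)    # 3.9 seconds at 10 fps - matching the ~4 second gaps
```

If that guess is right, the culprit is whichever layer holds a 64 KiB buffer between the camera and my reader – which is exactly the FIFO-vs-video question to answer.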


Camera tracking is working, honest!

Working camera tracking

These two show the motion tracking results, and the motion processing loop intervals.

Each spike in the dt (time interval) graph happens at about 10Hz (the video frame rate) and shows the camera data processing; this is only triggered when camera motion data is available, not simply on a 10Hz timer.  As you can see, it’s there from the start.  The horizontal units here are the number of motion processing loops, i.e. the spikes start right from the word go, not after 4 seconds as the top graph suggests.

The silence in the top graph’s first 4 seconds thus suggests the camera macro-blocks simply aren’t detecting motion at that point.  The flight plan is a 2-second take-off, 12-second hover and 2-second descent, but I had to abort after 5 seconds due to instability: Zoe was flapping around both pitch (blue) and roll (orange).  The instability grows during the flight, so the 4-second point is likely where the low-resolution camera macro-blocks actually start to see it.

Currently, the camera motion processing is still not fed into the PIDs, so the flapping is not caused by the camera data.  Clearly the next step is to include it in the PIDs and see whether this actually works.

Finally, there’s one fly in the ointment: the IMU FIFO overflow triggers every other flight; this is almost certainly not due directly to the camera itself, but to how I’m managing the FIFO and its overflow interrupt.  My first couple of attempts to control this have failed, so I’ll have to keep stabbing in the dark.

Zoe the videographer

You can just see the brown video cable looping to the underside of the frame where the camera is attached.

She’s standing on a perspex box as part of some experiments into why this happens:

Lazy video feed

It’s taking at least 4.5 seconds before the video feed kicks in (if at all).  Here the video data is only logged; what’s plotted is the cumulative distance.  What’s shown is accurate in time and space, but I need to investigate further why the delay happens.  It’s definitely not the starting up of the camera video process – I already have prints showing when it starts and stops, and those happen at the right time; it’s either the camera processing itself, or how the FIFO works.  More anon as I test my ideas.

Hell freezes over

Back from DisneyLand, where it was 35°C in the shade.  It actually turned out to be fun, even cool at times, and gave me plenty of thinking time, the net result of which is that I’ve changed the main scheduling loop, which now:

  • polls the IMU FIFO to check how many batches of sensor data are queued up there; motion processing runs every 10 batches
  • if the IMU FIFO has fewer than 10 batches, calls select.select(), listening on the OS FIFO of the camera macro-block collection process; the select.select() timeout is based upon the IMU sampling rate and the number of further IMU FIFO batches needed to reach 10
  • the select.select() wakes either because:
    • there are now >=10 batches of IMU FIFO data present, triggering motion processing, or
    • there are macro-block data on the OS FIFO, which update the lateral PID distance and velocity inputs.
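The steps above can be sketched as below – everything here is a stand-in: a plain counter simulates the IMU FIFO fill level, an os.pipe() simulates the camera’s OS FIFO, and the sampling rate and batch sizes are placeholder figures, not Zoe’s real configuration:

```python
import os
import select

SAMPLES_PER_MOTION = 10   # run motion processing every 10 IMU batches
SAMPLING_RATE = 500.0     # assumed IMU sampling rate in Hz

camera_read_fd, camera_write_fd = os.pipe()
os.write(camera_write_fd, b"macro-block frame")   # pretend the camera wrote a frame

fake_imu_batches = 4      # pretend 4 batches are already queued in the IMU FIFO
motion_runs = 0
camera_updates = 0

for _ in range(3):        # a few iterations of the scheduling loop for illustration
    if fake_imu_batches >= SAMPLES_PER_MOTION:
        motion_runs += 1                      # motion processing consumes the batches
        fake_imu_batches -= SAMPLES_PER_MOTION
    else:
        # Sleep at most as long as the IMU FIFO takes to reach 10 batches...
        timeout = (SAMPLES_PER_MOTION - fake_imu_batches) / SAMPLING_RATE
        readable, _, _ = select.select([camera_read_fd], [], [], timeout)
        if readable:
            os.read(camera_read_fd, 4096)     # ...but wake early for camera data
            camera_updates += 1               # lateral PID distance / velocity update here
        fake_imu_batches += 6                 # pretend more IMU data arrived meanwhile

print(motion_runs, camera_updates)
```

The point of the design is that neither input is polled with time.sleep(): the IMU FIFO depth sets the timeout, and camera data interrupts the wait the moment it appears.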

Even without the camera in use, this improves the scheduling because motion processing now happens every 10 batches of IMU data, and it no longer uses time.sleep(), whose timing resulted in significant variation in the number of IMU FIFO batches triggering motion processing.

I’m taking this integration carefully step by step because an error could lead to disastrous, hard to diagnose behaviour.  Currently the camera FIFO results are not integrated with the motion processing, but instead are just logged.  I hope during the next few days I can get this all integrated.

Note that due to some delivery problems, this is all being carried out on Zoe with her version 2 PiZero.


Update: initial testing suggests a priority problem: motion processing now takes nearly 10ms, which means the code doesn’t reach the select.select() call, but instead simply loops on motion processing. This means that by the time the OS FIFO of macro-blocks finally gets read, there are possibly several sets queued up, backed up and out of date. I’ll change the scheduling to prioritize reading the OS FIFO and allow the IMU FIFO to accumulate more samples.

DisneyLand Paris :-(

We’re off on holiday tomorrow, so I’m leaving myself this note to record the state of play: the new A+ (512MB RAM, overclocked to 1GHz) is set up with Chloe’s SD card running March Jessie, renamed Hermione.  New PCBs have arrived and one is made up.  The new PCB is installed and has passed basic testing of I2C and PWM.

To do:

  • install LEDDAR and Pi Camera onto the underside
  • update python picamera to the new version
  • test motion.py on hermione
  • merge motion.py with X8.py
  • work out why udhcpd still doesn’t work on May Jessie-lite (different SD card)

Tyrannosaurus

Another walk up the side of the house, but then walking a square as best I could, finishing where I started; as you can see, the camera tracked this amazingly well – I’m particularly delighted the start and end points of the square are so close.  The units are pretty accurate too.

I’m now very keen for Hermione’s parts to arrive, as I suspect this is going to work like a dream, both stabilising long-term hover and allowing accurately traced flight plans with horizontal movement.  Very, very excited!

Shame about the trip to DisneyLand Paris next week – I’m not going to get everything done before then, which means Disney is going to be more of a frustrating, annoying waste of my time than usual!

Camera motion tracking code

I’ve reworked the code so that the video is collected in a daemonized process and fed into a shared memory FIFO. The main code for processing the camera output is now organised to be easily integrated into the quadcopter code. This’ll happen for Hermione once I’ve built her with a new PCB and 512MB memory A+. I suspect the processing overhead of this is very light, given that most of the video processing happens in the GPU, and the motion processing is some very simple averaging.

#!/usr/bin/python
from __future__ import division
import os
import sys
import picamera
import select
import struct
import subprocess
import signal
import time

####################################################################################################
#
# Motion.py - motion tracker based upon video-frame macro-blocks.  Note this happens per flight - 
#             nothing is carried between flights when this is merged with the quadcopter code.
#
####################################################################################################

#--------------------------------------------------------------------------------------------------
# Video at 10fps. Each frame is 320 x 320 pixels.  Each macro-block is 16 x 16 pixels.  Due to an 
# extra column of macro-blocks (dunno why), that means each frame breaks down into 21 columns by 
# 20 rows = 420 macro-blocks, each of which is 4 bytes - 1 signed byte X, 1 signed byte Y and 2 unsigned
# bytes SAD (sum of absolute differences). 
#--------------------------------------------------------------------------------------------------
def RecordVideo():
    print "Video process: started"
    with picamera.PiCamera() as camera:
        camera.resolution = (320, 320)
        camera.framerate = 10

        camera.start_recording('/dev/null', format='h264', motion_output="/dev/shm/motion_stream", quality=23)

        try:
            while True:
                camera.wait_recording(1.0)
        except KeyboardInterrupt:
            pass
        finally:            
            try:
                camera.stop_recording()
            except IOError:
                pass
    print "Video process: stopped"

#---------------------------------------------------------------------------------------------------
# Check if I am the video process            
#---------------------------------------------------------------------------------------------------
if len(sys.argv) > 1 and sys.argv[1] == "video":
    RecordVideo()
    sys.exit()

#---------------------------------------------------------------------------------------------------
# Setup a shared memory based data stream for the PiCamera video motion output
#---------------------------------------------------------------------------------------------------
os.mkfifo("/dev/shm/motion_stream")

#---------------------------------------------------------------------------------------------------
# Start up the video camera as a new process. Run it in its own process group so that Ctrl-C doesn't
# get through.
#---------------------------------------------------------------------------------------------------
def Daemonize():
    os.setpgrp()
video = subprocess.Popen(["python", "motion.py", "video"], preexec_fn =  Daemonize)

#---------------------------------------------------------------------------------------------------
# Off we go; set up the format for parsing a frame of macro blocks
#---------------------------------------------------------------------------------------------------
format = '=' + 'bbH' * int(1680 / 4)

#---------------------------------------------------------------------------------------------------
# Wait until we can open the FIFO from the video process
#---------------------------------------------------------------------------------------------------
camera_installed = True
read_list = []
write_list = []
exception_list = []

if camera_installed:
    while True:
        try:
            py_fifo = open("/dev/shm/motion_stream", "rb")
        except IOError:
            continue
        else:
            break
    print "Main process: fifo opened"
    read_list = [py_fifo]


motion_data = open("motion_data.csv", "w")
motion_data.write("time, idx, idy, adx, ady, sad\n")

total_bytes = 0
frame_rate = 10
scale = 10000

#---------------------------------------------------------------------------------------------------
# Per frame distance and velocity increments
#---------------------------------------------------------------------------------------------------
ivx = 0.0
ivy = 0.0
idx = 0.0
idy = 0.0

#---------------------------------------------------------------------------------------------------
# Per flight absolute distance and velocity integrals
#---------------------------------------------------------------------------------------------------
avx = 0.0
avy = 0.0
adx = 0.0
ady = 0.0

start_time = time.time()

try:
    while True:
        #-------------------------------------------------------------------------------------------
        # The sleep time for select is defined by how many batches of data are sitting in the FIFO
        # compared to how many we want to process per motion processing (samples per motion)
        #
        # timeout = (samples_per_motion - mpu6050.numFIFOSamples()) / sampling_rate
        # check for negative timeout.
        #-------------------------------------------------------------------------------------------

        #-------------------------------------------------------------------------------------------
        # Wait for the next whole frame
        #-------------------------------------------------------------------------------------------
        read_out, write_out, exception_out = select.select(read_list, write_list, exception_list)

        if camera_installed and len(read_out) != 0:
            #---------------------------------------------------------------------------------------
            # We have new data on the video FIFO; get it, and make sure it really is a whole frame of 
            # 21 columns x 20 rows x 4 bytes for macro block.
            #---------------------------------------------------------------------------------------
            frame = py_fifo.read(1680)
            if len(frame) != 1680:
                print "ERROR: incomplete frame received"
                break

            #---------------------------------------------------------------------------------------
            # Convert to byte, byte, ushort of x, y, sad
            #---------------------------------------------------------------------------------------
            iframe = struct.unpack(format, frame)

            #---------------------------------------------------------------------------------------
            # Iterate through the 21 x 20 macro-blocks, summing the X and Y vectors and the SAD
            # of the frame (sum of absolute differences, lower is better).
            #---------------------------------------------------------------------------------------
            ivx = 0.0
            ivy = 0.0
            sad = 0

            for ii in range(0, 420 * 3, 3):
                ivy += iframe[ii]
                ivx += iframe[ii + 1]
                sad += iframe[ii + 2]

            #---------------------------------------------------------------------------------------
            # Scale the macro block values to the speed increment in meters per second
            #---------------------------------------------------------------------------------------
            ivx /= scale
            ivy /= scale     

            #---------------------------------------------------------------------------------------
            # Use the frame rate to convert velocity increment to distance increment
            #---------------------------------------------------------------------------------------
            idt = 1 / frame_rate
            idx = ivx * idt
            idy = ivy * idt

            #---------------------------------------------------------------------------------------
            # Integrate (sum due to fixed frame rate) the increments to produce total distance and velocity.
            # Note that when producing the diagnostic earth frame distance, it's the increment in distance
            # that's rotated and added to the total ed*
            #---------------------------------------------------------------------------------------
            avx += ivx
            avy += ivy
            adx += idx
            ady += idy

            time_stamp = time.time() - start_time

            motion_data.write("%f, %f, %f, %f, %f, %d\n" % (time_stamp, idx, idy, adx, ady, sad))

except:
    pass


#---------------------------------------------------------------------------------------------------
# Stop the video process
#---------------------------------------------------------------------------------------------------
video.send_signal(signal.SIGINT)

motion_data.flush()
motion_data.close()

py_fifo.close()
os.unlink("/dev/shm/motion_stream")


Better macro-block tracking

With a more careful test this morning, here’s what I got.

Video macro-block tracking

This is at least as good as the PX4FLOW, so that’s now shelved, and testing will continue with the Raspberry Pi Camera.  I upgraded the camera to the new version for this test, as that’s what I’ll be installing on Hermione.

The cross of the diagonal and vertical should have happened lower, so that the diagonals to the right of the vertical overlapped – these are the exit from and re-entry to the house from the garden.  There are multiple possible reasons for this, and because this is now my code, I can experiment to resolve the offset; something I simply couldn’t do with the PX4FLOW and its offsets.

Next step is to sort out the units, including which direction the camera’s X and Y axes face compared with Hermione’s X and Y from the accelerometer and gyro.
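That axis sort-out will boil down to something like this hypothetical mapping – the swap and sign flip below are placeholders until I’ve checked how the camera is actually mounted relative to the IMU:

```python
# Hypothetical sketch of the camera-to-body axis mapping.  The camera is
# mounted on the underside, so its X/Y axes may be swapped and/or negated
# relative to the accelerometer/gyro body frame.  The particular swap and
# sign here are placeholders, not the measured orientation.
def camera_to_body(camera_x, camera_y):
    body_x = camera_y        # assumed: camera Y lies along the body X axis
    body_y = -camera_x       # assumed: camera X lies along the negated body Y axis
    return body_x, body_y

print(camera_to_body(1.0, 2.0))   # (2.0, -1.0) under these assumed signs
```

Once the real mounting is confirmed, this is a one-line change applied to each frame’s averaged vector before it feeds the lateral PIDs.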