What’s better than a PX4FLOW?

The Raspberry Pi camera of course:

RaspiCam Video motion blocks

A very similar walk around the garden to the one before, but this time running the Raspberry Pi camera, ground facing, videoing at 10 frames per second at 320 x 320 resolution, producing 16 x 16 macro-blocks per frame, which are averaged per frame and logged.

The macro-blocks give the pixel shift between one frame and the next to help with the frame compression; I’m not sure whether the units are in pixels or macro-blocks, but that’s simple to resolve.  Combined with the height from the LEDDAR and the focal length of the lens, it’ll be trivial to convert these readings to a distance in metres.
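As a sanity check on that conversion, here’s a minimal sketch of the arithmetic involved, assuming the shift is reported in pixels (multiply by 16 first if it turns out to be macro-blocks) and that the lens focal length has been expressed in pixel units – the 300 below is a placeholder, not a measurement:

    def shift_to_metres(shift_pixels, height_metres, focal_length_pixels=300.0):
        # Pinhole camera similar triangles: a shift of one pixel at the sensor
        # corresponds to (height / focal_length) metres of movement over the ground.
        return shift_pixels * height_metres / focal_length_pixels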

The results here are at least as good as the PX4FLOW, if not better, and the processing of the macro-blocks to distance is very lightweight.

This is definitely worth pursuing as it’s much more in keeping with how I want this to work.  The PX4FLOW has served its purpose well: understanding how it worked showed me how it could be replaced with the RPi camera.

There are further bonuses too: because of the video’s fixed frame rate, the macro-blocks are producing distance increments, whereas the PX4FLOW only produced velocities, and that means I can add in horizontal distance PIDs to kill drift and ensure the quad always hovers over the same spot.  And even better, I’m no longer gated on the arrival of the new PCBs: those were required for X8 and for I2C for the PX4FLOW; I’ll need them eventually for X8, but for now the current PCB provides everything I need.
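For what it’s worth, here’s a rough sketch (not the flight code) of what such a horizontal distance PID might look like: integrate the per-frame distance increments into a position, and feed the position error into a simple PID whose output would nudge the pitch or roll target.  The class, the gains and the function names are all made up for illustration:

    class PID:
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.previous_error = 0.0

        def compute(self, error, dt):
            # Standard PID: proportional + integral + derivative terms
            self.integral += error * dt
            derivative = (error - self.previous_error) / dt if dt > 0 else 0.0
            self.previous_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    x_position = 0.0                         # metres drifted from the take-off spot
    x_pid = PID(kp=0.5, ki=0.0, kd=0.1)      # placeholder gains

    def on_camera_frame(x_increment_metres, dt):
        # The camera gives a distance increment per frame; integrate it to a
        # position, then drive the position error back towards zero.
        global x_position
        x_position += x_increment_metres
        return x_pid.compute(0.0 - x_position, dt)   # e.g. fed to the pitch angle target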

We are most definitely on a roll again!

The death of kitty++

The h.264 macro-block data that the GPU spits out can’t be used as an accurate, long-term motion tracking sensor complementing the integrated accelerometer’s short-term accuracy and high resolution.  Period 🙁

It works perfectly for detecting which parts of a video frame have moved since the previous frame, and therefore for optimizing compression of the new frame by only including the changes from the previous one.  That’s what it’s designed to do and it does it well.

But what’s required for my horizontal motion tracking is an accurate measure of how far and in which direction every pixel has moved from one frame to the next.  And it almost does a perfect job of this, except for two problems which completely rule out its use for this purpose:

  • periodically, the macro-blocks are reset for a frame – for that frame there is no measure of how the pixels have moved compared to the previous one.
  • each macro-block motion vector is accompanied by a sum of absolute differences (SAD) – a numerical value for the level of confidence in the vector’s accuracy.  Testing yesterday revealed that these values are large and fluctuate smoothly across a wide range for a given pair of frames, with no obvious high- and low-confidence values.  And that means it’s not possible to combine the vectors and the SAD values (for example with the weighted average sketched below) to get an accurate vector for the whole frame.
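For completeness, this is the kind of SAD-weighted combination I’d hoped would work – a minimal sketch assuming the per-frame data arrives as a numpy structured array with ‘x’, ‘y’ and ‘sad’ fields – but with the SAD values so flat, the weighting makes little difference:

    import numpy as np

    def frame_vector(motion):
        # Weight each macro-block vector by the inverse of its SAD so that
        # low-SAD (high-confidence) blocks would dominate the frame average.
        weights = 1.0 / (motion['sad'].astype(np.float64) + 1.0)   # +1 avoids divide-by-zero
        return (np.average(motion['x'], weights=weights),
                np.average(motion['y'], weights=weights))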

So it’s back to the laser dot tracking as the only possible solution until the Kickstarter  LiDAR I backed is launched.

 

Motion vectors from Raspicam videos

After a gentle nudge from Jeremy and his pointy stick yesterday, I had a quick play with Kitty++.  Follow the link for what she does and how, but in summary, she uses the Raspicam video to churn out the ‘motion vectors’ between video frames as part of the h.264 video compression algorithm.  Because the video frames are at a fixed rate (10fps in my case), how ‘far’ a frame has moved is also how ‘fast’ it is moving – i.e. a velocity vector that could be merged / blended / fused with the accelerometer velocity as a way to constrain drift.
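For anyone wanting to try the same, here’s a minimal sketch of how the picamera library exposes those vectors – this isn’t the Kitty++ code itself, just the general shape of it, with the resolution and frame rate matching what I’m using:

    import numpy as np
    import picamera
    import picamera.array

    class MotionLogger(picamera.array.PiMotionAnalysis):
        def analyse(self, a):
            # 'a' holds one frame's macro-block vectors: fields 'x', 'y' and 'sad'
            print("frame average: x=%.2f y=%.2f" % (np.mean(a['x']), np.mean(a['y'])))

    with picamera.PiCamera(resolution=(320, 320), framerate=10) as camera:
        with MotionLogger(camera) as output:
            camera.start_recording('/dev/null', format='h264', motion_output=output)
            camera.wait_recording(10)        # a 10 second capture
            camera.stop_recording()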

The sub-project stalled last year due to a lack of believable data, but yesterday, after a couple of bug fixes, here’s a chart showing the movement over a 10s ‘flight’ of my prototyping Pi.  I was moving the Pi forwards and backwards and left and right in a cross shape – you can see the movement, but the cross shape gets corrupted because periodically the video compression algorithm resets itself to maintain accuracy – a problem for this integrated distance chart, but not for the velocity vector I need.

Raspicam motion detection

If you follow the line from the 0,0 point, you can see the line travelling left / right first, then up / down.  This is 100 frames over 10 seconds.

There are a lot of details to be tweaked to make this work properly, especially getting the units correct, but it’s now another viable solution, in addition to the laser dot tracking, for motion control.  The advantage of this over the laser tracking is that I can get many more frames per second, so the motion fusion can happen much faster than the 1s limit I’m seeing with the laser tracking.

P.S. The only real downside of this is that without a camera, Zoe can’t use it, and Zoe will continue to be my preferred show-and-tell quad as she’s easier to transport.  So I will continue to work out why Zoe drifts in a way she shouldn’t according to the 0g offsets.  I do have one idea to test later today if I get the chance.

Scratched record…

To get Zoe or Phoebe any better, I need more sensors.

  1. GPS: I’ve heard that while GPS may only have an accuracy of 10m, the resolution is a lot finer and is stable; that could be good for outdoors but useless for indoors.
  2. My various laser tracking solutions should be good for indoors.  My current best idea is 2 down-facing lasers on the frame and one hand-held, also pointing at the floor; combined with the RaspiCam, this could give height and a simple degree of line tracking.  Probably not good enough for use outside in bright sunlight, however.
  3. The ultrasonic range finder could provide height indoors and out, but can’t help with horizontal drift.
  4. The barometer is a chocolate teapot to me – despite its high resolution, indoor air-pressure fluctuations will spoil it.
  5. The compass could be useful for yaw, but only becomes worth its weight in gold alongside GPS: orientation and location.

Currently, 2 sounds the most viable, useful, simple and cool – imagine taking a quadcopter for a walk indoors following the 3 laser dots on the floor trying to keep them in an equilateral triangle of fixed size.  Except…

Zoe Pi Zero doesn’t have a camera connector so that rules her out.

Phoebe’s A+ is powered solely by the LiPo to make space underneath her for the camera and URF.  That makes it much harder to run the code without also running the motors, so I need to sort that out to test the new idea safely.  Then there’s the issue that I think the laser processing will need a separate CPU to process the pictures – i.e. the A3 due for delivery some time this year.

Oh, and then I’ve just found this.

It might be time to take a break?

 

Another step of fine tuning

With the IMU sample rate set to 250 Hz (a 4ms gap between samples), there should be enough time to run motion processing for each sample without missing any; based on experience, motion processing takes 2 to 3ms. I’m currently averaging 5 samples before invoking motion processing (50Hz updates to the props).  Today I did some testing setting this to 2 and 1 (i.e. 125Hz and 250Hz updates to the props).
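As a simplified sketch of the loop in question (read_imu and process_motion are stand-ins for the real calls, not the HoG code): collect a batch of samples, average them, and only then run the 2-3ms motion processing so it stays within the sampling budget.

    def sample_loop(read_imu, process_motion, samples_per_motion=2):
        # Collect samples_per_motion (ax, ay, az) samples, average them, and only
        # then run motion processing, so prop updates happen at 250Hz / samples_per_motion.
        batch = []
        while True:
            batch.append(read_imu())                  # one sample every 4ms at 250Hz
            if len(batch) == samples_per_motion:
                ax = sum(s[0] for s in batch) / len(batch)
                ay = sum(s[1] for s in batch) / len(batch)
                az = sum(s[2] for s in batch) / len(batch)
                process_motion(ax, ay, az)            # PID updates and new prop speeds
                batch = []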

Setting 5 samples per motion processing gives 3430 samples @ 250Hz = 13.720 seconds of flight. time.time() says it took 13.77s = 99.63% of samples were caught.

Setting 2 samples per motion processing gave 3503 samples @ 250Hz = 14.012 seconds of flight. time.time() says it took 14.08s = 99.5% of samples were caught.

Setting 1 sample per motion processing gave 3502 samples @ 250Hz = 14.008 seconds of flight time. time.time() says it took 14.22s = 98.5% of samples were caught.

I’m guessing the slight decline is due to Linux scheduling; I chose to opt for 2 samples per motion processing, which updates the props at 125Hz or every 8ms.

And boy was the flight much smoother by having the more frequent, smaller increments to the props.

And I reckoned that with these faster, lower-lag updates to the motors, I might be able to trust the gyro readings for longer, so I changed the complementary filter tau (incrementally) to 5s from its previous 0.5s.
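For context, here’s the complementary filter in its usual textbook form (not the exact HoG code), just to show where tau fits in – a larger tau trusts the integrated gyro angle for longer before the noisy but drift-free accelerometer angle pulls it back:

    TAU = 5.0    # seconds - the new value; previously 0.5

    def complementary_filter(angle, gyro_rate, accel_angle, dt, tau=TAU):
        # Blend the gyro-integrated angle with the accelerometer-derived angle;
        # alpha tends to 1 as tau grows, so the gyro dominates for longer.
        alpha = tau / (tau + dt)
        return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle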

The sharp-sighted of you may have already seen the results in the numbers: I’ve now breached my 10s flight time target by 2 seconds (the other couple of seconds is warm-up time), with the same level of drift I could only get in 6 second flights a few weeks ago.  That 10s target was what I’d set myself before looking at feeding in other motion processing sensors based upon the RaspiCam, either laser (Kitty) or MPEG macro-block (Kitty++) tracking.

Only downside – it’s rained nearly all day, so no chance of capturing one of these long flights on video.  Perhaps tomorrow?

Motion sensors

Other than a few test flights, I’ve now put Phoebe on hold so as not to break her before the CotswoldJam at the end of September.  I’ve bought her a new case so I can carry her around safely:

Phoebe’s Peli 1600 case

So this morning, partly triggered by a pingback yesterday, I picked up the motion processing using the RaspiCam YUV macro-block output – kitty++.py – again.  The motion processing (in a very simple form) is working fine, but it only produces a CSV file as output, allowing me to see the data as a graph in Excel:

Zig-zagging for 10 seconds

Ideally for testing, I’d want a small screen to show the direction / speed the motion processing is detecting.  And as she’s headless, I’d also like to add a button to press so that I can run various tests on demand.  In one of those twists of fate, the post brought my new E-paper HAT.  Here it is installed on Kitty:

Kitty’s E-paper screen

The camera is stuck underneath in one of the super-slim cases I found.

I now need to install the drivers, and update the code to draw variable length arrows for the orientation / speed vector.
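The arrow drawing itself should be straightforward with PIL – something along these lines, though the 200 x 96 canvas size and the pixels-per-m/s scale are guesses rather than the HAT’s real specs:

    import math
    from PIL import Image, ImageDraw

    WIDTH, HEIGHT = 200, 96      # assumed display size
    PIXELS_PER_MS = 40.0         # assumed scale: pixels of arrow per m/s

    def draw_motion_arrow(speed, direction_radians):
        # 1-bit image (which these displays generally want), white background,
        # with a line from the centre whose length is proportional to speed.
        image = Image.new('1', (WIDTH, HEIGHT), 1)
        draw = ImageDraw.Draw(image)
        cx, cy = WIDTH // 2, HEIGHT // 2
        length = speed * PIXELS_PER_MS
        ex = cx + length * math.cos(direction_radians)
        ey = cy - length * math.sin(direction_radians)     # screen y is inverted
        draw.line((cx, cy, ex, ey), fill=0, width=2)
        draw.ellipse((ex - 3, ey - 3, ex + 3, ey + 3), fill=0)  # blob as the arrow head
        return image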

After that, I need to add the ultrasonic range finder to get the distance from the ground.  I’ve got a few of these – they’ll do for Kitty, but with their 100kbps I2C baudrate, they’re no good for Phoebe, who needs 400kbps to capture all the sensor data.

Should keep me out of trouble for a while!

Random bits and bobs

Cotswold Jam and Cambridge Robot Wars

Unless I manage to break Phoebe before December, I intend to take her to the Cotswold Jam in September (where I’m one of the founders / organisers) and the Cambridge PiWars in December.

The code I’ll be running there will most likely be what I uploaded to GitHub yesterday.

Until then?

Over the next couple of months, I’ll probably just enjoy flying Phoebe and Chloe, and perhaps treat them to some new batteries – primarily so the colours match the rest of their frames!

I might add an angular (rather than motion) control version so I can show the difference in behaviours at the Jams.

Otherwise I’ll try to keep my tinkering to PID tuning, unless the A2 appears, leaving me enough spare CPUs to play more with threading (QCISRFIFO.py), Kitty and Kitty++.

What I won’t be doing

I won’t be adding a human into the feedback loop – no remote control – sorry to those of you who have nagged me to do this.

I also won’t be blogging as much, as I’ll have less to blog about other than flight videos.  Also it’s clear from the blog stats that people are starting to get bored…

Blog bandwidth usage

The underlying decline actually started in February, but was hidden behind the Build Your Own Autonomous Quadcopter – Bill of Materials, Assembly and Testing (BYOAQ-BAT) articles that month, and the fact that PC World included me in their 10 insanely innovative, incredibly cool Raspberry Pi projects article in March.

Blog bandwidth

So I think for a while it’s TTFN but no doubt I’ll be back.

Minions

Here’s Phoebe showing that anything Chloe can do, she can do better.

Phoebe flies from Andy Baker on Vimeo.

I’ve upgraded her to T-motor MN3110 750kV motors, and I’ve upgraded them both to the Chinese equivalent of the T-motor props – £10 versus £80 for a set of 4 – they’re stronger and much cheaper, so they’ll pay for Phoebe’s new motors in no time.

Once more, I think Phoebe and Chloe are both as good as they can be without a big performance boost (Raspberry Pi A2  or GPIO / RPIO changed to use CFFI for PyPy) to allow

  • separation of sensor sampling and motion processing into separate processes on separate processors
  • kitty(++) to provide laser or picamera motion tracking.

I know I said I’d be implementing kitty(++) on my A+ HoGs, but with a single CPU, anything kitty(++) does is likely to steal CPU cycles from HoG, and so more samples would get lost.  I’ll keep working on kitty++ motion from the picamera codec macro-blocks, as this has the lesser CPU impact due to its use of the GPU, but I’m not convinced it’ll work on just an A+.

But first, I’ll try to understand CFFI to see how hard it would be to recompile the CPython RPIO / GPIO libraries and get better PyPy performance as a result.

 

Crass assumption

The new code I posted yesterday is based on the assumption that the motion processing code takes less than 1ms, so that no samples will be missed.  I was aware that the assumption is probably not true, but thought I really ought to check this morning; the net result is that it takes about 3.14ms, and so 3 samples are missed.  That’s more than enough to skew the accelerometer integration, resulting in duff velocities and drift.

I couldn’t spot any areas of the code that could be trimmed significantly, so it’s back to working out why pypy runs so much slower than CPython – about 2.5 times based on the same test.

I will be sticking with this latest code as I believe its timing is still better than the time.time() version.  I just need to speed it up a little.  FYI the pypy performance data from their site suggests there should be more than a six-fold performance improvement based upon the pypy version (2.2.1) used for the standard Raspbian distribution; that’s more than enough.

I am still tinkering with kitty++ in the background, and have the macro-block data, but need to work out the best / correct way to interpret it.  But it’s blocked because, for some reason, kitty’s Raspberry Pi can’t be seen by other computers on my home network other than by IP address.  Just some DHCP faff I can’t be bothered to deal with at the moment.

Wasted time.time()

I’ve been doing some fine tuning of the HoG code as a result of the FIFO trial I did.  The FIFO timing came from the IMU sampling rate – there’s a sample every 1ms and so there’s no need for using time.time() when the IMU hardware clock can do it.

I’ve now applied the same to the hardware interrupt driven code; motion processing now takes less time as it contained the only call to time.time(), which means there’s a smaller chance of missing samples too.  I’ve also added code so that when there are I2C / data corruption errors, that counts as another 1ms gap, since the code then waits for the next hardware interrupt.
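In sketch form, the idea is just this (illustrative names, not HoG’s): count data-ready interrupts and multiply by the known sample interval, rather than calling time.time() in the motion loop.

    SAMPLE_INTERVAL = 0.001    # seconds between IMU data-ready interrupts

    class SampleClock:
        def __init__(self):
            self.samples = 0

        def tick(self):
            # Called once per hardware interrupt, whether the I2C read succeeded
            # or not - a corrupt read still counts as a 1ms gap.
            self.samples += 1
            return self.samples * SAMPLE_INTERVAL    # elapsed time without time.time()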

I’ve also removed the 0g calibration again, as I’m fairly convinced it’s pointless – I _think_ subtracting the (Butterworth filter extracted) gravity from the acceleration makes that calibration redundant.  Removing it also adds a very marginal performance increase to motion processing.
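For illustration only, here’s an offline sketch of what that Butterworth step does – using scipy rather than the in-flight IIR implementation, and with an assumed 0.1Hz cutoff: low-pass the raw accelerometer readings to recover gravity, then subtract it to leave just the acceleration due to motion.

    import numpy as np
    from scipy.signal import butter, lfilter

    def split_gravity(acceleration, sample_rate=250.0, cutoff=0.1):
        # 2nd-order low-pass Butterworth: anything changing slower than the cutoff
        # is treated as gravity, everything faster is motion.
        acceleration = np.asarray(acceleration, dtype=float)
        b, a = butter(2, cutoff / (sample_rate / 2.0), btype='low')
        gravity = lfilter(b, a, acceleration)
        return gravity, acceleration - gravity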

A few test flights showed no regression (nor noticeable improvement), so I’ve uploaded the code to GitHub.  You also need to grab the latest version of my GPIO library, as it now imports as GPIO rather than as a clone of RPi.GPIO.