Lateral motion tracking with yaw

I’m doing some very careful testing before I set Hermione loose live to fly in a circle.  This morning, I’ve confirmed the video lateral motion block tracking is working well.

For this first unpowered flight, I walked her forwards about 3m and then left by about the same.  Note that she always pointed in the same direction; I walked sideways to get the left movement:

Forward – Left

For this second unpowered flight, again I walked her forwards about 3m, but then rotated her by 90° CCW before walking forwards another 3m.  Because of the yaw, from her point of view she only flew forwards, and the yaw doesn’t show up on the graph.  This is exactly how it should be:

Forward – Yaw 90° CCW – Forward

So I’m happy the lateral motion tracking is working perfectly.  Next I need to look at the target.  I can do that with the same stats.

The only problem I had was that the sun needs to be shining brightly for the video tracking to ‘fly’ above the lawn; clearly it needs the high contrast of sunlit grass.

That’s better…

not perfect, but dramatically better.  The flight plan was:

  • take off to 1m
  • move left over 6s at 0.25m/s while simultaneously rotating ACW 90° to point in the direction of travel
  • land.

I’d spent yesterday’s wet weather rewriting the macro-block processing code, breaking it up into 5 phases (sketched in code after the list):

  1. Upload the macro block vectors into a list
  2. For each macro-block vector in the list, undo yaw that had happened between this frame and the previous one
  3. Fill up a dictionary indexed with the un-yawed macro-block vectors
  4. Scan the dictionary, identifying clusters of vectors and assigning scores, building a list of the highest scoring vector clusters
  5. Average the few highest scoring clusters, reapply the yaw removed in step 2, and return the resultant vector
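
Purely to make the phases concrete, here’s a minimal sketch in Python of how the five phases might hang together; the data layout, the cluster scoring and the check_imu_fifo() callback are my assumptions for illustration, not the real flight code.

import math

def process_macro_blocks(vectors, yaw_increment, check_imu_fifo):
    # 'vectors' is a list of (x, y) macro-block vectors from one video frame;
    # 'yaw_increment' is the yaw change (radians) between this frame and the
    # previous one; 'check_imu_fifo' is a hypothetical callback that drains
    # the IMU FIFO if it's filling up.
    if not vectors:
        return 0.0, 0.0

    # Phase 1: the macro-block vectors have already been uploaded into a list.
    check_imu_fifo()

    # Phase 2: undo the yaw that happened between this frame and the previous.
    c, s = math.cos(-yaw_increment), math.sin(-yaw_increment)
    unyawed = [(x * c - y * s, x * s + y * c) for x, y in vectors]
    check_imu_fifo()

    # Phase 3: fill a dictionary indexed by the un-yawed vectors (rounded so
    # near-identical vectors share a key), counting how often each occurs.
    counts = {}
    for x, y in unyawed:
        key = (round(x), round(y))
        counts[key] = counts.get(key, 0) + 1
    check_imu_fifo()

    # Phase 4: scan the dictionary, scoring each vector by the population of
    # its neighbouring cells to find the highest scoring clusters.
    def score(key):
        kx, ky = key
        return sum(counts.get((kx + dx, ky + dy), 0)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1))
    best = sorted(counts, key=score, reverse=True)[:3]
    check_imu_fifo()

    # Phase 5: average the few highest scoring clusters and reapply the yaw
    # removed in phase 2.
    avg_x = sum(k[0] for k in best) / len(best)
    avg_y = sum(k[1] for k in best) / len(best)
    c, s = math.cos(yaw_increment), math.sin(yaw_increment)
    return avg_x * c - avg_y * s, avg_x * s + avg_y * c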

Although this is quite a lot more processing, splitting it into five phases (compared to yesterday’s two) means that between each phase the IMU FIFO can be checked, and processed if it’s filling up, thus avoiding a FIFO overflow.

Two more subtle problems remain:

  1. She should have stayed in frame
  2. She didn’t quite rotate the 90° to head left.

Nonetheless, I once more have a beaming success smile.

One last sanity check

Before moving on to compass and GPS usage, there’s one last step I want to ensure works: lateral movement.

The flight plan is defined thus (a rough sketch in code follows the list):

  • take-off in the center of a square flight plan to about 1m height
  • move left by 50cm
  • move forward by 50cm – this places her in the top left corner of the square
  • move right by 1m
  • move back by 1m
  • move left by 1m
  • move forwards by 50cm
  • move right by 50cm
  • land back at the take-off point.
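
Here’s a rough sketch of how I might express that flight plan in code; the tuple layout, axis naming and speeds are my own illustration rather than the actual flight plan format.

# Hypothetical encoding of the square flight plan as a list of
# (description, duration s, forward m/s, left m/s, up m/s) phases.
FLIGHT_PLAN = [
    ("take off to ~1m",   3.0,  0.00,  0.00,  0.33),
    ("move left 50cm",    2.0,  0.00,  0.25,  0.00),
    ("move forward 50cm", 2.0,  0.25,  0.00,  0.00),
    ("move right 1m",     4.0,  0.00, -0.25,  0.00),
    ("move back 1m",      4.0, -0.25,  0.00,  0.00),
    ("move left 1m",      4.0,  0.00,  0.25,  0.00),
    ("move forward 50cm", 2.0,  0.25,  0.00,  0.00),
    ("move right 50cm",   2.0,  0.00, -0.25,  0.00),
    ("land",              3.0,  0.00,  0.00, -0.33),
]

def velocity_targets(elapsed_time):
    # Return the (forward, left, up) velocity targets for a given elapsed time.
    t = 0.0
    for _, duration, vx, vy, vz in FLIGHT_PLAN:
        if elapsed_time < t + duration:
            return vx, vy, vz
        t += duration
    return 0.0, 0.0, 0.0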

The result’s not perfect despite running the ground-facing camera at 640 x 640 pixels; to be honest, with lawn underneath her, I still think she did pretty well.  She’s still a little lurchy, but I think some pitch / roll rotation PID tuning over the IKEA mat should resolve this quickly.  Once again, judge for yourself whether she achieved this 34 second flight well enough.

30s with, <10s without

A few test runs.  In summary, with the LiDAR and camera fused with the IMU, Zoe stays over her play mat at a controlled height for the length of the 30s flight.  Without the fusion, she lasted just a few seconds before she drifted off the mat, lost her height, or headed towards me with menace (a kill ensued).  I think that’s pretty conclusive proof the fusion code works!

With Fusion:

Without Fusion:

Better macro-block interpretation

Currently, getting lateral motion from a frame full of macro-blocks is very simplistic: find the average SAD value for the frame, and then only include those vectors whose SAD is lower.

I’m quite surprised this works as well as it does, but I’m fairly sure it can be improved.  There are four factors to the content of a frame of macro-blocks:

  • yaw change: all macro-block vectors will circle around the centre of the frame
  • height change: all macro-block vectors will point towards or away from the centre of the frame.
  • lateral motion change: all macro-block vectors point in the same direction in the frame.
  • noise: the whole purpose of macro-blocks is simply to find the best matching blocks between two frames; doing this with a chess board (for example) could well have any block from the first frame matching any one of the 50% of identical blocks in the second frame.

Given a frame of macro-blocks, the yaw increment between frames can be found from the gyro, and thus be removed easily.

The same goes for the height change, derived from the LiDAR.

That leaves either noise or a lateral vector.  By then averaging these values out, we can pick just the vectors that are similar in distance / direction to the average vector.  SAD doesn’t come into it.
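
As a sketch of that idea, assuming the per-frame yaw increment (from the gyro) and height change (from the LiDAR) are already known, and that each macro-block’s position in the frame is available alongside its vector, something like this would do it; the scalings and signs here are assumptions.

import math

def lateral_from_macro_blocks(blocks, yaw_increment, height_change, height,
                              tolerance=2.0):
    # 'blocks' is a list of ((bx, by), (dx, dy)) pairs: each macro-block's
    # position relative to the frame centre and its motion vector.
    if not blocks or height <= 0:
        return 0.0, 0.0

    residuals = []
    for (bx, by), (dx, dy) in blocks:
        # Yaw change makes every vector circle the frame centre...
        yaw_dx, yaw_dy = -by * yaw_increment, bx * yaw_increment
        # ...while a height change makes every vector point towards or away
        # from the centre, in proportion to its distance from the centre.
        scale = -height_change / height
        z_dx, z_dy = bx * scale, by * scale
        residuals.append((dx - yaw_dx - z_dx, dy - yaw_dy - z_dy))

    # Average what's left, then keep only the vectors close to that average:
    # the rest is treated as noise.  SAD doesn't come into it.
    avg_x = sum(x for x, _ in residuals) / len(residuals)
    avg_y = sum(y for _, y in residuals) / len(residuals)
    kept = [(x, y) for x, y in residuals
            if math.hypot(x - avg_x, y - avg_y) < tolerance]
    if not kept:
        return avg_x, avg_y
    return (sum(x for x, _ in kept) / len(kept),
            sum(y for _, y in kept) / len(kept))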

This won’t be my first step, however: that’s to work out why the height of the flight wasn’t anything like as stable as I’d been expecting.

3 meter square

I added some filtering to the macroblock vector output, including only those with individual SAD values less than half the overall SAD average for that frame.  I then took the rig for a walk around a 3m square (measured fairly accurately), held approximately 1m above the ground.  The graph goes anti-clockwise horizontally from 0,0.  The height probably descended a little towards the end, hence the overrun from >1000 to <0 vertically.
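
For reference, the SAD filtering is as simple as it sounds.  A minimal sketch, assuming each macro-block arrives as a (dx, dy, sad) tuple:

def filter_by_sad(blocks):
    # Keep only the vectors whose individual SAD is less than half the
    # average SAD for the frame, then sum what's left to get the frame's
    # lateral motion vector.
    if not blocks:
        return 0.0, 0.0
    average_sad = sum(sad for _, _, sad in blocks) / len(blocks)
    kept = [(dx, dy) for dx, dy, sad in blocks if sad < average_sad / 2]
    frame_dx = sum(dx for dx, _ in kept)
    frame_dy = sum(dy for _, dy in kept)
    return frame_dx, frame_dy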

3m square walk

This is probably pretty perfect and more than good enough to work out the scale from macroblock vectors to meters:

1000 (vectors) = 3 (meters) @ 1m height or

meters = (0.003 * height) vectors

Previously, I’d tried to calculate the macro-block vector to meter scale as follows, where ‘macro-blocks per frame’ is the frame width in pixels divided by the 16 pixel macro-block size:

meters = (2 * height * tan(lens angle / 2) / macro-blocks per frame) vectors
meters = (2 * height * tan(48.4° / 2) / (400 / 16)) vectors
meters = (0.036 * height) vectors

Clearly my mathematical guess was wrong by a factor of roughly 12.  The measurement errors for the test run were < 3% horizontally, and perhaps 5% vertically so they don’t explain the discrepancy.

I needed to investigate further how the vector values scale with frame size.  So I upped the video frame from 400 x 400 pixels to 600 x 600 pixels:

(3m @ 600 pixels)²

The 1000 x 1000 vector square has now turned nicely into a 1500 x 1500 vector square corresponding to the 400 x 400 to 600 x 600 frame size change.  So again,

1500 (vectors) = 3 (meters) @ 1m height or

meters = (0.002 * height) vectors

So, unsurprisingly, the macro-block vector to meter scale is inversely proportional to the frame width:

meters = (scale * height / frame width) * vectors
meters = (1.2 * height / frame width) * vectors
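
As a quick sanity check of that empirically derived scale against both test walks (a sketch, with the factor of 1.2 taken from above):

def vectors_to_meters(vector_sum, height, frame_width, scale=1.2):
    # meters = (scale * height / frame width) * vectors
    return scale * height / frame_width * vector_sum

# Both walks should come out at roughly 3 meters at 1m height:
print(vectors_to_meters(1000, height=1.0, frame_width=400))   # 3.0
print(vectors_to_meters(1500, height=1.0, frame_width=600))   # 3.0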

But what does this 1.2 represent?  I’ve done some thinking, to no avail directly, but I think I’ve found a way that means I don’t need to know.  That’ll be my next post.

Fusion thoughts

Here’s my approach to handling the tilt for downward facing vertical and horizontal motion tracking.

Vertical compensation

This is what I’d already worked out to take into account any tilt of the quadcopter frame, and therefore of the LEDDAR sensor readings.
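
My reading of the vertical compensation, as a sketch of the diagram rather than the flight code itself: the LEDDAR measures along the tilted body Z axis, so the earth-frame height is the reading scaled by the cosine of the tilt.

import math

def earth_frame_height(leddar_distance, pitch, roll):
    # Angles in radians; the cos(pitch) * cos(roll) factor is the vertical
    # component of the tilted body Z axis in the earth frame.
    return leddar_distance * math.cos(pitch) * math.cos(roll)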

With this in place, this is the equivalent processing for the video motion tracking:

Horizontal compensation

The results of both are in the earth frame, as are the flight plan targets, so I think I’ll swap to earth-frame processing until the last step.

One problem: I’m now pushing the limit of the code keeping up with the sensors.  With diagnostics on and 1kHz IMU sampling, the IMU FIFO overflows as the code can’t keep up.  This is with Zoe (1GHz CPU speed) and without LEDDAR.

LEDDAR has already forced me to drop the IMU sample rate to 500Hz on Chloe; I really hope this is enough to also allow LEDDAR, macro-block processing and diagnostics to work without FIFO overflow.  I really don’t want to drop to the next level of 333Hz if I can help it.

Coding is in progress already.

Video motion working perfectly…

but I’m not sure how to fix it.

Imagine a video camera facing the ground over a high colour / contrast surface.  If the camera moves forward, the macro-blocks show forward movement.  But if the camera stays stationary but pitches backwards, then the macro-blocks show a similar forward motion, because the rearward tilt means the camera now points forward of where it was pointing.

The motion processing code sees this camera rearward tilt as forward motion, and so applies more rearward tilt to compensate – I think you can see where this leads!

I’m obviously going to have to remove the perceived forward motion due to rearward pitch by applying some maths using the pitch angle, but I’m not quite sure what that maths is yet.  The need for this is backed up by the fact the PX4FLOW has a gyro built in, so it too must be using it for this type of compensation.  The fact it’s a gyro suggests it may not be rotation matrix maths, but something simpler.  Thinking hat on.


Just realized I’d solved exactly this problem for height: knowing the height (the PX4FLOW has a URF) and the incremental angle change in the quadcopter frame (the PX4FLOW has a gyroscope), you can calculate the apparent distance shift due to the tilt, subtract it from the camera values, and thus come up with the real distance.
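
In code, that compensation might look something like this sketch; the axis pairing and signs are assumptions, but the core is just subtracting height * tan(angle change) from what the camera reports.

import math

def remove_tilt_shift(camera_dx, camera_dy, height, delta_pitch, delta_roll):
    # The apparent ground shift caused by an incremental tilt of the camera
    # between two frames is roughly height * tan(angle change); subtract it
    # to leave only genuine lateral motion.  Angles in radians, distances in
    # meters.
    apparent_dx = height * math.tan(delta_pitch)
    apparent_dy = height * math.tan(delta_roll)
    return camera_dx - apparent_dx, camera_dy - apparent_dy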

Everything but the kitchen sink from macro blocks?

Currently the macro-blocks are just averaged to extract lateral distance increment vectors between frames.  Due to the fixed frame rate, these can produce velocity increments.  Both can then be integrated to produce absolutes.  But I suspect there’s even more information available.

Imagine videoing an image like this multi-coloured circle:

Newton Disc

It’s pretty simple to see that if the camera moved laterally between two frames, it’d be pretty straightforward for the video compression algorithm to break the change down into a set of identical macro-block vectors, each showing the direction and distance the camera had shifted between frames.  This is what I’m doing now by simply averaging the macro-blocks.

But imagine the camera doesn’t move laterally, but instead it rotates and the distance between camera and circle increases.  By rotating each macro-block vector according to its position in the frame relative to the centre point and averaging the results, what you get is a circumferential component representing yaw change, and an axial component representing height change.

I think the way to approach this is first to get the lateral movement by simply averaging the macro-block vectors as now; the yaw and height components will cancel themselves out.

Then by shifting the contents of the macro-block frame by the averaged lateral movement, the axis is brought to the center of the frame – some macro-blocks will be discarded to ensure the revised macro-block frame is square around the new center point.

Each of the macro-block vectors is then rotated according to its position in the new square frame.  The angle of each macro-block in the frame is pretty easy to work out (e.g. a 4×4 square has rotation angles of 45, 135, 225 and 315 degrees; a 9×9 square has blocks to be rotated by 0, 45, 90, 135, 180, 225, 270 and 315 degrees), so averaging the X and Y axes of these rotated macro-block vectors gives a measure of yaw and size change (height).  I’d need to include distance from the centre when averaging out these rotated blocks.
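
A sketch of that decomposition, assuming the lateral motion has already been subtracted and each macro-block’s position relative to the new centre is known; the weighting and the final scaling to radians / meters are the calibration still to be done.

import math

def yaw_and_height_components(blocks):
    # 'blocks' is a list of ((bx, by), (dx, dy)) pairs: each macro-block's
    # position relative to the re-centred frame and its residual vector.
    radial_sum = 0.0      # towards / away from the centre => height change
    tangential_sum = 0.0  # circling the centre => yaw change
    weight_sum = 0.0
    for (bx, by), (dx, dy) in blocks:
        radius = math.hypot(bx, by)
        if radius == 0:
            continue
        # Unit vectors pointing outwards and anti-clockwise around the centre.
        ur_x, ur_y = bx / radius, by / radius
        ut_x, ut_y = -ur_y, ur_x
        radial_sum += (dx * ur_x + dy * ur_y) * radius
        tangential_sum += (dx * ut_x + dy * ut_y) * radius
        weight_sum += radius
    if weight_sum == 0:
        return 0.0, 0.0
    return radial_sum / weight_sum, tangential_sum / weight_sum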

At a push, even pitch and roll could be obtained because they would distort the circle into an oval.

Yes, there’s calibration to do, there’s a dependency on textured multicoloured surfaces, and the accuracy will be very dependent on frame size and rate.  Nevertheless, in a perfect world, it should all just work(TM).  How cool would it be to have the Raspberry Pi camera providing this level of information!  No other sensors would be required except for a compass for orientation, GPS for location, and LiDAR for obstacle detection and avoidance.  How cool would that be!

Anyway, enough dreaming for now, back to the real world!

The Canada Geese are flying North

No, it’s not a secret code word; great flocks of geese are flying daily over my house in V-formation, honking all the way.  It’s a magnificent sight to see if you can stand the noise.  Around where I live there are many artificial lakes resulting from surface gravel mining, a condition of which is that when finished, each mine is rebuilt as a lake to attract the wildlife, including the geese.

Anyway, their migration up north is a true sign of Autumn, and the weather maker has taken the hint: the wind-speeds are up in the teens again.  Which is annoying, as I need to get the camera motion working smoothly outside before I can bring it inside for inclement weather testing.

Since the embarrassing flight the other day, I’ve made two changes:

  1. I’ve added complementary filters for the vertical and lateral motion sensor velocity and distance inputs from the camera and LEDDAR – previously I was directly overriding the integrated IMU values, and this is the most likely cause (IMHO) of the poor flight quality.  A sketch of the sort of filter I mean follows this list.
  2. I’ve ordered a multi-coloured picnic blanket which hopefully will make the job of the macro-block code much easier compared to the high-resolution green grass it’s had to use previously.
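
For point 1, a generic complementary filter of the kind I mean (a sketch; the time constant and the way the increments are produced are my assumptions, not the flight code):

def complementary_filter(fused, imu_increment, sensor_value, tau, dt):
    # The fused estimate follows the integrated IMU value over short
    # timescales, but is steered towards the camera / LEDDAR value over
    # timescales longer than tau.
    alpha = tau / (tau + dt)
    return alpha * (fused + imu_increment) + (1 - alpha) * sensor_value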

I am also considering increasing the video resolution (320 x 320) and frame rate (10Hz) up to 480 x 480 or 640 x 640 @ 20Hz.  But there’s a balance here to ensure this still allows enough spare time for the processing of the IMU sensor data coming in at 100Hz.