Penelope++

I’m still working on Penelope in the background, primarily on her physical frame – legs and body. The changes are subtle but better in my opinion: lighter yet longer legs, combined with a longer overlapping frame, make her lighter yet more stable in flight, and give better protection against winter weather conditions.

There are some refined code changes here too.

With regard to the on-board camera to be added to her, this is currently stalled on both servo accuracy and live-streaming the video to another RPi touch-screen, specifically displaying the video live.  Until boredom overtakes frustration, progress will be slow!


Penelope, Percy and Pat.

When I was at the latest Cotswold Jam, one of the regulars suggested adding a camera to one of my piDrones to video its flight firsthand; that planted a seed which blossomed overnight:

  • Set up a live video stream from an RPi0W attached to one of my piDrones, sending its output over WiFi to an RPi3+ RC touch-screen and displaying the video in a screen app there (see the sketch after this list)
  • Add on-screen pseudo-buttons to the RPi3+ RC touch-screen and use those to record the video to disk if specified
  • Add 2 on-screen pseudo-joysticks on the RPi3+ touch-screen RC, sending their input to the piDrone, much like the physical joysticks do now
  • Finally, add IMU / servos hardware / software to keep the camera stable when it’s attached to a flying piDrone – trivial compared to the items above.
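
As a taster for the first item, here’s a minimal sketch of streaming H.264 from the RPi0W camera over WiFi; the address and port are placeholders, and the RPi3+ RC end would need a matching listener feeding a player or screen app.

# Minimal streaming sketch for the RPi0W end - the address/port are hypothetical,
# and the RPi3+ RC needs a matching listener to display the stream.
import socket
import picamera

RC_ADDRESS = ("192.168.42.1", 8000)    # placeholder RPi3+ RC address and port

with picamera.PiCamera(resolution=(640, 480), framerate=24) as camera:
    sock = socket.socket()
    sock.connect(RC_ADDRESS)
    stream = sock.makefile("wb")
    try:
        camera.start_recording(stream, format="h264")   # H.264 straight onto the socket
        camera.wait_recording(60)                       # stream for a minute
        camera.stop_recording()
    finally:
        stream.close()
        sock.close()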

I’m completely ignorant of how to implement all but the last item, much like the challenge of building the piDrones 6 years ago, and hence it’s a fab challenge!  And in comparison to the piDrone itself, it’ll be cheap: I either already own the parts, or they are cheap to buy.  And I like the fact it gives Penelope a unique role – currently she’s just Hermione without the object avoidance.

First job though is to name the Raspberry Pis:


Why six years?

It probably took a year at the beginning to get the system working at a basic level, and a couple of years at the end adding cool features like GPS location tracking, object avoidance, human remote control, custom cool lids etc.  So what happened in the intermediate three years?

The biggest time-killer was drift in a hover: the accelerometer measures gravity and real acceleration along three perpendicular axes.  At the start of a flight, before takeoff, it reads a value for gravity; during the flight, integration of new readings against that initial gravity reading provides velocity.  On a calm summer’s day, all worked well, but in other conditions there were often collisions with brick walls etc.  Essentially, the sensors said she wasn’t moving whereas reality said she was.  It took years to recognise the correlation between winds, weather and crashes: lots of iterative speculation spanning the seasons was required to recognise the link to temperature variations*.
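
To make the effect concrete, here’s a toy illustration (numbers made up) of how a tiny shift in the gravity reading integrates into apparent velocity over a hover:

SAMPLE_RATE = 500.0                  # IMU samples per second
gravity_at_takeoff = 9.81            # m/s^2, measured on the ground before takeoff
gravity_in_flight = 9.79             # m/s^2, the sensor now cooled by the prop breeze

velocity = 0.0
for _ in range(int(SAMPLE_RATE * 10)):                         # a 10 second hover
    acceleration = gravity_in_flight - gravity_at_takeoff      # should be zero
    velocity += acceleration / SAMPLE_RATE                     # integrate to velocity

print("apparent velocity after 10s: %.2f m/s" % velocity)      # roughly -0.2 m/s of drift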

There are two factors: first, the IMU is optimised to operate above 20°C – below that, small temperature changes lead to significant drift in its values; secondly, once in flight, the weather and prop breeze cool the sensor compared to the on-the-ground measurement of gravity.  I tried lots of ways to handle the temperature changes; at one point, I even bought a beer fridge for mapping accelerometer gravity values against temperature!  It’s back to its intended purpose now.

Digging back, it’s clear how others have coped:

  • DIY RC drones synchronise the IMU and RC before each flight, and shelter the IMU from the wind.  The pilot isn’t interested in a static hover and is part of the feedback loop for where the drone goes.
  • At the bleeding edge, the DJI Mavic has a dynamic cooling system embedded in the depths of the frame keeping the IMU at a fixed temperature, along with two ground-facing cameras and GPS for long-term fine tuning.
  • All videos I saw were from the California coastline!!!

But I did it my way, the DIY experimentation way, resulting ultimately in passive temperature stability by wrapping the IMU in a case to suppress wind cooling, combined with a Butterworth low-pass filter to extract gravity long-term, and the LiDAR / RPi camera to track the middle ground.  I perfected reinvention of the wheel!
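
For illustration only, here’s the shape of that gravity extraction using an off-the-shelf Butterworth filter; the order, cutoff and data are placeholders, and scipy stands in for whatever the flight code actually does incrementally per sample:

import numpy as np
from scipy.signal import butter, lfilter

SAMPLE_RATE = 500.0    # Hz - IMU sampling rate
CUTOFF = 0.1           # Hz - gravity changes far more slowly than real motion

b, a = butter(2, CUTOFF / (SAMPLE_RATE / 2))       # 2nd-order low-pass coefficients

raw_z = 9.81 + 0.5 * np.random.randn(5000)         # fake noisy Z-axis accelerometer samples
gravity_z = lfilter(b, a, raw_z)                   # slowly varying gravity estimate
motion_z = raw_z - gravity_z                       # what's left is (roughly) real acceleration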

Hindsight is wonderful, isn’t it!


*My apologies for the complexity of this sentence; it represents the frustration and complexity I encountered working this out over the years!

What Zoe saw.

I’ve finally found, and almost resolved, the instability problem with Zoe.  Zoe runs on a Pi0W compared to Hermione’s Pi 3B; as a result she has a single CPU at a lower frequency, and she’s struggling to keep up.  In particular, the LiDAR / video distances lag increasingly behind the IMU:

Distance

As a result, once fused, Zoe was reacting to historic movements and so drifting.  The IMU data processing takes priority; by slowing it down to 333Hz (from 500Hz), there’s enough spare time to process the LiDAR / video distances and keep them in sync with the IMU.  Here’s the result for a simple 10 second hover.

There is still lagging drift, but much less than seen previously; this drift is still cumulative, hence my next step is to reduce the video frame size a little more.

While this might not be the reason behind the RC drift, it cannot have been helping.


By the way, the fact the incrementally lagging drift is consistently left / right strongly suggests I need to reduce the port / starboard PID gains to account for the weight balance, primarily the LiPo battery aligned fore / aft in the frame.  On the plus side, without this flaw, I’d never have been able to diagnose the drift problem so clearly and quickly!

Irritations and Innovations

No breakthroughs to report but:

  • Zoe is now running indoors safely with or without motion fusion installed
  • Without the fusion, she drifts horizontally and continues to rise during the hover phase: this suggests the value for gravity at takeoff has drifted during the flight, perhaps temperature related?  It’s only about 15°C in our house currently, which is outside the range she works well in.  The first test is to add a blob of Blu Tack on the IMU so it isn’t cooled by the breeze from the props.
  • With fusion, her height is much better, but she swings laterally around her takeoff point – the Garmin LiDAR-Lite is doing its job well, but some tuning is required for the lateral motion from the Raspberry Pi Camera.  Also it’s dark in the playroom, even with the lighting on, so I’m going to add LED lighting under her motors to give the camera better sight.  She’s flying over an IKEA LEKPLATS play mat, but ours seems very faded, so I’ll be getting her a new one.
  • I’ve added a whole bunch of safety trip wires so that, for example, if she’s 50cm above where the flight plan says she should be, the flight dies.  Together these make her much safer for flights indoors.
  • I’ve added enhanced scheduling to prioritise IMU over camera input when the IMU FIFO is reading half-full; this is to prevent FIFO overflows as camera processing sometimes takes a while, and the overflows have been happening a lot recently.
  • I’ve also added another couple of pairs of PIDs – I’m not sure how I got away without them before.  The equivalent controls yaw perfectly, but the pitch and roll angle PIDs were missing, skipping straight to the rotation rates instead.  The full cascade is now (see the sketch after this list):
    • distance (target – input) =PID=> corrective velocity target
    • velocity (target – input) =PID=> corrective acceleration target
    • acceleration target => angular target (maths to choose an angle for a desired acceleration)
    • angle (target – input) =PID=> corrective rotation target
    • rotation (target – input) =PID=> PWM output
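
Purely as a sketch of that cascade (the class, names and gains here are illustrative, not the real flight code), one axis looks something like this:

import math

class PID(object):
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, target, actual, dt):
        error = target - actual
        self.integral += error * dt
        derivative = (error - self.last_error) / dt
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# one PID per layer of the cascade - gains are placeholders
distance_pid = PID(1.0, 0.0, 0.0)
velocity_pid = PID(1.0, 0.0, 0.0)
angle_pid    = PID(1.0, 0.0, 0.0)
rotation_pid = PID(1.0, 0.0, 0.0)

def pitch_axis(dt, distance_target, distance, velocity, angle, rotation):
    velocity_target = distance_pid.update(distance_target, distance, dt)
    acceleration_target = velocity_pid.update(velocity_target, velocity, dt)
    angle_target = math.atan2(acceleration_target, 9.81)        # maths step: tilt needed for that acceleration
    rotation_target = angle_pid.update(angle_target, angle, dt)
    return rotation_pid.update(rotation_target, rotation, dt)   # feeds the PWM output / motor mix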

Together all these changes require a lot of tuning, tinkering and testing; I hope to report back with a video when there’s something worth sharing.

Testing distances

I took Hermione out for a quick flight again in X8 format with her monstrous new LiPo; the I2C I/O error kicked in just before 3.5 seconds, but not before she’d acquired hover state.  The test was driven by just the IMU, but with the Camera and Garmin LiDAR-Lite V3 tracking lateral and vertical distances, and I’m pretty impressed.  Compare and contrast these two graphs:

Distance tracking

The Garmin height is accurate – the scale difference is due to the accelerometer reading just over 1g on the ground, so based on that and the double integration to get distance, Hermione thinks she’s higher than she is.

The Camera horizontal is pretty good too – I probably just need to turn the gain down a touch.

This bodes well for tomorrow when the capacitors arrive (thanks for the nudge in the right direction, Jean) which should decouple the Garmin power-lines from the I2C lines, and thereby (hopefully) kill the I2C errors.

3 meter square

I added some filtering to the macroblock vector output, including only those vectors whose individual SAD values were less than half the overall SAD average for that frame.  I then took the rig for a walk around a 3m square (measured fairly accurately), held approximately 1m above the ground.  The graph goes anti-clockwise horizontally from 0,0.  The height probably descended a little towards the end, hence the overrun from >1000 to <0 vertically.
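
For reference, the filtering is along these lines, assuming the picamera motion-vector array layout with ‘x’, ‘y’ and ‘sad’ fields per macroblock:

import numpy as np

def filtered_mean_vector(motion_data):
    # keep only macroblocks whose SAD is under half the frame's average SAD
    sad = motion_data["sad"].astype(np.float64)
    good = sad < sad.mean() / 2
    if not good.any():
        return 0.0, 0.0
    return motion_data["x"][good].mean(), motion_data["y"][good].mean()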

3m square walk

This is pretty much perfect and more than good enough to work out the scale from macroblock vectors to meters:

1000 (vectors) = 3 (meters) @ 1m height or

meters = (0.003 * height) vectors

Previously, I’d tried to calculate the macroblock vector to meter scale as follows, where macroblocks per frame is the frame width in pixels divided by the 16 pixel macroblock size:

meters = (2 * height * tan(lens angle / 2) / macroblocks per frame) vectors
meters = (2 * height * tan(48.4 / 2) / (400 / 16)) vectors
meters = (0.036 * height) vectors

Clearly my mathematical guess was wrong by a factor of roughly 12.  The measurement errors for the test run were < 3% horizontally, and perhaps 5% vertically so they don’t explain the discrepancy.

I needed to investigate further how the vector values scale per frame size.  So I upped the video frame from 400 x 400 pixels to 600 x 600 pixels:

(3m @ 600 pixels)²

The 1000 x 1000 vector square has now turned nicely into a 1500 x 1500 vector square corresponding to the 400 x 400 to 600 x 600 frame size change.  So again,

1500 (vectors) = 3 (meters) @ 1m height or

meters = (0.002 * height) vectors

So unsurprisingly, the meters-per-vector scale is inversely proportional to the frame width:

meters = (scale * height / frame width) * vectors
meters = (1.2 * height / frame width) * vectors
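
In code, that empirical conversion is simply the following (1.2 being the constant measured above, height in meters, frame width in pixels):

def vectors_to_meters(vector_sum, height, frame_width):
    # empirically derived scale: meters = (1.2 * height / frame width) * vectors
    return 1.2 * height / frame_width * vector_sum

print(vectors_to_meters(1000, 1.0, 400))    # the 400 pixel frame: 1000 vectors at 1m -> 3.0m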

But what does this 1.2 represent?  I’ve done some thinking, but to no direct avail; however, I think I’ve found a way that means I don’t need to know.  That’ll be my next post.


Everything but the kitchen sink from macro blocks?

Currently the macro-blocks are just averaged to extract lateral distance increment vectors between frames.  Due to the fixed frame rate, these also yield velocity increments.  Both can then be integrated to produce absolutes.  But I suspect there’s even more information available.

Imagine videoing an image like this multi-coloured circle:

Newton Disc

It’s pretty simple to see that if the camera moved laterally between two frames, it’d be pretty straightforward for the video compression algorithm to break the change down into a set of identical macro-block vectors, each showing the direction and distance the camera had shifted between frames.  This is what I’m doing now by simply averaging the macro-blocks.

But imagine the camera doesn’t move laterally; instead it rotates, and the distance between camera and circle increases.  By rotating each macro-block vector according to its position in the frame relative to the center point and averaging the results, you get a circumferential component representing yaw change, and an axial component representing height change.

I think the way to approach this is first to get the lateral movement by simply averaging the macro-block vectors as now; the yaw and height components will cancel themselves out.

Then by shifting the contents of the macro-block frame by the averaged lateral movement, the axis is brought to the center of the frame – some macro-blocks will be discarded to ensure the revised macro-block frame is square around the new center point.

Each of the macro-block vectors is then rotated according to its position in the new square frame.  The angle of each macro-block in the frame is pretty easy to work out (e.g. a 4×4 square has rotation angles of 45, 135, 225 and 315 degrees; a 9×9 square has blocks to be rotated by 0, 45, 90, 135, 180, 225, 270 and 315 degrees), so averaging the X and Y axes of these rotated macro-block vectors gives a measure of yaw and size change (height).  I’d need to include distance from the center when averaging out these rotated blocks.
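
Roughly, and purely as a sketch of the idea (the grid layout and weighting here are my assumptions), the decomposition would look like this:

import numpy as np

def decompose(vx, vy):
    # vx, vy: 2D arrays of per-macroblock vectors (rows x columns)
    lateral_x, lateral_y = vx.mean(), vy.mean()           # simple average = lateral movement

    rows, cols = vx.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    px = xs - (cols - 1) / 2.0                            # block position relative to the center
    py = ys - (rows - 1) / 2.0
    radius = np.hypot(px, py)
    radius[radius == 0] = 1.0                             # avoid division by zero at the center

    rx, ry = vx - lateral_x, vy - lateral_y               # residuals once lateral shift is removed
    radial = (rx * px + ry * py) / radius                 # component along the center-to-block axis
    tangential = (-rx * py + ry * px) / radius            # component around the center

    zoom = radial.mean()                                  # size change => height change
    yaw = tangential.mean()                               # rotation about the camera axis
    return (lateral_x, lateral_y), yaw, zoom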

At a push, even pitch and roll could be obtained because they would distort the circle into an oval.

Yes, there’s calibration to do, there’s a dependency on textured multicoloured surfaces, and the accuracy will be very dependent on frame size and rate.  Nevertheless, in a perfect world, it should all just work(TM).  How cool would it be to have the Raspberry Pi camera providing this level of information!  No other sensors would be required except for a compass for orientation, GPS for location, and LiDAR for obstacle detection and avoidance.  How cool would that be!

Anyway, enough dreaming for now, back to the real world!

DisneyLand Paris :-(

We’re off on holiday tomorrow, so I’m leaving myself this note to record the state of play: the new A+ (512MB RAM, overclocked to 1GHz) is set up with Chloe’s SD card running March Jessie, renamed Hermione.  New PCBs have arrived and one is made up.  The new PCB is installed and has passed basic testing of I2C and PWM.

To do:

  • install LEDDAR and Pi Camera onto the underside
  • update python picamera to the new version
  • test motion.py on hermione
  • merge motion.py with X8.py
  • work out why udhcpd still doesn’t work on May Jessie-lite (different SD card)

Progress report

Here’s Chloe’s HoG in Hermione’s frame.  You can see I’ve put a large WiFi antenna on one of the side platforms; the other is for GPS in the future.  The frame itself is not quite complete – I still need to install a platform on the underside to hang the sensors off.  In addition, the LID (LiDAR Installation Desktop) needs assembling – it’s just here for show currently.

Chloe’s HoG in Hermione’s frame

Here’s a flight with just accelerometer and gyro in control for basic stability testing.

With these 1340 high pitch Chinese CF props, there’s no shortage of lift power despite the fact she weighs 2.8kg, so I’m going to defer the X8 format for a while on financial grounds – 4 new T-motor U3 motors and 40A ESCs costs nearly £400.

The PCBs are on order, and first setup will be for LEDDAR and PX4FLOW.

Oddly, only one of my PX4FLOWs works properly – for some reason, the newer one can’t see the URF, so it can’t provide velocities, only angular shift rates; however, LEDDAR will give me the height, allowing me to convert the angular rates to horizontal velocities.  If that works, it also opens up the possibility of replacing the PX4FLOW with a Raspberry Pi Camera, using the H.264 video macro-block increments to do the equivalent of the PX4FLOW motion processing myself, which, if possible, would please me greatly.

Still lurking in the background is getting the compass working to overcome the huge yaw you can see in the video.