Irritations and Innovations

No breakthroughs to report but:

  • Zoe is now running indoors safely with or without motion fusion installed
  • Without the fusion, she drifts horizontally and continues to rise during the hover phase: this suggests the value for gravity taken at takeoff has drifted during the flight, perhaps temperature-related?  It’s only about 15°C in our house currently, which is outside the range she works well in.  The first test is to add a blob of Blu-Tack on the IMU so it isn’t cooled by the breeze from the props.
  • With fusion, her height is much better, but she swings laterally around her takeoff point – the Garmin LiDAR-Lite is doing its job well, but there’s some tuning required for the lateral motion from the Raspberry Pi camera.  Also, it’s dark in the play room even with the lighting on, so I’m going to add LED lighting under her motors to give the camera better sight.  She’s flying over an IKEA LEKPLATS play mat, but ours seems very faded, so I’ll be getting her a new one.
  • I’ve added a whole bunch of safety trip wires so that, for example, if she’s 50cm above where the flight plan says she should be, the flight dies.  Together these make her much safer for flights indoors.
  • I’ve added enhanced scheduling to prioritise IMU over camera input when the IMU FIFO is reading half-full; this is to prevent FIFO overflows as camera processing sometimes takes a while, and the overflows have been happening a lot recently.
  • I’ve also added another couple of pairs of PIDs – I’m not sure how I got away without them before.  The equivalent pair controls yaw perfectly, but the pitch and roll angle stages were missing, skipping straight to the rotation rates instead.  The full cascade now looks like this (there’s a sketch of it in code after the list):
    • distance (target – input) =PID=> corrective velocity target
    • velocity (target – input) =PID=> corrective acceleration target
    • acceleration target => angular target (maths to choose an angle for a desired acceleration)
    • angle (target – input) =PID=> corrective rotation target
    • rotation (target – input) =PID=> PWM output
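
To make the cascade concrete, here’s a minimal sketch of how the stages chain together for one horizontal axis.  The PID class, the gains and the names are illustrative assumptions, not the actual flight code:

import math

class PID:
    # Minimal PID for illustration only - no windup protection or output limits.
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = None

    def update(self, target, actual, dt):
        error = target - actual
        self.integral += error * dt
        derivative = 0.0 if self.last_error is None else (error - self.last_error) / dt
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

GRAVITY = 9.80665  # m/s^2

# one horizontal axis of the cascade - the gains are made up
distance_pid = PID(1.0, 0.0, 0.0)
velocity_pid = PID(1.5, 0.0, 0.0)
angle_pid    = PID(4.0, 0.0, 0.0)
rotation_pid = PID(120.0, 0.0, 0.0)

def cascade(distance_target, distance, velocity, angle, rotation_rate, dt):
    velocity_target = distance_pid.update(distance_target, distance, dt)
    acceleration_target = velocity_pid.update(velocity_target, velocity, dt)
    # maths to choose a tilt angle for the desired horizontal acceleration
    angle_target = math.atan2(acceleration_target, GRAVITY)
    rotation_target = angle_pid.update(angle_target, angle, dt)
    return rotation_pid.update(rotation_target, rotation_rate, dt)  # feeds the PWM output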

Together all these changes require a lot of tuning, tinkering and testing; I hope to report back with a video when there’s something worth sharing.

Testing distances

I took Hermione out for a quick flight again in X8 format with her monstrous new LiPo; the I2C I/O error kicked in just before 3.5 seconds, but not before she’d acquired hover state.  The test was driven by the IMU alone, but with the camera and Garmin LiDAR-Lite V3 tracking lateral and vertical distances, and I’m pretty impressed.  Compare and contrast these two graphs:

Distance tracking

Distance tracking

The Garmin height is accurate – the scale difference is due to the accelerometer reading just over 1g on the ground, so based on that and the double integration to get distance, Hermione thinks she’s higher than she is.

The Camera horizontal is pretty good too – I probably just need to turn the gain down a touch.

This bodes well for tomorrow when the capacitors arrive (thanks for the nudge in the right direction, Jean) which should decouple the Garmin power-lines from the I2C lines, and thereby (hopefully) kill the I2C errors.

3 meter square

I added some filtering to the macroblock vector output, including only those vectors with individual SAD values less than half the overall SAD average for that frame.  I then took the rig for a walk around a 3m square (measured fairly accurately), held approximately 1m above the ground.  The graph goes anti-clockwise horizontally from 0,0.  The height probably descended a little towards the end, hence the overrun from >1000 to <0 vertically.
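
For what it’s worth, the SAD filter amounts to something like this.  It assumes each macroblock arrives as a (dx, dy, sad) triple from the H.264 motion output; the function name and shapes are mine, not the actual code:

def filter_macroblocks(blocks):
    # blocks: list of (dx, dy, sad) tuples for one frame.
    # Keep only vectors whose SAD is under half the frame's average SAD,
    # then return their average (dx, dy), or None if nothing qualifies.
    if not blocks:
        return None
    average_sad = sum(sad for _, _, sad in blocks) / len(blocks)
    good = [(dx, dy) for dx, dy, sad in blocks if sad < 0.5 * average_sad]
    if not good:
        return None
    return (sum(dx for dx, _ in good) / len(good),
            sum(dy for _, dy in good) / len(good))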

3m square walk

This is pretty much perfect, and more than good enough to work out the scale from macroblock vectors to meters:

1000 (vectors) = 3 (meters) @ 1m height or

meters = (0.003 * height) vectors

Previously, I’d tried to calculate the macroblock-vector-to-meter scale as follows, where ‘macroblocks per frame’ is the frame width in pixels divided by the macroblock size (400 / 16):

meters = (2 * height * tan(lens angle / 2) / macroblocks per frame) vectors
meters = (2 * height * tan(48.4 / 2) / (400 / 16)) vectors
meters = (0.036 * height) vectors

Clearly my mathematical guess was wrong by a factor of roughly 12.  The measurement errors for the test run were < 3% horizontally, and perhaps 5% vertically, so they don’t explain the discrepancy.

I needed to investigate further how the vector values scale with frame size, so I upped the video frame from 400 x 400 pixels to 600 x 600 pixels:

(3m @ 600 pixels)²

The 1000 x 1000 vector square has now turned nicely into a 1500 x 1500 vector square corresponding to the 400 x 400 to 600 x 600 frame size change.  So again,

1500 (vectors) = 3 (meters) @ 1m height or

meters = (0.002 * height) vectors

So, unsurprisingly, the meters-per-vector scale is inversely proportional to the frame size:

meters = (scale * height / frame width) * vectors
meters = (1.2 * height / frame width) * vectors
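
As a function, the empirical conversion is just this; the 1.2 is the measured constant from above and the argument names are my own:

def vectors_to_meters(vectors, height, frame_width):
    # vectors:     accumulated macroblock vector component
    # height:      height above the ground in meters
    # frame_width: video frame width in pixels (400, 600, ...)
    return 1.2 * height * vectors / frame_width

# sanity check: 1000 vectors at 1m height with a 400-pixel-wide frame
# 1.2 * 1.0 * 1000 / 400 = 3.0 meters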

But what does this 1.2 represent?  I’ve done some thinking, so far to no avail, but I think I’ve found a way to avoid needing to know.  That’ll be my next post.

Everything but the kitchen sink from macro blocks?

Currently the macro blocks are just averaged to extract lateral distance increment vectors between frames.  Because of the fixed frame rate, these also give velocity increments, and both can be integrated to produce absolutes.  But I suspect there’s even more information available.
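
In other words, with a fixed frame rate each averaged vector doubles as a velocity sample, and summing them gives absolute distance.  A toy sketch (the frame rate and sample values are invented):

FRAME_RATE = 20.0                       # frames per second - illustrative
dt = 1.0 / FRAME_RATE

# averaged macroblock vector per frame, e.g. from the SAD filter earlier
frames = [(2.0, 0.0), (1.5, 0.5), (1.0, 1.0)]

x = y = 0.0                             # integrated position in 'vector' units
for frame_dx, frame_dy in frames:
    vx, vy = frame_dx / dt, frame_dy / dt   # fixed frame rate => a velocity for that frame
    x += frame_dx                           # integrate the increments => absolute distance
    y += frame_dy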

Imagine videoing an image like this multi-coloured circle:

Newton Disc

It’s pretty simple to see that if the camera moved laterally between two frames, it would be straightforward for the video compression algorithm to break the change down into a set of identical macro-block vectors, each showing the direction and distance the camera had shifted between frames.  This is what I’m doing now by simply averaging the macro-blocks.

But imagine the camera doesn’t move laterally; instead it rotates, and the distance between camera and circle increases.  By rotating each macro-block vector according to its position in the frame relative to the center point and averaging the results, you get a circumferential component representing yaw change and a radial component representing height change.

I think the way to approach this is first to get the lateral movement by simply averaging the macro-block vectors as now; the yaw and height components will cancel themselves out.

Then by shifting the contents of the macro-block frame by the averaged lateral movement, the axis is brought to the center of the frame – some macro-blocks will be discarded to ensure the revised macro-block frame is square around the new center point.

Each of the macro-block vectors is then rotated according to its position in the new square frame.  The angle of each macro-block in the frame is pretty easy to work out (e.g. the four blocks of a 2×2 square sit at rotation angles of 45, 135, 225 and 315 degrees; the eight outer blocks of a 3×3 square at 0, 45, 90, 135, 180, 225, 270 and 315 degrees), so averaging the X and Y axes of these rotated macro-block vectors gives a measure of yaw and of size change (height).  I’d need to include distance from the center when averaging out these rotated blocks.
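
A sketch of that decomposition, assuming the macroblock vectors come indexed by their (column, row) position in the re-centred square frame; the weighting by distance from the center and the sign conventions are guesses on my part:

import math

def decompose(blocks, grid_size):
    # blocks: dict mapping (col, row) grid position -> (dx, dy) macroblock vector
    # grid_size: number of macroblocks along each side of the square frame
    n = len(blocks)

    # 1. lateral movement: plain average - the yaw and height components cancel out
    lateral_x = sum(dx for dx, _ in blocks.values()) / n
    lateral_y = sum(dy for _, dy in blocks.values()) / n

    yaw_sum = height_sum = weight_sum = 0.0
    center = (grid_size - 1) / 2.0
    for (col, row), (dx, dy) in blocks.items():
        px, py = col - center, row - center      # block position relative to frame center
        radius = math.hypot(px, py)
        if radius == 0:
            continue
        rx, ry = dx - lateral_x, dy - lateral_y  # strip out the lateral component
        ux, uy = px / radius, py / radius        # unit radial direction for this block
        height_sum += (rx * ux + ry * uy) * radius   # radial part => size / height change
        yaw_sum += (ry * ux - rx * uy) * radius      # tangential part => yaw change
        weight_sum += radius

    if weight_sum == 0.0:
        return (lateral_x, lateral_y), 0.0, 0.0
    return (lateral_x, lateral_y), yaw_sum / weight_sum, height_sum / weight_sum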

At a push, even pitch and roll could be obtained because they would distort the circle into an oval.

Yes, there’s calibration to do, there’s a dependency on textured, multicoloured surfaces, and the accuracy will be very dependent on frame size and rate.  Nevertheless, in a perfect world, it should all just work(TM).  How cool would it be to have the Raspberry Pi camera providing this level of information!  No other sensors would be required except for a compass for orientation, GPS for location, and LiDAR for obstacle detection and avoidance.

Anyway, enough dreaming for now, back to the real world!

Disneyland Paris :-(

We’re off on holiday tomorrow, so I’m leaving myself this note to record the state of play: the new A+ (512MB RAM, overclocked to 1GHz) is set up with Chloe’s SD card running March Jessie, renamed Hermione.  New PCBs have arrived and one is made up; it is installed and has passed basic testing of I2C and PWM.

To do:

  • install LEDDAR and Pi Camera onto the underside
  • update python picamera to the new version
  • test motion.py on hermione
  • merge motion.py with X8.py
  • work out why udhcpd still doesn’t work on May Jessie-lite (different SD card)

Progress report

Here’s Chloe’s HoG in Hermione’s frame.  You can see I’ve put a large WiFi antenna on one of the side platforms; the other is for GPS in the future.  The frame itself is not quite complete – I still need to install a platform on the underside to hang the sensors off.  In addition, the LID (LiDAR Installation Desktop) needs assembling – it’s just here for show currently.

Chloe’s HoG in Hermione’s frame

Here’s a flight with just accelerometer and gyro in control for basic stability testing.

With these 1340 high-pitch Chinese CF props, there’s no shortage of lift power despite the fact she weighs 2.8kg, so I’m going to defer the X8 format for a while on financial grounds – 4 new T-Motor U3 motors and 40A ESCs cost nearly £400.

The PCBs are on order, and first setup will be for LEDDAR and PX4FLOW.

Oddly, only one of my PX4FLOWs works properly – for some reason, the newer one can’t see the URF, so it can’t provide velocities, only angular shift rates; however, LEDDAR will give me the height, allowing me to convert the angular rates to horizontal velocities.  If that works, it also opens up the possibility of replacing the PX4FLOW with a Raspberry Pi Camera, using the H.264 video macro-block increments to do the equivalent of the PX4FLOW motion processing myself, which, if possible, would please me greatly.

Still lurking in the background is getting the compass working to overcome the huge yaw you can see in the video.


Feeding frenzy

My hunger to shop has just been sated: 2 new Pi Zeros + the new 8-megapixel camera module + the Zero camera cable + a Unicorn HAT and diffuser plate.

Yes, you heard me right, the latest Pi Zeros now have a camera slot.

One Zero is for showing off at the next Cotswold Jam, along with the new higher-resolution 8 megapixel camera module.  Perhaps set up as a tiny onesie cam?

The other is for Zoe – and maybe another camera too, once I’ve overcome the problem with WAPping the latest Jessie, the use of which is mandatory if I want to use the new higher-res camera with her, purely for FPV videos.  The new camera slot points horizontally, which is great for retaining the low profile of the Pi Zero.

The unicorn and diffuser are mostly just for playing with on my main Pi.

The next frenzy will start when they launch the A3, which I’ll be transferring Phoebe to for the extra CPU cores and processor speed needed for the two LiDAR units I’ll eventually be installing.

Until then, I’ll be working on repaying my overdraft!

3 point laser tracking

This is a long detailed post – grab a cuppa if you intend to plough through.

Here’s the plan for Phoebe.

There are 3 downward-facing red lasers.  Two are attached to the underside of her rear arms, both pointing in parallel along Phoebe’s Z axis, i.e. if Phoebe is horizontal then the laser beams point vertically downwards.  The third laser is handheld by a human.  All are probably 5mW / Class 2 – although the power rating may need to be reduced to conform with legislation, which is unclear.  5mW is safe due to the human blink reaction; 1mW is safe as long as it’s not viewed through a focusing device such as a lens.

The RaspiCam with NoIR filter is fitted in the center of Phoebe’s lower plate, also facing along her Z axis.  A red camera-style gel filter is fitted over it in the expectation that this will increase the contrast between the laser dots and the rest of the background.  The camera is set to ISO 800 – its maximum sensitivity.  The photos are low resolution to reduce the processing required.  Each shot is taken in YUV mode, meaning the first part of the photo data is the luminance / brightness information.  Photos are taken as fast as possible, which may actually be only a few per second due to the lighting conditions.  The camera code runs on a separate thread from Phoebe’s main flight control code.
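
The camera configuration amounts to something like the following with the picamera library; the resolution, frame rate and output destination here are placeholders rather than the final values:

import picamera

camera = picamera.PiCamera()
camera.resolution = (64, 64)     # low resolution keeps the per-shot processing light
camera.framerate = 10            # the real rate depends on the lighting conditions
camera.iso = 800                 # maximum sensitivity

# grab the raw YUV data; the Y (luminance) plane comes first in the buffer
with open('/dev/shm/laser.yuv', 'wb') as stream:
    camera.capture(stream, format='yuv', use_video_port=True)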

A typical flight takes place as follows:

Immediately prior to each flight, the camera is turned on and feeds its output into an OS FIFO.

Quadcopter take-off code is unchanged using the standard flight plan configuration to attain an approximation of the desired hover height (e.g. 3s at 0.3m/s gives roughly 90cm hover height).

Once at hover, on each motion processing loop, the motion processing code checks whether the FIFO is empty; if it isn’t, the FIFO is emptied and the last batch of camera data (i.e. the most recent shot taken) is processed.

The shot is scanned for bright dots and their positions in the frame are stored.  Because of the red filter and the Y channel (brightness / contrast / luminance of YUV) from the camera, and because the lasers are fixed in Phoebe’s frame with respect to the camera, the dots should stand out in the photo and lie between the center and the bottom corners of the photo.  If bright spots are detected in this area, there is a very high level of confidence that these are the red dots from the frame lasers.  The separation between the dots in pixels is inversely proportional to the actual height in meters, based upon the camera lens focal length.
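
Dot detection on the Y plane can be as simple as a brightness threshold plus a crude clustering pass.  Here’s a sketch under those assumptions (the threshold, cluster radius and function name are placeholders):

def find_bright_dots(y_plane, width, height, threshold=240, radius=5):
    # y_plane: bytes/bytearray of width * height luminance values
    # returns a list of (x, y) dot centers
    dots = []                                       # each dot: {'x': sum, 'y': sum, 'n': count}
    for y in range(height):
        for x in range(width):
            if y_plane[y * width + x] < threshold:
                continue
            for dot in dots:                        # merge into a nearby existing dot...
                if (abs(x - dot['x'] / dot['n']) < radius and
                        abs(y - dot['y'] / dot['n']) < radius):
                    dot['x'] += x; dot['y'] += y; dot['n'] += 1
                    break
            else:                                   # ...or start a new one
                dots.append({'x': float(x), 'y': float(y), 'n': 1})
    return [(d['x'] / d['n'], d['y'] / d['n']) for d in dots]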

This pixel separation is saved on the first pass of the hover phase and used thereafter as the height target; any deviation of the dot separation from the target separation means a height error, which is fed as a target change to the vertical velocity PID.

Once the height is processed as above, any third similarly bright dot is assumed to be from the human laser.  If such a dot is not found within 5 seconds, the code moves irreversibly to descent mode.

However, if a third dot is found in that 5s period, then its position relative to the frame laser dots provides targets to (a rough sketch of the geometry follows the list):

  • the yaw PID, so that the 3 dots form an isosceles triangle with the quad laser dots at the base and the human dot at the peak
  • the horizontal velocity PID, so that the 3 dots form an equilateral triangle.
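
Roughly, the three dots reduce to a yaw error (how far the human dot sits off the perpendicular bisector of the frame-dot base) and a distance error (how far it is from the equilateral spacing).  The axis and sign conventions below are assumptions on my part:

import math

def triangle_errors(frame_dot_1, frame_dot_2, human_dot):
    # all arguments are (x, y) pixel coordinates in the camera frame
    ax, ay = frame_dot_1
    bx, by = frame_dot_2
    hx, hy = human_dot

    mx, my = (ax + bx) / 2.0, (ay + by) / 2.0          # midpoint of the base
    base_length = math.hypot(bx - ax, by - ay)

    base_angle = math.atan2(by - ay, bx - ax)
    to_human_angle = math.atan2(hy - my, hx - mx)

    # isosceles: the human dot should lie on the base's perpendicular bisector
    yaw_error = to_human_angle - (base_angle + math.pi / 2)
    yaw_error = math.atan2(math.sin(yaw_error), math.cos(yaw_error))   # wrap to +/- pi

    # equilateral: the peak should be sqrt(3)/2 of the base length from the midpoint
    distance_error = math.hypot(hx - mx, hy - my) - (math.sqrt(3) / 2) * base_length
    return yaw_error, distance_error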

Loss of the human dot returns to frame-laser-dot mode for 5 seconds to reacquire the lost human dot; if it isn’t found, the irreversible standard descent mode based upon the flight plan alone is triggered.

Similarly, loss of either of the frame laser dots triggers irreversible standard descent mode but without any wait for reacquisition of the missing frame dot.

This should provide stable hover flight for as long as the battery lasts, with the option of following the human dot to take the Quad for a “walk” on a “laser leash”.

Sounds like a plan, doesn’t it?  Some details and concerns, primarily so I don’t forget:

  • -l per-flight command line control of laser tracking mode
  • MOSFETs to switch on the lasers based on the above?  Depends on whether GPIO pins can drive two 10mA lasers directly (a sketch of the MOSFET option follows this list)
  • Is a new PCB needed to expose GPIO switch pin for lasers? If so, don’t forget the pull down resistor!
  • The prototype can be done in complete isolation from Phoebe, using one of my many spare RPis along with some LEGO and my as-yet-unused PaPiRus e-paper screen to show the dot location.  This could then easily be battery powered for testing.
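
If the MOSFET route is taken, the switching itself is trivial from Python; the pin number here is arbitrary and RPi.GPIO is just one way of doing it:

import RPi.GPIO as GPIO

LASER_GATE_PIN = 25                 # arbitrary GPIO pin driving the MOSFET gate

GPIO.setmode(GPIO.BCM)
# the pull-down resistor keeps the lasers off while the pin is undriven at boot
GPIO.setup(LASER_GATE_PIN, GPIO.OUT, initial=GPIO.LOW)

def lasers(on):
    GPIO.output(LASER_GATE_PIN, GPIO.HIGH if on else GPIO.LOW)

lasers(True)                        # lasers on for the flight
# ... flight ...
lasers(False)
GPIO.cleanup()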

Constraints:

  • Merging a successful prototype into Phoebe requires an ‘A3’ and a rework of the PSU – currently, feeding 5V directly into the RPi backfeeds the regulator (which would normally be taking input from the LiPo), causing it to heat up significantly
  • None of this can take place before I’ve finished and bagged up the GPIO tutorial for the next Cotswold Jam on 30th April.

The future’s bright; the future’s orange!

Well, tangerine actually!

With the IMU FIFO code taking the pressure off critical timings for reading the IMU, a huge range of options have opened up for what to do next with the spare time the FIFO has created:

  • A simple keyboard remote control is tempting, where the QC code polls stdin periodically during a hover phase and amends the flight plan dynamically; ultimately, I’d like to do this via a touch-screen app on my Raspberry Pi providing joystick buttons (there’s a stdin-polling sketch after this list).  However, for this to work well, I really need to add further sensor input providing feedback on longer-term horizontal and vertical motion…
  • A downward-facing Ultrasonic Range Finder (URF) would provide the vertical motion tracking when combined with angles from the IMU.  I’d looked at this a long time ago, but it stalled as I’d read the sensors could only run at up to a 100kbps I2C baudrate, which would prevent use of the higher 400kbps required for reading the FIFO.  However, a quick test just now shows the URF working perfectly at 400kbps.
  • A downward-facing RPi camera, when combined with the URF, would provide horizontal motion tracking.  Again, I’d written this off due to the URF, but now it’s worth progressing with.  This is the Kitty++ code I started looking at during the summer and abandoned almost immediately, due both to the lack of spare time in the code and the need for extra CPU cores to do the camera motion processing; Chloe, with her Raspberry Pi B2 and her tangerine PiBow case, more than satisfies that requirement now.
  • The magnetometer / compass on the IMU can provide longer term yaw stability which currently relies on just the integrated Z-axis gyro.
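
For the keyboard control idea in the first bullet, polling stdin without blocking the motion loop could look like this with select(); the commands themselves are invented for illustration:

import select
import sys

def poll_keyboard(timeout=0.0):
    # returns one line of keyboard input if any is waiting, otherwise None;
    # select() means the motion processing loop never blocks waiting for a key
    readable, _, _ = select.select([sys.stdin], [], [], timeout)
    if readable:
        return sys.stdin.readline().strip()
    return None

# hypothetical use inside the hover phase:
# command = poll_keyboard()
# if command == 'w':     # nudge the flight plan forwards
#     ...
# elif command == 'q':   # bail out to descent
#     ...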

I do also have barometer and GPS sensors ‘in stock’, but their application is primarily for long-distance flights over variable terrain at heights above the range of the URF.  This is well out of scope for my current aims, so for the moment, I’ll leave the parts shelved.

I have a lot of work to do, so I’d better get on with it.