No breakthroughs to report but:
- Zoe is now running indoors safely with or without motion fusion installed
- Without the fusion, she drifts horizontally and continues to rise during the hover phase: this suggests the value for gravity at takeoff has drifted during the flight, perhaps temperature related? It’s only about 15°C in our house currently, which is outside the range she works well in. First test is to add a blob of Blu Tack on the IMU so it isn’t cooled by the breeze from the props.
- With fusion, her height is much better, but she swings laterally around her takeoff point – the Garmin LiDAR Lite is doing its job well but there’s some tuning required for the lateral motion from the Raspberry Pi camera. Also it’s dark in the play room, even with the lighting on, so I’m going to add LED lighting under her motors to give the camera better sight. She’s flying over an IKEA LEKPLATS play mat, but ours seems very faded, so I’ll be getting her a new one.
- I’ve added a whole bunch of safety trip wires so that, for example, if she’s 50cm above where the flight plan says she should be, the flight dies. Together these make her much safer for flights indoors.
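One such tripwire might look like this minimal sketch (the names and the way the check is wired into the flight loop are my assumptions, not the actual code):

```python
# Hypothetical safety tripwire: kill the flight if the actual height
# strays more than 50cm above what the flight plan says it should be.
HEIGHT_TRIP_M = 0.5

def height_tripwire(target_height_m, actual_height_m):
    """Return True if the flight should be aborted."""
    return (actual_height_m - target_height_m) > HEIGHT_TRIP_M
```

In practice several such checks would run every motion-processing pass, any one of which shuts the props down.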
- I’ve added enhanced scheduling to prioritise IMU over camera input when the IMU FIFO reads half-full; this is to prevent FIFO overflows, as camera processing sometimes takes a while and the overflows have been happening a lot recently.
- I’ve also added another couple of pairs of PIDs – I’m not sure how I got away without them before. The equivalent pair controls yaw perfectly, but the pitch and roll angle PIDs were missing, skipping straight from acceleration targets to the rotation rates instead.
- distance (target – input) =PID=> corrective velocity target
- velocity (target – input) =PID=> corrective acceleration target
- acceleration target => angular target (maths to choose an angle for a desired acceleration)
- angle (target – input) =PID=> corrective rotation target
- rotation (target – input) =PID=> PWM output
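The chain above could be sketched in Python like this (gains, names and the angle maths are illustrative assumptions; the real gains come from flight tuning):

```python
import math

class PID:
    """Minimal P-only controller for brevity; I and D terms omitted."""
    def __init__(self, kp):
        self.kp = kp
    def compute(self, target, inp):
        return self.kp * (target - inp)

# Hypothetical gains - the real values come from tuning flights.
distance_pid = PID(1.0)
velocity_pid = PID(1.0)
angle_pid = PID(1.0)
rotation_pid = PID(1.0)

GRAVITY = 9.80665  # m/s^2

def cascade(distance_target, distance_in, velocity_in, angle_in, rotation_in):
    velocity_target = distance_pid.compute(distance_target, distance_in)
    accel_target = velocity_pid.compute(velocity_target, velocity_in)
    # "maths to choose an angle for a desired acceleration": one common
    # choice is the tilt whose horizontal component of thrust gives the
    # target acceleration, i.e. atan(a / g).
    angle_target = math.atan2(accel_target, GRAVITY)
    rotation_target = angle_pid.compute(angle_target, angle_in)
    return rotation_pid.compute(rotation_target, rotation_in)  # -> PWM output
```

Each stage's output becomes the next stage's target, which is what makes the nested pairs tune independently.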
Together all these changes require a lot of tuning, tinkering and testing; I hope to report back with a video when there’s something worth sharing.
Nothing exciting to report I’m afraid – still waiting for the rain to stop to take the girls out. So I thought I’d show you what they look like. Both are now kitted out with LiDAR and RaspiCam for vertical and lateral distance sampling. They run the same software – the code enables 8 ESC objects if the hostname is hermione.local or 4 otherwise:
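The hostname check might be as simple as this sketch (hypothetical function name; only the hermione.local / 8-vs-4 logic comes from the text above):

```python
import socket

def esc_count(hostname=None):
    """Hermione is the octocopter (8 ESCs); everyone else gets 4."""
    if hostname is None:
        hostname = socket.getfqdn()
    return 8 if hostname == "hermione.local" else 4
```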
As you may have guessed, Hermione’s lid is a salad bowl from Italy – I spotted one we had and it fitted perfectly, so I got myself an orange one – £5 + £15 shipping to avoid the Italian postal service stealing it in transit. The bowl is concave on the underside (in its role as a salad bowl). Luckily, its 10cm diameter matches some black acrylic disks I have (one of which is on Hermione’s underside, to which the Garmin LiDAR Lite and the RaspiCam attach). Ultimately, the Scanse Sweep will be attached to the top black disc.
On that train of thought, I need to get a move on as the Scanse Sweep could well arrive before Christmas according to the latest update note.
Courtesy of a discussion with 6by9 on the Raspberry Pi forum, the motion tracking for Zoe is now working using raspivid which has an option to not buffer the macro-block output. Here’s a plot of a passive flight where she moves forward and back (X) a couple of times, and then left and right (Y).
Macro-block vs accelerometer
The plot clearly shows the macro-blocks and accelerometer X and Y readings are in sync (if not quite at the same scale), so tomorrow’s the day to set Zoe loose in the garden with the motors powered up – fingers crossed no cartwheels across the lawn this time!
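For reference, raspivid’s inline motion vector output is a stream of fixed 4-byte records per macro-block: a signed byte each for the x and y displacements plus a 16-bit SAD (sum of absolute differences) value. A sketch of summing one frame’s worth of blocks into an overall motion estimate (the summing approach is my illustration, not necessarily what Zoe does):

```python
import struct

# One raspivid inline motion vector record:
#   int8 x, int8 y (macro-block displacement), uint16 SAD.
MV_FORMAT = "<bbH"
MV_SIZE = struct.calcsize(MV_FORMAT)  # 4 bytes

def sum_frame_motion(frame_bytes):
    """Sum x and y displacements across all macro-blocks in one frame."""
    x_total = y_total = 0
    for offset in range(0, len(frame_bytes), MV_SIZE):
        x, y, _sad = struct.unpack_from(MV_FORMAT, frame_bytes, offset)
        x_total += x
        y_total += y
    return x_total, y_total
```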
I flew Zoe over the weekend with the camera motion running, and it was pretty exciting watching her cartwheel across the lawn – clearly there’s more to do in the testing before I try again!
So I did a passive flight in the ‘lab’ just now and got these stats; I had hoped to show a comparison of the accelerometer measured distances vs the camera video distances, but that’s not what I got:
The lower 2 graphs are the interesting ones: the left one shows how bad the integration error is with the accelerometer – all the details in the accelerometer are swamped by integrated offset errors. It also shows that we are only getting data from the video every four seconds.
So I did a test with my raw motion code (i.e. no overheads from the other sensors in the quadcopter code), and it showed those 4 second batches contain 39 samples, so clearly there’s some buffering of the 10 Hz video frame rate as configured.
So next step is to work out how to identify whether it’s the FIFO or the video that’s doing the buffering, and how to stop it!
Working camera tracking
These two show the motion tracking results, and the motion processing loop intervals.
Each spike in the dt (time interval) graph happens at about 10Hz (the frame rate for the video) and shows the camera data processing; this is only triggered when there is camera motion data available, not simply based on a 10Hz timer. As you can see this is there from the start. The horizontal units here are the number of motion processing loops i.e. the spikes start right from the word go, not after 4 seconds as suggested by the top graph.
The silence in the top graph’s first 4 seconds thus suggests the camera macro-blocks simply are not detecting motion at this point. The flight plan is a 2-second take-off, 12-second hover and 2-second descent, but I had to abort it after 5 seconds due to instability: Zoe is flapping around both pitch (blue) and roll (orange). The instability grows during the flight, so the 4-second point is likely to be where the low-resolution camera macro-blocks actually start to see the instability.
Currently, the camera motion processing is still not fed into the PIDs, so the flapping is not caused by the camera motion. Certainly, the next step is to include this in the PIDs and see if this actually works.
Finally, there’s one fly in the ointment: the IMU FIFO overflow triggers every other flight; this is almost certainly not due directly to the camera itself, but to how I’m managing the FIFO and its overflow interrupt; the first couple of attempts to control this have failed, so I’ll have to keep stabbing in the dark.
Zoe the videographer
You can just see the brown video cable looping to the underside of the frame where the camera is attached.
She’s standing on a perspex box as part of some experimenting as to why this happens:
Lazy video feed
It’s taking at least 4.5 seconds before the video feed kicks in (if at all). Here the video data is only logged. What’s plotted is the cumulative distance; what’s shown is accurate in time and space, but I need to investigate further why there’s a delay. It’s definitely not to do with starting up the camera video process – I already have prints showing when it starts and stops, and those happen at the right time; it’s either related to the camera processing itself, or to how the FIFO works. More anon as I test my ideas.
Back from DisneyLand where it was 35°C in the shade. It actually turned out to be fun, even cool at times, and gave me plenty of thinking time, the net result of which is that I’ve changed the main scheduling loop, which now:
- polls the IMU FIFO to check how many batches of sensor data are queued up there; the motion processing runs every 10 batches
- if the IMU FIFO has less than 10 batches, select.select() is called, listening on the OS FIFO of the camera macro-block collection process; the timeout for the select.select() is based upon the IMU sampling rate, and the number of IMU FIFO batches required to reach 10.
- The select.select() wakes either because
- there are now >=10 batches of IMU FIFO data present, triggering motion processing
- there are macro-block data on the OS FIFO, which updates the lateral PID distance and velocity input.
Even without the camera in use, this improves the scheduling because now the motion processing happens every 10 batches of IMU data, and it doesn’t use time.sleep() whose timing resulted in significant variation in the number of IMU FIFO batches triggering motion processing.
I’m taking this integration carefully step by step because an error could lead to disastrous, hard to diagnose behaviour. Currently the camera FIFO results are not integrated with the motion processing, but instead are just logged. I hope during the next few days I can get this all integrated.
Note that due to some delivery problems, this is all being carried out on Zoe with her version 2 PiZero.
Update: initial testing suggests a priority problem: motion processing now takes nearly 10ms, which means the code doesn’t reach the select.select() call, but instead simply loops on motion processing. This means that when the OS FIFO of macro-blocks finally gets read, there are possibly several sets, backed up and out of date. I’ll change the scheduling to prioritise reading the OS FIFO and allow the IMU FIFO to accumulate more samples.
Another walk up the side of the house, but then walking a square as best I could, finishing where I started, and as you can see, the camera tracked this amazingly well – I’m particularly delighted the start and end points of the square are so close. Units are pretty accurate too.
I’m now very keen for Hermione’s parts to arrive, as I suspect this is going to work like a dream, both stabilising long term hover, and also allowing accurate traced flight plans with horizontal movement. Very, very excited!
Shame about the trip to DisneyLand Paris next week – I’m not going to get everything done before then, which means Disney is going to be more of a frustrating, annoying waste of my time than usual!
Phoebe’s delicate underside
1. LEDDAR vs URF + Camera
2. 90 degree rotation
3. No antenna
4. Unplugged ESCs
Phoebe has got the PiCamera and the SRF02 Ultrasonic Range Finder installed on her underside; legs are back in place to achieve both camera focus and URF minimum range on the ground.
Camera’s working fine though the software is proving tricky for using the camera to do laser dot following or motion tracking MP4 encoding.
URF isn’t working; i2cdetect -y 1 sees the sensor, but a write to send an ultrasonic ping just blocks. There are a couple of possible causes: the URF is running on 3.3V rather than the 5V defined by the spec, and the I2C bus is running at 400kbps instead of the URF’s supported 100kbps.
I can’t drop the I2C baudrate – I have 1000 batches of 12-byte samples to read each second from the IMU FIFO, and even ignoring the I2C protocol overhead that requires 96kbps – not a cat in hell’s chance of running the IMU and the URF together at that baudrate.
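The arithmetic behind that claim is worth spelling out: payload alone eats 96% of a 100kbps bus before any I2C protocol overhead.

```python
# IMU FIFO throughput vs I2C bus capacity (protocol overhead ignored).
BATCHES_PER_SEC = 1000
BYTES_PER_BATCH = 12   # e.g. 6 axes x 2 bytes each

imu_bps = BATCHES_PER_SEC * BYTES_PER_BATCH * 8  # bits per second
urf_max_bus_bps = 100_000   # URF's supported I2C baudrate
current_bus_bps = 400_000   # baudrate the IMU is actually run at

print(imu_bps)                    # 96000
print(imu_bps / urf_max_bus_bps)  # 0.96 - 96% of the slow bus, payload alone
```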
The voltage is fixable with some 2-way level adjustors like these, but these are going to need a PCB rework.
Stuck again for the moment.
*much like a hedgehog