Sorry it’s been quiet; the weather’s been awful, so no videos to show. Instead, I’ve been tinkering to ensure the code is as good as it can be prior to moving on to object avoidance / maze tracking.
Zoe is back to life once more to help test the “face the direction you’re flying” tweaks, as these don’t quite work yet; she’s back because my first few attempts with ‘H’ kept breaking props. First job for ‘Z’ was to have her run the same code as Hermione, but with the autopilot moved back inline to keep the number of processes as low as possible on her single-CPU Pi0W compared with Hermione’s four-core 3B.
- I’ve started running the code with ‘sudo python -O ./qc.py’ to enable optimisation. This disables assertion checking and, hopefully, other overheads for better performance.
- I’ve tweaked the Butterworth parameters to track gravity changes faster, as Zoe’s IMU is exposed to the cold winds and her accelerometer values rise rapidly (there’s a sketch of the filtering after this list).
- I’ve refined the Garmin LiDAR-Lite V3 handling to cope with the occasional ‘no reading’ caused by no laser reflection being detected; this does happen occasionally (and correctly) if she’s tilted and the surface bouncing the laser points the wrong way.
- I’d also hoped to add a “data ready interrupt” to the LiDAR to reduce the number of I2C requests made; however, the interrupts still don’t happen despite trying all 12 config options. I think the problem is Garmin’s, so I’m awaiting a response from them on whether this flaw is fixed in a new model to be launched in the next few weeks. In the meantime, I only call the GLL over I2C when there are video results which need the GLL vertical height to convert the video pixel movements into horizontal movement in metres (see the second sketch after this list).
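To give a feel for the gravity tracking, here’s a minimal sketch of that kind of Butterworth low-pass filter using scipy; the cutoff and order are illustrative placeholders, not the values actually tuned in qc.py:

# Sketch only: low-pass filter accelerometer samples to track gravity.
# The cutoff (0.1Hz) and order (2) are illustrative, not the qc.py values.
from scipy.signal import butter, lfilter

def gravity_filter(accel_samples, sample_rate_hz, cutoff_hz=0.1, order=2):
    # Normalise the cutoff to the Nyquist frequency as scipy expects.
    b, a = butter(order, cutoff_hz / (sample_rate_hz / 2.0), btype='low')
    # Raising cutoff_hz makes the gravity estimate follow (thermal) drift
    # faster, at the cost of letting more flight acceleration leak in.
    return lfilter(b, a, accel_samples, axis=0)

# e.g. gravity = gravity_filter(accel_history, 500)[-1] for the latest estimate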
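And here’s roughly what the ‘no reading’ fallback and the height-based pixel-to-metres conversion amount to; the field of view, frame width and function names here are illustrative assumptions rather than the literal qc.py code:

# Sketch: convert down-facing camera macro-block pixel shifts into metres,
# scaled by the GLL height.  FOV and frame width are assumed values.
import math

CAMERA_FOV = math.radians(48.8)   # assumed horizontal field of view
FRAME_WIDTH = 240                 # pixels per frame edge (Zoe's resolution)

def pixels_to_metres(dx_pixels, dy_pixels, height_m):
    # Width of ground covered by one frame at this height...
    ground_width = 2 * height_m * math.tan(CAMERA_FOV / 2)
    # ...hence metres moved per pixel of macro-block shift.
    scale = ground_width / FRAME_WIDTH
    return dx_pixels * scale, dy_pixels * scale

def read_height(gll, previous_height):
    # 'No reflection' shows up as a failed read; reuse the last good value
    # rather than feeding a zero height into the conversion above.
    try:
        return gll.read()   # hypothetical wrapper around the I2C access
    except IOError:
        return previous_height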
Having added and tested all the above sequentially, the net result was failure: a less bad failure than previously, but failure nonetheless; the video tracking lagged in order to avoid the IMU FIFO overflowing. So in the end, I changed Zoe’s video resolution to 240 pixels² @ 10fps (‘H’ is at 320 pixels² @ 10fps), and she can now hover on the grass, which means I can get on with the “face where you’re going” code.
I do think all the other changes listed are valid and useful, and as a result, I’ve updated the code on GitHub.
In passing, I had also been investigating whether the magnetometer could be used to back up pitch, roll and yaw angles long term, but that’s an abject failure; with the props on idle prior to takeoff, it works fine, giving the orientation to feed to the GPS tracking process, but once airborne, the magnetometer values shift by ±40° and vary depending on which way she’s going, even while facing the same direction.
My NEO-M8T is on its way back to France where DroTek want to check it out: they tell me they routinely get 15 satellites there, whereas in the perfect conditions here this morning, I only got 8!
In the meantime, I’m going to do some tinkering on my list of things to do when I have spare time. Before I do that, I’ve dropped the latest working code on to GitHub.
I think the problem with stability was a combination of the weight distribution of Hermione’s new long legs and (perhaps) cold LiPo power. Putting her back to the body layout above, and running the LiPo through a 33Ω resistor when she’s not plugged in (0.5A at a 4S pack’s ≈16V, i.e. about 8W of warming), normal behaviour has resumed. Due to some diagnostics tweaks, the code is updated on GitHub.
One last brain dump before next week’s torture in Disneyland Paris: no, not crashing into inanimate objects; quite the opposite: Simultaneous Localisation And Mapping, i.e. how to map obstacles’ locations in space, initially attempting to avoid them through a random choice of change in direction, mapping both the object locations and the trial-and-error avoidance, and in doing so feeding back into future less-randomised, more-informed direction changes, i.e. AI.
My plan here, as always, ignores everything described about standard SLAM processes elsewhere and does it my way based upon the tools and restrictions I have:
- SLAM processing is carried out by the autopilot process.
- GPS feeds it at 1Hz as per now.
- Sweep feeds it every time a near obstacle is spotted within a few meters – perhaps 5?
- The map is a Python dictionary at 0.5m x 0.5m resolution, indexed by integer (x, y) cells where one unit represents 0.5m (i.e. twice the GPS distance measurement), and whose values are scores (the resolution is kept low due to GPS accuracy and Hermione’s physical size of 1m tip to tip)
- GPS takeoff location = 0,0 on the map
- During the flight, each GPS position is stored in the map location dictionary with a score of +100 points marking out successfully explored locations
- Sweep object detections are also added to the dictionary, up to a limited distance of say 5m (to limit the feed from the Sweep process and to ignore blockages too far away to matter). These score say -1 point each, because there are multiple scans per second and a low-resolution conversion from centimetres to 0.5m cells
- Together these high and low scores mark out clear areas passed through and identified obstructions respectively, with unexplored areas having a zero score in the dictionary.
- Height and yaw are fixed throughout the flight to keep the local Sweep and GPS orientations in sync.
- The direction to travel within the map is towards the highest-scoring next area not yet visited, as defined by the map.
The above processing is very similar to the existing code that processes the down-facing video macro-blocks to guess the most likely direction moved; as such, it shouldn’t be too hard to prototype, and the sketch below shows roughly the shape of it. Initially the map will just be dumped to file so I can check the plausibility of this method in a 3D chart in Excel.
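Here’s a rough sketch of that map as a Python dictionary, using the scores and distances mooted above; the neighbour-picking at the end is just one plausible way of choosing the next direction, not a final design:

# Sketch of the 0.5m-grid map: keys are integer (x, y) cells (1 unit = 0.5m),
# values are scores, as mooted above.
CELL_SIZE = 0.5        # metres per grid cell
VISITED_SCORE = 100    # added when a GPS position passes through a cell
OBSTACLE_SCORE = -1    # added per Sweep detection within range
SWEEP_RANGE = 5.0      # metres; ignore obstacles further away than this

slam_map = {}          # {(x_cell, y_cell): score}; takeoff location is (0, 0)

def to_cell(x_m, y_m):
    # Round metres into 0.5m cells, i.e. double the GPS distance measurement.
    return (int(round(x_m / CELL_SIZE)), int(round(y_m / CELL_SIZE)))

def gps_update(x_m, y_m):
    cell = to_cell(x_m, y_m)
    slam_map[cell] = slam_map.get(cell, 0) + VISITED_SCORE

def sweep_update(obstacle_x_m, obstacle_y_m, distance_m):
    if distance_m <= SWEEP_RANGE:
        cell = to_cell(obstacle_x_m, obstacle_y_m)
        slam_map[cell] = slam_map.get(cell, 0) + OBSTACLE_SCORE

def best_next_cell(current_cell):
    # Head for the adjacent cell with the highest score that hasn't already
    # been visited; unexplored cells default to a score of zero.
    x, y = current_cell
    neighbours = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                  if (dx, dy) != (0, 0)]
    candidates = [n for n in neighbours if slam_map.get(n, 0) < VISITED_SCORE]
    if not candidates:
        return None
    return max(candidates, key=lambda n: slam_map.get(n, 0))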
P.S. For the record, despite autonomous GPS testing being very limited, because the file-based flight plan works as well as or better than the previous version, I’ve uploaded the latest code to GitHub.
“…Whether ’tis easier for the macro-blocks to track
The clovers and daisies of contrasting colour…”
The answer is no, I shouldn’t have mown the lawn. With the kids’ toys moved out of the way, and any ungreen contrasting features slain, there was nothing to distinguish one blade of shorn grass from another and Hermione drifted wildly. Reinstating the kids’ chaos restored high quality tracking over 5m.
The point of the flight was to compare GPS versus macro-block lateral tracking. Certainly over this 5m flight, down-facing video beat GPS hands down:
My best-guess interpretation of the GPS graph is that the flight actually ran from the 2m to the 7m mark diagonally, heading north-west. The camera POV doesn’t include compass data, so it’s correctly showing her flying forwards by 5m. The compass code is not working accurately yet, and it needs more investigation to find out why: it was showing ~90° (i.e. East) rather than the true 45° (i.e. North East) shown by the GPS and a handheld compass.
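For reference, the basic yaw-from-magnetometer sum is just an atan2 over the level X and Y readings, so a consistent offset like this is more likely to be down to calibration, tilt or axis orientation than the maths itself. A minimal sketch, assuming a level, calibrated sensor and ignoring magnetic declination:

# Sketch: heading from level magnetometer readings.  Axis orientation is
# assumed; tilt compensation and declination are deliberately omitted.
import math

def heading_degrees(mag_x, mag_y):
    heading = math.degrees(math.atan2(mag_y, mag_x))
    return heading % 360   # 0 = magnetic north, 90 = east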
I’ve done some more refinements to the scheduling of the sensor reads, and also to the accuracy of the GPS data streamed from the GPS process. It’s worth viewing this graph full screen: each spike shows the time in seconds between motion processing loops, i.e. the time spent processing other sensors; 10ms indicates just IMU data was processed. The fact that no loop takes more than 0.042s* even with full diagnostics running means I could push the sampling rate back up to 1kHz (it’s at 500Hz at the moment). More importantly, it shows processing is nicely spread out: each sensor is getting its fair share of the processing and nobody is hogging the limelight.
As a result, I’ve updated the code on GitHub.
*42ms is the point where the IMU FIFO overflows at 1kHz sampling: 512-byte FIFO / 12 bytes per sample / 1kHz sampling rate ≈ 42ms.
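For completeness, the arithmetic behind that figure, and the headroom at the current 500Hz:

# The FIFO overflow headroom mentioned above.
FIFO_BYTES = 512.0        # IMU FIFO size in bytes
SAMPLE_BYTES = 12.0       # bytes per accelerometer + gyro sample

def overflow_time_s(sampling_rate_hz):
    return (FIFO_BYTES / SAMPLE_BYTES) / sampling_rate_hz

print(overflow_time_s(1000))   # ~0.043s at 1kHz, the ~42ms quoted above
print(overflow_time_s(500))    # ~0.085s at the current 500Hz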
First, the result: an autonomous 10m linear flight forwards:
You can see her stability degrade as she leaves the contrasting shadow area cast by the tree branches in the sunshine. At the point chaos broke loose, she believed she had reached her 10m target and thus was descending; she’s not far wrong: the start and end points are the two round stones placed a measured 10m apart to within a few centimetres.
So here’s what’s changed in the last week:
As a result of all the above, I’ve updated GitHub.
The sun wasn’t shining brightly, so there was no high contrast on the lawn; the lawn had also been mowed, removing the contrasting grass clumps. Yet she still made a great attempt at a 1m square. I think this is about as good as she can get, so it’s time for me to move on to adding compass, GPS and object avoidance. The code has been updated on GitHub.
Daisies are lacking yet this spring, and having mown the lawn yesterday, features are hard to find for the video lateral tracking. So I think this is a pretty good 37s hover. In fact, I think it’s as good as it can be until the daisies start sprouting:
This is with a frame size of 640² pixels. There’s a check in the code which reports whether the processing keeps up with the video frame rate. At 640² it does; I tried 800² and 720², but the code failed to keep up with the 20fps video frame rate.
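The check itself is just bookkeeping along these lines (the names here are illustrative, not the qc.py ones): count frames processed against the frames the camera should have delivered by now.

# Sketch: report whether macro-block processing keeps up with the camera.
import time

class FrameRateCheck:
    def __init__(self, fps=20):
        self.fps = fps
        self.start = time.time()
        self.processed = 0

    def frame_done(self):
        self.processed += 1
        expected = (time.time() - self.start) * self.fps
        if expected - self.processed > self.fps:   # more than a second behind
            print("WARNING: video processing %d frames behind" %
                  int(expected - self.processed))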
As a result, I’ve uploaded the changes to GitHub. There’s work-in-progress code there for calibrating the compass “calibrateCompass()”, although that’s turning out to be a right PITA. I’ll explain more another time.
As a side note, my Mavic uses two forward-facing cameras to track horizontal movement stereoscopically, combined with GPS, a corresponding ground-facing pair of cameras and IMU accelerometer integration; yet if you watch the frisbee / baseball bat to the left, even the Mavic drifts.
Up to now, I’ve deferred adding code to GitHub until a particular development phase is over. However, so much has changed in the past few months, I ought to share. Features:
- X8 optional configuration
- more precise and efficient scheduling using select() allowing for extra sensors…
- LEDDAR – fully tested
- PX4FLOW – tested to the extent the quality of the PX4FLOW allows
- ViDAR – testing in progress
- Garmin LIDAR-Lite V3 – arrival imminent
- Compass – tested but unused except for logging
- Fusion – tested, but each sensor source requires different fusion parameters
The compass function is unused except for logging. The ViDAR and Fusion features require at least a height sensor and further calibration. Therefore, I strongly recommend setting
self.camera_installed = False
unless you want to see how well it isn’t working yet.
You can enable logging of the ViDAR stats, without including them in the Fusion, by setting the above to True and also setting these two variables to False:
# Set the flags for horizontal distance and velocity fusion
hdf = False
hvf = False
This code comes with absolutely no warranty whatsoever – even less than it normally does. Caveat utilitor.