Fusion flaw

I collected some diagnostics to start investigating the wobbles shown in the previous post’s video:

[Plot: Fused velocities]

It’s fairly clear the fusion bias is still strongly towards the accelerometer. In a way I’m glad, as it makes my next steps very clear.

4 thoughts on “Fusion flaw”

  1. How interesting. Sorry if I’m a nuisance with the questions. I still have a few! I want to build my own Tricopter with a bunch of sensors (in my mind, a Tricopter is more stable in the air than a Quad, although I’m not sure this is the case because I see Quads out there more than Tris, and haven’t worked out the pros/cons).

    1. If you add an additional accelerometer or LiDAR, and use the same merging technique, does the estimate become even better (more precise?) and higher resolution (sample-rate-wise)? Are higher resolution and more precise estimates in fact desirable, or is the fusion of two enough for this purpose (flying a Tri/Quad autonomously)? Is the accelerometer analogue or digital? If digital, what is its sampling rate? (Or does it become digital by virtue of the RPi sampling it?)

    2. From my limited knowledge of how sensors work, the estimate of, say, altitude usually comes back with a Gaussian error estimate. Is this important at all when dealing with the control sequence of, say, the rotors? Since I can’t see “error intervals” in the estimates of the LiDAR or accelerometer or fusion, I imagine they mustn’t be in this application. But I imagine they must be for the filtering technique you use? Is this true? Are they inputs for the fusion technique, so that the fused output is a function of estimate 1, estimate 2, error 1 and error 2?

    3. I’m trying to work out the feedback mechanisms to the rotors. Did you use control techniques to accelerate or decelerate the rotors, depending on, say, the altitude fusion from the sensors (as part of a feedback mechanism)? Or can I get away with using, say, capacitors to smooth out digital changes in voltage? For example, I may have the sequence 1 0 0 0 0 0 … because at the first position I detected I was “over” the flight plan altitude, and so I send “off” switches thereafter until it is corrected.

    Again, sorry for so many questions! Any orientation you can give me will be of great help!

    • You’re not being a nuisance. It’s very lonely blogging when there’s no sign anyone is interested! I went for a quad simply because I didn’t even know about tris when I started this. Do tris have two props on the tail (one CW, one CCW) to control yaw, with the tri always flying forwards, controlled mostly by the front pair?

      In the depths, the accelerometer (InvenSense MPU-9250) is analogue, but it’s sampled/digitized so I read it over I2C. Sampling is at 1kHz, which is high enough. The problem is the accelerometer covers levels of up to >2G in 2 bytes (65536) on takeoff, so resolution is 1mm/s/s. Sounds small until you start double integrating it to get distances; see the sketch below. That’s why you need a range finder (URF or LiDAR) for longer-term accuracy.
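      To put a number on that, here’s a back-of-envelope Python sketch (not code from Quadcopter.py) of how a constant 1mm/s/s error double-integrates into a sizeable distance error, following 0.5 * b * t^2:

        # Hypothetical sketch: a constant accelerometer bias b grows into a
        # position error of 0.5 * b * t^2 after double integration.
        bias = 0.001                # 1mm/s/s resolution error, in m/s/s
        dt = 0.001                  # 1kHz sample interval, in seconds

        velocity_error = 0.0        # m/s
        distance_error = 0.0        # m
        for _ in range(30000):      # 30 seconds of flight
            velocity_error += bias * dt             # first integration
            distance_error += velocity_error * dt   # second integration

        print("drift after 30s: %.2fm" % distance_error)   # ~0.45m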

      I’ve not looked at Gaussian error estimation at all. So far, I’ve not had any need to.

      The motors are controlled by off-the-shelf ESCs – PWM to 3-phase motor power converters. The PWM signal is 1-2ms at 1us resolution: a 1ms pulse is stop, a 2ms pulse is full power. The PWM value needed on the 4 props is the output of the feedback loop from the sensors. Commonly a PID (proportional, integral, differential) controller is used; these need hand tuning rather than having to model the QC accurately. A rough flavour of the loop is sketched below.
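      Something like this minimal sketch (names and gains are placeholders for illustration, not the real code in Quadcopter.py):

        # Minimal PID sketch; kp/ki/kd are placeholder values needing hand tuning.
        class PID:
            def __init__(self, kp, ki, kd):
                self.kp, self.ki, self.kd = kp, ki, kd
                self.integral = 0.0
                self.last_error = 0.0

            def update(self, target, actual, dt):
                error = target - actual
                self.integral += error * dt
                derivative = (error - self.last_error) / dt
                self.last_error = error
                return (self.kp * error + self.ki * self.integral
                        + self.kd * derivative)

        alt_pid = PID(kp=100.0, ki=10.0, kd=5.0)
        fused_altitude = 0.95       # metres, from the sensor fusion
        correction = alt_pid.update(target=1.0, actual=fused_altitude, dt=0.001)
        pulse_us = max(1000, min(2000, int(1500 + correction)))  # clamp to 1-2ms ESC range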

      Have a look at my code to see more: https://github.com/PiStuffing/Quadcopter/blob/master/Quadcopter.py

  2. Hi! By “fusion” I think you mean you (mathematically) filter the inputs in some way, but it is not clear to me at all how you accomplish this. Would the bias toward the accelerometer be because the accelerometer readings are more precise, so your fusion leans in that direction? How would you judge accuracy vs precision using the fusion or the inputs from the sensors?

    Thanks!

    • Yes, what I’m calling fusion is more correctly called a complementary filter, but fusion is a much cooler word and shorter too. Essentially it takes two sources of the same information and merges them together. In this case it’s the distance from double-integrating the acceleration, merged with the LEDDAR ground-facing LiDAR. The merge bias is based on time increments: short term, the accelerometer readings dominate, but over the longer term, any error in the acceleration double-integrates up into a big distance error. The LiDAR, on the other hand, is very ‘lumpy’ as it’s only sampled at 10Hz, but is always accurate. So together we get the best of both: high-resolution, accurate samples over both the short and long term. A sketch of the idea follows.
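      Roughly like this Python sketch (the time constant and variable names are assumptions for illustration, not the exact code in Quadcopter.py):

        # Complementary filter sketch: the accelerometer path dominates over
        # short timescales, the LiDAR reading dominates over long timescales.
        tau = 0.5                   # crossover time constant (assumed value)
        dt = 0.001                  # 1kHz accelerometer sample interval

        def fuse(height, velocity, accel, lidar_height):
            velocity += accel * dt              # integrate acceleration once...
            predicted = height + velocity * dt  # ...and again to get distance
            alpha = tau / (tau + dt)
            # Mostly trust the short-term prediction, but keep leaning the
            # estimate back towards the slow-but-accurate LiDAR reading.
            return alpha * predicted + (1.0 - alpha) * lidar_height, velocity

        height, velocity = 0.0, 0.0
        lidar_height = 0.1          # latest 10Hz LiDAR sample, refreshed elsewhere
        for accel in [0.2] * 1000:  # one second of fake vertical acceleration
            height, velocity = fuse(height, velocity, accel, lidar_height)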
