I think I’ve finally worked out how the zero-g offset drift against temperature needs doing – or in fact, why it doesn’t.
But first I feel the need to share this set of figures that got me there:
To get the values for these, I booted HoG in a variety of temperature environments and read the zero-g offsets. “Doesn’t look too bad”, you may think? Except the environments ranged from a beer fridge packed with ice-blocks through to our warmest room in the house with the heating on full – the graph should span a temperature range of more than 20°C, and yet it shows only 7°C. And 7°C is about the temperature rise from chip-boot to 1kHz sampling.
I’ve had concerns previously about how this sensor works – the spec’s absolute-temperature equation uses “ambient” without giving a concise definition of what that means. But assuming ‘ambient’ is an arbitrary value measured at chip-boot, then regardless of the environment (e.g. Arctic vs. Sahara), the zero-g offsets need to be calibrated against the difference between the current temperature sensor output and the ‘ambient’ reading taken when the chip was first powered up – in this case 7°C.
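If that assumption holds, the correction is a function of the temperature delta from boot, not of absolute temperature. A minimal sketch of the idea in Python – function and parameter names are mine for illustration, not HoG’s actual code, and the drift coefficient is the spec’s worst case:

```python
# Sketch: zero-g offset correction driven by the temperature delta
# from the 'ambient' reading taken at chip power-up.
# DRIFT_PER_DEG is the spec's worst-case drift of +/-1.5mg/degC,
# expressed (as in the post) as roughly 1.5 cm/s/s per degC.
DRIFT_PER_DEG = 1.5  # cm/s/s per degC - spec worst case


def corrected_offset(calibrated_offset, temp_now, temp_ambient_at_boot):
    """Adjust a calibrated zero-g offset for temperature drift.

    calibrated_offset:     offset measured during calibration (cm/s/s)
    temp_now:              current temperature sensor output (degC)
    temp_ambient_at_boot:  'ambient' temperature read at power-up (degC)
    """
    delta = temp_now - temp_ambient_at_boot
    return calibrated_offset + DRIFT_PER_DEG * delta


# e.g. the 7degC rise from chip-boot to 1kHz sampling:
print(corrected_offset(0.0, 32.0, 25.0))  # worst case: 10.5 cm/s/s
```

The point is that the same function works whether boot happened in the fridge or the hot room – only the delta matters.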
And that means that as long as temperature doesn’t drift too much during a flight (i.e. power her up in the environment she needs to fly in), the offsets are nearly static (according to the spec, ±1.5mg/°C or 1.5cm/s/s/°C), and can be read in any temperature-stable environment and used in any other environment.
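To put that spec number in context, here’s my back-of-envelope arithmetic, assuming (my assumption, not the spec’s) a 2°C in-flight drift over a 10-second flight:

```python
# Back-of-envelope: how much does worst-case offset drift matter
# if temperature moves a little during a flight?
drift_coeff = 1.5       # cm/s/s per degC - spec worst case
in_flight_drift = 2.0   # degC - assumed for illustration
flight_time = 10.0      # seconds - assumed for illustration

accel_error = drift_coeff * in_flight_drift      # 3.0 cm/s/s
velocity_error = accel_error * flight_time       # 30.0 cm/s after 10s
print(accel_error, velocity_error)
```

So a couple of degrees of drift mid-flight costs roughly 30cm/s of integrated velocity error after ten seconds – small, but not nothing, hence the “as long as temperature doesn’t drift too much” caveat.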
I think that means I need to change my calibration method to read the ambient at boot, and then track all changes from that ambient while doing a 10s 1kHz sampling run. I’ll try that in my home Arctic and Sahara and see if it holds true.
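The revised calibration run might look something like this sketch – `read_accel` and `read_temp` are hypothetical sensor-reading helpers standing in for whatever the real I2C reads are:

```python
import time


def calibrate(read_accel, read_temp, duration_s=10.0, rate_hz=1000):
    """Hypothetical calibration run: record the 'ambient' temperature
    at the start, then average raw accelerometer output over a
    10s, 1kHz run while tracking how far the temperature drifts
    from that ambient."""
    ambient = read_temp()  # 'ambient' = temperature at power-up
    samples = int(duration_s * rate_hz)
    acc_sum = [0.0, 0.0, 0.0]
    max_delta = 0.0
    for _ in range(samples):
        ax, ay, az = read_accel()
        acc_sum[0] += ax
        acc_sum[1] += ay
        acc_sum[2] += az
        max_delta = max(max_delta, abs(read_temp() - ambient))
        time.sleep(1.0 / rate_hz)
    offsets = [s / samples for s in acc_sum]
    return offsets, ambient, max_delta
```

Returning `max_delta` alongside the offsets makes it obvious whether the run stayed temperature-stable enough for the offsets to be trusted.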