OK, so I think I’ve got my head around this. These are the readings taken from the beer fridge packed with ice blocks.
Clearly the absolute temperature readings are rubbish, but also clearly, the temperature range of the test is only about 1.5°C. If you look at yesterday’s graph, the numbers above are the tight little clusters of samples around the 25–26°C mark. The clusters are tight because the ice-laden beer fridge was able to hold the chip temperature stable while the testing went on.
Outside of such a temperature-controlled environment, the chip temperature changes, and so you get the spread of offsets shown in the graph.
So what does this mean?
- First, for every flight, read the garbage “ambient” value as soon as the chip is powered up – ideally this means booting in the same environment the flight is going to happen in.
- Throughout the flight, track the change in temperature – i.e. (raw temperature – “ambient”) – and use this value to work out the change in the offsets due to temperature drift.
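The steps above can be sketched roughly like this. Everything here is a hypothetical illustration, not HoG's actual code: the names (`DriftCompensator`, `OFFSET_SLOPE`) and the assumption that the offset drifts linearly with temperature are mine.

```python
# Hypothetical sketch of ambient-relative drift compensation.
# Assumes (for illustration) a linear relationship between temperature
# drift and offset drift; the slope would come from the calibration step.

OFFSET_SLOPE = 0.12  # hypothetical: offset units per °C of drift from ambient


class DriftCompensator:
    def __init__(self, ambient_raw_temp):
        # "ambient" = the garbage absolute reading captured right after power-up.
        # Its absolute value is meaningless; only changes relative to it matter.
        self.ambient = ambient_raw_temp

    def corrected_offset(self, base_offset, raw_temp):
        # Track the *change* from ambient, not the absolute temperature.
        delta = raw_temp - self.ambient
        return base_offset + OFFSET_SLOPE * delta


# e.g. booted at a (garbage) reading of 25.0, now reading 26.0:
comp = DriftCompensator(ambient_raw_temp=25.0)
corrected = comp.corrected_offset(base_offset=1.0, raw_temp=26.0)
```

The point is that the absolute reading never appears in the correction, only the delta from boot, which is why the garbage calibration of the sensor doesn't matter.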
The next step is calibrating offsets against the difference from ambient. It’s going to take something like booting HoG, reading ambient, and then having her hard-loop reading gravity in that environment while periodically logging the change in (temperature – ambient) versus offset.
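For the analysis side of that logging run, fitting a line through the logged (temperature delta, offset) pairs is one plausible approach. This is just a minimal least-squares sketch under that assumption; the sample format is invented for illustration.

```python
# Hypothetical post-processing of a calibration log: fit a straight line
# offset = slope * delta_t + intercept through the logged pairs, where
# delta_t = (raw temperature - ambient-at-boot).

def fit_offset_slope(samples):
    """samples: list of (delta_t, offset) pairs logged during the hard loop.
    Returns (slope, intercept) of the least-squares line."""
    n = len(samples)
    mean_x = sum(d for d, _ in samples) / n
    mean_y = sum(o for _, o in samples) / n
    sxx = sum((d - mean_x) ** 2 for d, _ in samples)
    sxy = sum((d - mean_x) * (o - mean_y) for d, o in samples)
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x


# Fabricated example log: offset drifting 0.1 units per °C above ambient.
log = [(0.0, 0.50), (0.5, 0.55), (1.0, 0.60), (1.5, 0.65)]
slope, intercept = fit_offset_slope(log)
```

The fitted slope is exactly the `OFFSET_SLOPE`-style constant the in-flight correction would need, so the fridge experiment and the flight code share one number.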
I’ll report when I’ve got some data.