My research into RANSAC so far hasn’t been fruitful: all the examples I’ve found use numpy, and one of my fundamentals is that the code is mine to the greatest extent possible. The examples are also all about finding a line within a batch of noisy data, whereas I’m after a point within the noise, not a line. I’m not dropping RANSAC, but I’m going to try my own method first.

Like RANSAC, it ignores the SAD values; instead it builds a dictionary, keyed by the (X, Y) vector, where each value is a count of the macro-blocks matching that vector key – i.e. the darker spots on the previous post’s graphs.
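As a minimal sketch of that first step (the vector values here are made up for illustration), Python’s `collections.Counter` builds exactly this kind of histogram, dropping the uninformative (0, 0) vectors along the way:

```python
from collections import Counter

# Hypothetical sample of macro-block (x, y) motion vectors for one frame.
# (0, 0) entries are skipped, as in the real code - they carry no motion.
vectors = [(1, 2), (1, 2), (1, 3), (0, 0), (2, 2), (1, 2), (0, 0)]

vector_dict = Counter(v for v in vectors if v != (0, 0))

print(vector_dict)  # counts per distinct vector, e.g. (1, 2) appears 3 times
```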

Having built the dictionary, it’s walked: for each vector key, the surrounding 8 neighbours are checked and their counts added to the centre’s count. A running maximum of this clustered score is tracked, along with the vector(s) that produced it. Once the walk is complete, the vector with the final maximum count is used as the lateral motion tracking vector.
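The walk itself boils down to a 3×3 neighbourhood sum over the sparse dictionary. Here’s a sketch with made-up counts (a tidy variant of what the full listing below does – the centre plus its 8 neighbours is just the 3×3 sum):

```python
# Hypothetical vector -> count dictionary from the histogram step
vector_dict = {(1, 1): 1, (1, 2): 3, (1, 3): 2, (5, 5): 4}

best_score = 0
best_vectors = []
for (vx, vy) in vector_dict:
    # Sum the vector's own count with its 8 neighbours' counts;
    # missing neighbours contribute 0.
    score = sum(vector_dict.get((vx + dx, vy + dy), 0)
                for dx in range(-1, 2) for dy in range(-1, 2))
    if score > best_score:
        best_score, best_vectors = score, [(vx, vy)]
    elif score == best_score:
        best_vectors.append((vx, vy))

print(best_score, best_vectors)  # (1, 2) wins: 3 + 1 + 2 = 6
```

The lone (5, 5) vector scores only its own count of 4, so the tight cluster around (1, 2) beats it even though (5, 5) is the single most common vector.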

It’s efficient code, and will pick the centre of the most concentrated cluster.

Here’s the result, plotting all the macro-block vectors’ density and the final result.

It’s better by a long way than the simple averaging I used before. The next step, obviously, is to deploy it on Zoe.

Here’s the code if you’re interested:

# This sits inside the per-frame processing loop, hence the 'continue's.
vector_dict = {}
start_time = time.time()

#-----------------------------------------------------------------------------------
# Build the vector dictionary of scores: key = (x, y) vector, value = the number
# of macro-blocks reporting that vector
#-----------------------------------------------------------------------------------
for ii in range(0, mbs_per_frame * 3, 3):
    idy = iframe[ii]
    idx = iframe[ii + 1]
    sad = iframe[ii + 2]

    # Skip the (0, 0) vectors - they carry no motion information
    if idx == 0 and idy == 0: # and sad == 0:
        continue

    diags.write("%d, %d\n" % (idx, idy))

    if (idx, idy) in vector_dict:
        vector_dict[(idx, idy)] += 1
    else:
        vector_dict[(idx, idy)] = 1

if len(vector_dict) == 0:
    print "VD empty - barf"
    continue

#-----------------------------------------------------------------------------------
# Now walk the dictionary entries, adding the neighbours' counts to each centre
# count to find the maxima of the clusters.  The score is accumulated in a local
# variable rather than back into the dictionary, so that counts aren't corrupted
# mid-walk for vectors yet to be processed.
#-----------------------------------------------------------------------------------
best_score = 0
best_vectors = []
for vector_x, vector_y in vector_dict.keys():
    score = vector_dict[(vector_x, vector_y)]
    for ii in range(-1, 2):
        for jj in range(-1, 2):
            if ii == 0 and jj == 0:
                continue
            neighbour = (vector_x + ii, vector_y + jj)
            if neighbour in vector_dict:
                score += vector_dict[neighbour]

    if score > best_score:
        best_score = score
        best_vectors = [(vector_x, vector_y)]
        print "New: ", best_vectors
    elif score == best_score:
        best_vectors.append((vector_x, vector_y))
        print "Append:", best_vectors

print "Best score: %d" % best_score
print "Final: ", best_vectors

#-----------------------------------------------------------------------------------
# Now we've collected the best-score cluster vectors in the frame, average them,
# weighted by each vector's own macro-block count
#-----------------------------------------------------------------------------------
sum_score = 0
sum_x = 0.0
sum_y = 0.0
for vector_x, vector_y in best_vectors:
    print "Vector %d, %d" % (vector_x, vector_y)
    vector_score = vector_dict[(vector_x, vector_y)]
    sum_x += vector_x * vector_score
    sum_y += vector_y * vector_score
    sum_score += vector_score

best_x = sum_x / sum_score
best_y = sum_y / sum_score
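To illustrate that final averaging step with made-up numbers: if two vectors tie on cluster score, the result is their centroid, weighted by each vector’s own macro-block count:

```python
# Hypothetical tied winners and their raw counts from the dictionary
best_vectors = [(1, 2), (1, 3)]
vector_dict = {(1, 2): 3, (1, 3): 1}

sum_score = sum(vector_dict[v] for v in best_vectors)
best_x = sum(vx * vector_dict[(vx, vy)] for vx, vy in best_vectors) / sum_score
best_y = sum(vy * vector_dict[(vx, vy)] for vx, vy in best_vectors) / sum_score

print(best_x, best_y)  # 1.0 2.25 - pulled towards the more popular (1, 2)
```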