A neuron’s intercept can roughly be thought of as the selectivity of the neuron: the higher you set the intercept, the less likely the neuron is to be activated by a given input. I remember @arvoelke mentioning something about using `nengo.dists.CosineSimilarity(input_dimensions)`, but I can’t remember the derivation for this.
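The selectivity intuition is easy to check empirically. Here is a small Monte Carlo sketch (my own illustration, not from the Nengo docs): a rate neuron with encoder `e` and intercept `c` is active for a unit-length input `x` exactly when `dot(e, x) > c`, so raising `c` shrinks the fraction of random inputs that activate it.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16       # input dimensionality
n = 100_000  # number of random test inputs

# Random unit-length inputs and one fixed unit-length encoder.
x = rng.standard_normal((n, d))
x /= np.linalg.norm(x, axis=1, keepdims=True)
e = rng.standard_normal(d)
e /= np.linalg.norm(e)

# The neuron is active when its input similarity dot(e, x) exceeds
# the intercept c; measure how often that happens for each c.
active = {c: np.mean(x @ e > c) for c in (0.0, 0.2, 0.4)}
for c, frac in active.items():
    print(f"intercept={c:.1f}: active for {frac:.1%} of random inputs")
```

With an intercept of 0, the neuron fires for half of all random unit inputs; each increase in the intercept cuts that fraction further.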

Does the documentation not answer your question?

`help(nengo.dists.CosineSimilarity)`

`class nengo.dists.CosineSimilarity(dimensions)`

Distribution of the cosine of the angle between two random vectors.

The “cosine similarity” is the cosine of the angle between two vectors, which is equal to the dot product of the vectors, divided by the L2-norms of the individual vectors. When these vectors are unit length, this is then simply the distribution of their dot product.

This is also equivalent to the distribution of a single coefficient from a unit vector (a single dimension of `UniformHypersphere(surface=True)`). Furthermore, `CosineSimilarity(d+2)` is equivalent to the distribution of a single coordinate from points uniformly sampled from the d-dimensional unit ball (a single dimension of `UniformHypersphere(surface=False).sample(n, d)`). These relationships have been detailed in [Voelker2017].

This can be used to calculate an intercept c = ppf(1 - p) such that dot(u, v) >= c with probability p, for random unit vectors u and v. In other words, a neuron with intercept ppf(1 - p) will fire with probability p for a random unit length input.
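For reference, the same quantile can be computed without Nengo. A single coordinate x of a uniformly random unit vector in d dimensions has density proportional to (1 - x²)^((d-3)/2), which means (x + 1)/2 follows a Beta((d-1)/2, (d-1)/2) distribution. A sketch of the ppf using SciPy (the function name is my own, not Nengo API):

```python
import numpy as np
from scipy.stats import beta

def cosine_similarity_ppf(p, d):
    """Inverse CDF of the cosine of the angle between two random
    unit vectors in d dimensions.

    (x + 1) / 2 ~ Beta((d - 1) / 2, (d - 1) / 2) for a single
    coordinate x of a random unit vector, so transform the Beta
    ppf back onto [-1, 1].
    """
    a = (d - 1) / 2
    return 2 * beta.ppf(p, a, a) - 1

# Intercept such that a neuron fires with probability ~10% for a
# random unit-length input:
d = 64
p_fire = 0.1
c = cosine_similarity_ppf(1 - p_fire, d)
print(f"intercept for {p_fire:.0%} firing probability in {d}-D: {c:.3f}")
```

A quick sanity check: for d = 3 the coordinate distribution is uniform on [-1, 1] (Archimedes’ hat-box theorem), so `cosine_similarity_ppf(0.75, 3)` should equal 0.5.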

It does, I just didn’t think to check there. Thanks.